[jira] [Commented] (CASSANDRA-13174) Indexing is allowed on Duration type when it should not be

2017-02-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879513#comment-15879513
 ] 

Tyler Hobbs commented on CASSANDRA-13174:
-

+1 on the patch.  Nice work on making thorough tests and good error messages!
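The patch presumably rejects index creation (and filtering) on a type that cannot be compared, with a clear error message, instead of failing at read time as shown in the stack trace quoted below. A minimal sketch of that kind of up-front check; the {{CqlType}} interface and all names here are hypothetical, not Cassandra's actual API:

```java
public class IndexValidation {
    // Hypothetical stand-in for a CQL type descriptor; Cassandra's real
    // AbstractType hierarchy is richer than this.
    interface CqlType {
        boolean isComparable();
    }

    // Reject indexing on non-comparable types at validation time, so the
    // failure is a clear client error rather than a server-side exception.
    static void validateIndexable(String column, CqlType type) {
        if (!type.isComparable()) {
            throw new IllegalArgumentException(
                "Secondary indexes are not supported on column '" + column
                + "' because its type does not support comparisons");
        }
    }
}
```

Failing at statement-validation time turns the read-time {{UnsupportedOperationException}} into an error the client can act on.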

> Indexing is allowed on Duration type when it should not be
> --
>
> Key: CASSANDRA-13174
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13174
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.10
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
> Fix For: 3.11.x, 4.x
>
>
> Looks like secondary indexing is allowed on duration-type columns. Since 
> comparisons are not possible for the duration type, indexing on it should 
> also be invalid.
> 1) 
> {noformat}
> CREATE TABLE duration_table (k int PRIMARY KEY, d duration);
> INSERT INTO duration_table (k, d) VALUES (0, 1s);
> SELECT * from duration_table WHERE d=1s ALLOW FILTERING;
> {noformat}
> The above throws an error: 
> {noformat}
> WARN  [ReadStage-2] 2017-01-31 17:09:57,821 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-2,10,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2591)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  ~[main/:na]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.UnsupportedOperationException: null
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:174)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:160) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:204)
>  ~[main/:na]
>   at org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:201) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:719)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:324)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) 
> ~[main/:na]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:44) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:174) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:140)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:307)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:310)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:333) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1884)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
>  ~[main/:na]
>   ... 5 common frames omitted
> {noformat}
> 2)
> Similarly, if an index is created on the duration column:
> {noformat}
> CREATE INDEX d_index ON simplex.duration_table (d);
> SELECT * from duration_table WHERE d=1s;
> {noformat}
> results in:
> {noformat}
> WARN  [ReadStage-2] 

[jira] [Updated] (CASSANDRA-13174) Indexing is allowed on Duration type when it should not be

2017-02-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13174:

Status: Ready to Commit  (was: Patch Available)

> Indexing is allowed on Duration type when it should not be
> --
>
> Key: CASSANDRA-13174
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13174
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.10
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
> Fix For: 3.11.x, 4.x
>
>
> Looks like secondary indexing is allowed on duration-type columns. Since 
> comparisons are not possible for the duration type, indexing on it should 
> also be invalid.
> 2)
> Similarly, if an index is created on the duration column:
> {noformat}
> CREATE INDEX d_index ON simplex.duration_table (d);
> SELECT * from duration_table WHERE d=1s;
> {noformat}
> results in:
> {noformat}
> WARN  [ReadStage-2] 2017-01-31 17:12:00,623 
> AbstractLocalAwareExecutorService.java:167 - 

[jira] [Updated] (CASSANDRA-13108) Uncaught exeption stack traces not logged

2017-02-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13108:

   Resolution: Fixed
Fix Version/s: 3.11.0
   3.0.12
   2.2.10
   2.1.18
   Status: Resolved  (was: Testing)

Test results look good, so I've committed this to 2.1 as 
{{3c2f87610de0f11071f3d5c005c1d14c06c832f8}} and merged up to 2.2, 3.0, 3.11, 
and trunk.  Thanks for the patch!

> Uncaught exeption stack traces not logged
> -
>
> Key: CASSANDRA-13108
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13108
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Trivial
> Fix For: 2.1.18, 2.2.10, 3.0.12, 3.11.0
>
>
> In a refactoring to parameterized logging we lost the stack traces of 
> uncaught exceptions. This means, apart from the thread, I have no idea where 
> they come from e.g.
> {code}
> ERROR [OptionalTasks:1] 2017-01-06 12:53:14,204 CassandraDaemon.java:231 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.NullPointerException: null
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13108) Uncaught exeption stack traces not logged

2017-02-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13108:

Status: Testing  (was: Patch Available)

> Uncaught exeption stack traces not logged
> -
>
> Key: CASSANDRA-13108
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13108
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Trivial
>
> In a refactoring to parameterized logging we lost the stack traces of 
> uncaught exceptions. This means, apart from the thread, I have no idea where 
> they come from e.g.
> {code}
> ERROR [OptionalTasks:1] 2017-01-06 12:53:14,204 CassandraDaemon.java:231 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.NullPointerException: null
> {code}





[jira] [Commented] (CASSANDRA-13108) Uncaught exeption stack traces not logged

2017-02-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876876#comment-15876876
 ] 

Tyler Hobbs commented on CASSANDRA-13108:
-

Pending test runs on 2.1 (skipping tests on 2.2+ since the change is quite 
trivial):

||branch||testall||dtest||
|[CASSANDRA-13108|https://github.com/thobbs/cassandra/tree/CASSANDRA-13108]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13108-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13108-dtest]|

> Uncaught exeption stack traces not logged
> -
>
> Key: CASSANDRA-13108
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13108
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Trivial
>
> In a refactoring to parameterized logging we lost the stack traces of 
> uncaught exceptions. This means, apart from the thread, I have no idea where 
> they come from e.g.
> {code}
> ERROR [OptionalTasks:1] 2017-01-06 12:53:14,204 CassandraDaemon.java:231 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.NullPointerException: null
> {code}





[jira] [Updated] (CASSANDRA-13185) cqlsh COPY doesn't support dates before 1900 or after 9999

2017-02-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13185:

Status: Ready to Commit  (was: Patch Available)

> cqlsh COPY doesn't support dates before 1900 or after 9999
> --
>
> Key: CASSANDRA-13185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13185
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Stefania
> Fix For: 3.0.x, 3.11.x
>
>
> Although we fixed this problem for standard queries in CASSANDRA-10625, it 
> still exists for COPY.  In CASSANDRA-10625, we replaced datetimes outside of 
> the supported time range with a simple milliseconds-since-epoch long.  We may 
> not want to use the same solution for COPY, because we wouldn't be able to 
> load the same data back in through COPY.  Having consistency in the format of 
> values and support for loading exported data seems more important for COPY.





[jira] [Commented] (CASSANDRA-13185) cqlsh COPY doesn't support dates before 1900 or after 9999

2017-02-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876752#comment-15876752
 ] 

Tyler Hobbs commented on CASSANDRA-13185:
-

bq. Cqlsh COPY is already capable of importing dates as milliseconds from the 
epoch, this is the fallback in case the date cannot be parsed.

Okay, awesome.  The C* patch looks perfect to me, then.  For some reason, the 
3.11 and trunk tests failed to run.  I've restarted them, and if they look 
good, then I think this is ready to commit.
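The fallback described above -- dates outside the supported range exported as milliseconds since the epoch, and epoch milliseconds accepted back on import -- can be sketched roughly like this. The helper names and the 1900/9999 cutoffs are illustrative, not cqlsh's actual code:

```java
import java.time.Instant;
import java.time.ZoneOffset;

public class EpochFallback {
    // Export: timestamps in the representable 1900-9999 year range use the
    // normal ISO-8601 text form; anything outside falls back to epoch millis.
    static String export(Instant ts) {
        int year = ts.atZone(ZoneOffset.UTC).getYear();
        if (year < 1900 || year > 9999) {
            return Long.toString(ts.toEpochMilli());
        }
        return ts.toString();
    }

    // Import: try the epoch-milliseconds fallback first, otherwise parse as
    // an ISO-8601 timestamp -- so exported data always round-trips.
    static Instant parse(String field) {
        try {
            return Instant.ofEpochMilli(Long.parseLong(field));
        } catch (NumberFormatException e) {
            return Instant.parse(field);
        }
    }
}
```

The key property is the round trip: whatever {{export}} produces, {{parse}} can load again, which is the consistency requirement discussed above.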

> cqlsh COPY doesn't support dates before 1900 or after 9999
> --
>
> Key: CASSANDRA-13185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13185
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Stefania
> Fix For: 3.0.x, 3.11.x
>
>
> Although we fixed this problem for standard queries in CASSANDRA-10625, it 
> still exists for COPY.  In CASSANDRA-10625, we replaced datetimes outside of 
> the supported time range with a simple milliseconds-since-epoch long.  We may 
> not want to use the same solution for COPY, because we wouldn't be able to 
> load the same data back in through COPY.  Having consistency in the format of 
> values and support for loading exported data seems more important for COPY.





[jira] [Updated] (CASSANDRA-13185) cqlsh COPY doesn't support dates before 1900 or after 9999

2017-02-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13185:

Reviewer: Tyler Hobbs

> cqlsh COPY doesn't support dates before 1900 or after 9999
> --
>
> Key: CASSANDRA-13185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13185
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Stefania
> Fix For: 3.0.x, 3.11.x
>
>
> Although we fixed this problem for standard queries in CASSANDRA-10625, it 
> still exists for COPY.  In CASSANDRA-10625, we replaced datetimes outside of 
> the supported time range with a simple milliseconds-since-epoch long.  We may 
> not want to use the same solution for COPY, because we wouldn't be able to 
> load the same data back in through COPY.  Having consistency in the format of 
> values and support for loading exported data seems more important for COPY.





[jira] [Commented] (CASSANDRA-12213) dtest failure in write_failures_test.TestWriteFailures.test_paxos_any

2017-02-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876663#comment-15876663
 ] 

Tyler Hobbs commented on CASSANDRA-12213:
-

[~Stefania] what if we simply flushed the schema tables in the order: columns, 
tables, keyspaces?  Based on how we load the schema, it seems like that would 
avoid this particular error.  The only thing I'm not sure of is whether we 
would end up with the correct internal schema state after CL replay is complete.
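The proposed ordering can be sketched as a fixed flush sequence; the table names and the {{flush}} callback are placeholders for discussion, not Cassandra's API:

```java
import java.util.List;
import java.util.function.Consumer;

public class SchemaFlushOrder {
    // Per the proposal above: flush columns before tables, and tables before
    // keyspaces, so that on replay a table entry is never loaded without its
    // column entries already being durable.
    static final List<String> FLUSH_ORDER = List.of("columns", "tables", "keyspaces");

    static void flushAll(Consumer<String> flush) {
        for (String table : FLUSH_ORDER) {
            flush.accept(table);
        }
    }
}
```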

> dtest failure in write_failures_test.TestWriteFailures.test_paxos_any
> -
>
> Key: CASSANDRA-12213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12213
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.11.x
>
> Attachments: jenkins-stef1927-12014-dtest-2_logs.001.tar.gz, 
> node1_debug.log, node1_gc.log, node1.log, node2_debug.log, node2_gc.log, 
> node2.log, node3_debug.log, node3_gc.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/10/testReport/write_failures_test/TestWriteFailures/test_paxos_any
> and:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/10/testReport/write_failures_test/TestWriteFailures/test_mutation_v3/
> Failed on CassCI build cassandra-3.9_dtest #10





[jira] [Created] (CASSANDRA-13185) cqlsh COPY doesn't support dates before 1900 or after 9999

2017-02-03 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-13185:
---

 Summary: cqlsh COPY doesn't support dates before 1900 or after 9999
 Key: CASSANDRA-13185
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13185
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
 Fix For: 3.0.x, 3.x


Although we fixed this problem for standard queries in CASSANDRA-10625, it 
still exists for COPY.  In CASSANDRA-10625, we replaced datetimes outside of 
the supported time range with a simple milliseconds-since-epoch long.  We may 
not want to use the same solution for COPY, because we wouldn't be able to load 
the same data back in through COPY.  Having consistency in the format of values 
and support for loading exported data seems more important for COPY.





[jira] [Updated] (CASSANDRA-13177) sstabledump doesn't handle non-empty partitions with a partition-level deletion correctly

2017-02-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13177:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.11
   3.0.11
   Status: Resolved  (was: Ready to Commit)

Thank you, committed to {{cassandra-3.0}} as 
{{883c9f0f743139d78996f5faf191508a9be338b5}} and merged up to 
{{cassandra-3.11}} and {{trunk}}.

> sstabledump doesn't handle non-empty partitions with a partition-level 
> deletion correctly
> -
>
> Key: CASSANDRA-13177
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13177
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.11, 3.11
>
>
> If a partition has a partition-level deletion, but still contains rows (with 
> timestamps higher than the deletion), sstabledump will only show the deletion 
> and not the rows.
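The visibility rule behind the bug is that a row survives a partition-level deletion only when its timestamp is newer than the deletion's, so a dump has to emit both the deletion and any such rows. A trivial sketch of that rule (names illustrative, not sstabledump's actual code):

```java
public class PartitionDump {
    // A row is shadowed by a partition-level deletion unless its timestamp
    // is strictly newer than the deletion's timestamp.
    static boolean rowVisible(long rowTimestamp, long partitionDeletionTimestamp) {
        return rowTimestamp > partitionDeletionTimestamp;
    }
}
```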





[jira] [Updated] (CASSANDRA-13180) Better handling of missing entries in system_schema.columns during startup

2017-02-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13180:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.11
   3.0.11
   Status: Resolved  (was: Ready to Commit)

Thank you, committed as {{a70b0d4d37851891ec1c8af96063985a5122edda}} to 
{{cassandra-3.0}} and merged up to {{cassandra-3.11}} and {{trunk}}.

> Better handling of missing entries in system_schema.columns during startup
> --
>
> Key: CASSANDRA-13180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13180
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.11, 3.11
>
>
> Like the error in CASSANDRA-12213 and CASSANDRA-12165, it's possible for 
> {{system_schema.keyspaces}} and {{tables}} to contain entries for a table 
> while {{system_schema.columns}} has none.  This produces an error during 
> startup, and there's no way for a user to recover from this without restoring 
> from backups.
> Although this has been seen in the wild on one occasion, the cause is still 
> not entirely known.  (It may be due to a concurrent DROP TABLE and ALTER 
> TABLE where a table property is altered.)  Until we know the root cause, it 
> makes sense to give users a way to recover from that situation.





[jira] [Updated] (CASSANDRA-13180) Better handling of missing entries in system_schema.columns during startup

2017-02-02 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13180:

Status: Patch Available  (was: Open)

Patch and pending test runs:

||branch||testall||dtest||
|[CASSANDRA-13180-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-13180-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-3.0-dtest]|
|[CASSANDRA-13180-3.11|https://github.com/thobbs/cassandra/tree/CASSANDRA-13180-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-3.11-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-3.11-dtest]|
|[CASSANDRA-13180-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-13180-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13180-trunk-dtest]|

There were minor conflicts in the merges to 3.11 and trunk.

> Better handling of missing entries in system_schema.columns during startup
> --
>
> Key: CASSANDRA-13180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13180
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.x, 3.x
>
>
> Like the error in CASSANDRA-12213 and CASSANDRA-12165, it's possible for 
> {{system_schema.keyspaces}} and {{tables}} to contain entries for a table 
> while {{system_schema.columns}} has none.  This produces an error during 
> startup, and there's no way for a user to recover from this without restoring 
> from backups.
> Although this has been seen in the wild on one occasion, the cause is still 
> not entirely known.  (It may be due to a concurrent DROP TABLE and ALTER 
> TABLE where a table property is altered.)  Until we know the root cause, it 
> makes sense to give users a way to recover from that situation.





[jira] [Created] (CASSANDRA-13180) Better handling of missing entries in system_schema.columns during startup

2017-02-02 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-13180:
---

 Summary: Better handling of missing entries in 
system_schema.columns during startup
 Key: CASSANDRA-13180
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13180
 Project: Cassandra
  Issue Type: Improvement
  Components: Distributed Metadata
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 3.0.x, 3.x


Like the error in CASSANDRA-12213 and CASSANDRA-12165, it's possible for 
{{system_schema.keyspaces}} and {{tables}} to contain entries for a table while 
{{system_schema.columns}} has none.  This produces an error during startup, and 
there's no way for a user to recover from this without restoring from backups.

Although this has been seen in the wild on one occasion, the cause is still not 
entirely known.  (It may be due to a concurrent DROP TABLE and ALTER TABLE 
where a table property is altered.)  Until we know the root cause, it makes 
sense to give users a way to recover from that situation.





[jira] [Updated] (CASSANDRA-13177) sstabledump doesn't handle non-empty partitions with a partition-level deletion correctly

2017-02-02 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13177:

Status: Patch Available  (was: Open)

> sstabledump doesn't handle non-empty partitions with a partition-level 
> deletion correctly
> -
>
> Key: CASSANDRA-13177
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13177
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.x, 3.x
>
>
> If a partition has a partition-level deletion, but still contains rows (with 
> timestamps higher than the deletion), sstabledump will only show the deletion 
> and not the rows.





[jira] [Commented] (CASSANDRA-13177) sstabledump doesn't handle non-empty partitions with a partition-level deletion correctly

2017-02-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850446#comment-15850446
 ] 

Tyler Hobbs commented on CASSANDRA-13177:
-

Patch and pending test runs:

||branch||testall||dtest||
|[CASSANDRA-13177-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-13177-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-3.0-dtest]|
|[CASSANDRA-13177-3.11|https://github.com/thobbs/cassandra/tree/CASSANDRA-13177-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-3.11-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-3.11-dtest]|
|[CASSANDRA-13177-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-13177-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-13177-trunk-dtest]|

dtest pull request: https://github.com/riptano/cassandra-dtest/pull/1435

> sstabledump doesn't handle non-empty partitions with a partition-level 
> deletion correctly
> -
>
> Key: CASSANDRA-13177
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13177
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.x, 3.x
>
>
> If a partition has a partition-level deletion, but still contains rows (with 
> timestamps higher than the deletion), sstabledump will only show the deletion 
> and not the rows.





[jira] [Created] (CASSANDRA-13177) sstabledump doesn't handle non-empty partitions with a partition-level deletion correctly

2017-02-02 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-13177:
---

 Summary: sstabledump doesn't handle non-empty partitions with a 
partition-level deletion correctly
 Key: CASSANDRA-13177
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13177
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 3.0.x, 3.x


If a partition has a partition-level deletion, but still contains rows (with 
timestamps higher than the deletion), sstabledump will only show the deletion 
and not the rows.





[jira] [Commented] (CASSANDRA-13143) Cassandra can accept invalid durations

2017-01-26 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840238#comment-15840238
 ] 

Tyler Hobbs commented on CASSANDRA-13143:
-

+1

> Cassandra can accept invalid durations
> --
>
> Key: CASSANDRA-13143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13143
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A duration can be positive or negative. If the duration is positive, the 
> months, days, and nanoseconds must all be greater than or equal to zero. If 
> the duration is negative, they must all be less than or equal to zero.
> Currently, it is possible to send C* a duration which does not respect that 
> rule, and the data will not be rejected.
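The sign rule quoted above can be expressed as a small validity check. This is only a sketch of the rule, not Cassandra's actual validation code:

```java
public class DurationCheck {
    // Valid iff months, days, and nanoseconds are all non-negative
    // (a positive duration) or all non-positive (a negative duration).
    static boolean isValid(int months, int days, long nanoseconds) {
        boolean allNonNegative = months >= 0 && days >= 0 && nanoseconds >= 0;
        boolean allNonPositive = months <= 0 && days <= 0 && nanoseconds <= 0;
        return allNonNegative || allNonPositive;
    }
}
```

A mixed-sign value such as (1 month, -2 days, 3 ns) is exactly the kind of duration the rule rejects.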





[jira] [Updated] (CASSANDRA-13143) Cassandra can accept invalid durations

2017-01-26 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13143:

Status: Ready to Commit  (was: Patch Available)

> Cassandra can accept invalid durations
> --
>
> Key: CASSANDRA-13143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13143
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A duration can be positive or negative. If the duration is positive, the 
> months, days, and nanoseconds must all be greater than or equal to zero. If 
> the duration is negative, they must all be less than or equal to zero.
> Currently, it is possible to send C* a duration which does not respect that 
> rule, and the data will not be rejected.





[jira] [Commented] (CASSANDRA-12850) Add the duration type to the protocol V5

2017-01-26 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840229#comment-15840229
 ] 

Tyler Hobbs commented on CASSANDRA-12850:
-

+1, the new inner class solution is much cleaner.  I just have a couple of 
nitpicks/typo fixes in the native protocol spec that can be fixed when 
committing:
* "The first [vint] represend a number of months, the second's a number of days 
and the last a number of milliseconds."  It would be clearer to say this as 
"The first [vint] represents a number of months, the second [vint] represents a 
number of days, and the last [vint] represents a number of milliseconds".
* "equals to zero" should be "equal to zero" or just "zero".
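For context, signed [vint]s are commonly stored with a zigzag mapping so that small negative magnitudes stay compact on the wire. A sketch of that mapping (an assumption for illustration, not the server's actual encoder):

```python
def zigzag_encode(n: int) -> int:
    # Interleave signed values onto unsigned ones: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
    return (n << 1) ^ (n >> 63)

def zigzag_decode(z: int) -> int:
    # Inverse mapping back to the signed value
    return (z >> 1) ^ -(z & 1)
```

A duration value would then be the concatenation of three such encoded integers.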

> Add the duration type to the protocol V5
> 
>
> Key: CASSANDRA-12850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12850
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> The Duration type needs to be added to the protocol specifications.





[jira] [Updated] (CASSANDRA-12850) Add the duration type to the protocol V5

2017-01-26 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12850:

Status: Ready to Commit  (was: Patch Available)

> Add the duration type to the protocol V5
> 
>
> Key: CASSANDRA-12850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12850
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> The Duration type needs to be added to the protocol specifications.





[jira] [Commented] (CASSANDRA-13125) Duplicate rows after upgrading from 2.1.16 to 3.0.10/3.9

2017-01-20 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832384#comment-15832384
 ] 

Tyler Hobbs commented on CASSANDRA-13125:
-

[~slebresne] the patch and new test look good to me.  For some reason related 
to sigar your new unit test is erroring on Jenkins in the 3.11 version -- maybe 
[~mshuler] knows what's up with that?

> Duplicate rows after upgrading from 2.1.16 to 3.0.10/3.9
> 
>
> Key: CASSANDRA-13125
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13125
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
>Assignee: Yasuharu Goto
>Priority: Critical
> Attachments: diff-a.patch, diff-b.patch
>
>
> I found that rows are split and duplicated after upgrading the cluster 
> from 2.1.x to 3.0.x.
> I found a way to reproduce the problem, as shown below.
> {code}
> $ ccm create test -v 2.1.16 -n 3 -s   
> 
> Current cluster is now: test
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':3}"
> $ ccm node1 cqlsh -e "CREATE TABLE test.test (id text PRIMARY KEY, value1 
> set<text>, value2 set<text>);"
> # Upgrade node1
> $ for i in 1; do ccm node${i} stop; ccm node${i} setdir -v3.0.10; ccm 
> node${i} start;ccm node${i} nodetool upgradesstables; done
> # Insert a row through node1(3.0.10)
> $ ccm node1 cqlsh -e "INSERT INTO test.test (id, value1, value2) values 
> ('aaa', {'aaa', 'bbb'}, {'ccc', 'ddd'});"   
> # Insert a row through node2(2.1.16)
> $ ccm node2 cqlsh -e "INSERT INTO test.test (id, value1, value2) values 
> ('bbb', {'aaa', 'bbb'}, {'ccc', 'ddd'});" 
> # The row inserted from node1 is split
> $ ccm node1 cqlsh -e "SELECT * FROM test.test ;"
>  id  | value1 | value2
> -++
>  aaa |   null |   null
>  aaa | {'aaa', 'bbb'} | {'ccc', 'ddd'}
>  bbb | {'aaa', 'bbb'} | {'ccc', 'ddd'}
> $ for i in 1 2; do ccm node${i} nodetool flush; done
> # Results of sstable2json of node2. The row inserted from node1(3.0.10) is 
> different from the row inserted from node2(2.1.16).
> $ ccm node2 json -k test -c test
> running
> ['/home/zzheng/.ccm/test/node2/data0/test/test-5406ee80dbdb11e6a175f57c4c7c85f3/test-test-ka-1-Data.db']
> -- test-test-ka-1-Data.db -
> [
> {"key": "aaa",
>  "cells": [["","",1484564624769577],
>["value1","value2:!",1484564624769576,"t",1484564624],
>["value1:616161","",1484564624769577],
>["value1:626262","",1484564624769577],
>["value2:636363","",1484564624769577],
>["value2:646464","",1484564624769577]]},
> {"key": "bbb",
>  "cells": [["","",1484564634508029],
>["value1:_","value1:!",1484564634508028,"t",1484564634],
>["value1:616161","",1484564634508029],
>["value1:626262","",1484564634508029],
>["value2:_","value2:!",1484564634508028,"t",1484564634],
>["value2:636363","",1484564634508029],
>["value2:646464","",1484564634508029]]}
> ]
> # Upgrade node2,3
> $ for i in `seq 2 3`; do ccm node${i} stop; ccm node${i} setdir -v3.0.10; ccm 
> node${i} start;ccm node${i} nodetool upgradesstables; done
> # After upgrading node2,3, the row inserted from node1 is also split on node2,3
> $ ccm node2 cqlsh -e "SELECT * FROM test.test ;"  
>   
>  id  | value1 | value2
> -++
>  aaa |   null |   null
>  aaa | {'aaa', 'bbb'} | {'ccc', 'ddd'}
>  bbb | {'aaa', 'bbb'} | {'ccc', 'ddd'}
> (3 rows)
> # Results of sstabledump
> # node1
> [
>   {
> "partition" : {
>   "key" : [ "aaa" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 17,
> "liveness_info" : { "tstamp" : "2017-01-16T11:03:44.769577Z" },
> "cells" : [
>   { "name" : "value1", "deletion_info" : { "marked_deleted" : 
> "2017-01-16T11:03:44.769576Z", "local_delete_time" : "2017-01-16T11:03:44Z" } 
> },
>   { "name" : "value1", "path" : [ "aaa" ], "value" : "" },
>   { "name" : "value1", "path" : [ "bbb" ], "value" : "" },
>   { "name" : "value2", "deletion_info" : { "marked_deleted" : 
> "2017-01-16T11:03:44.769576Z", "local_delete_time" : "2017-01-16T11:03:44Z" } 
> },
>   { "name" : "value2", "path" : [ "ccc" ], "value" : "" },
>   { "name" : "value2", "path" : [ "ddd" ], "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "bbb" ],
>   "position" : 48
> },
> "rows" : [
>   {
> 

[jira] [Updated] (CASSANDRA-12850) Add the duration type to the protocol V5

2017-01-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12850:

Reviewer: Tyler Hobbs

> Add the duration type to the protocol V5
> 
>
> Key: CASSANDRA-12850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12850
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> The Duration type needs to be added to the protocol specifications.





[jira] [Updated] (CASSANDRA-13017) DISTINCT queries on partition keys and static column might not return all the results

2017-01-10 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13017:

Status: Ready to Commit  (was: Patch Available)

> DISTINCT queries on partition keys and static column might not return all the 
> results
> -
>
> Key: CASSANDRA-13017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13017
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x
>
>
> In {{2.1}} and {{2.2}}, a {{DISTINCT}} query on partition keys and static 
> columns might not return all the data if some rows have no data and the 
> static columns also have no values.
> The problem can be reproduced using the Java driver with the following code:
> {code}
> session = cluster.connect();
> session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION 
> = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
> session.execute("USE test");
> session.execute("DROP TABLE IF EXISTS test");
> session.execute("CREATE TABLE test (pk int, c int, v int, s int 
> static, primary key(pk, c))");
> PreparedStatement prepare = session.prepare("INSERT INTO test (pk, c, 
> v, s) VALUES (?, ?, ?, ?)");
> for (int i = 0; i < 10; i++)
> for (int j = 0; j < 1; j++)
> session.execute(prepare.bind(i, j, null, null));
> for (Row row : session.execute(new SimpleStatement("SELECT DISTINCT 
> token(pk), pk, s FROM test").setFetchSize(2)))
> {
> System.out.println(row);
> }
> {code} 
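The repro above pages with a fetch size of 2 over partitions whose rows carry no regular or static values. One simplified model of how paging combined with after-the-fact filtering of "empty" results can silently drop partitions (an illustrative sketch only, not Cassandra's actual paging code):

```python
def paged_distinct(partitions, fetch_size):
    """Page through partitions, filtering out 'empty' ones after each
    page is cut. A short page is then mistaken for the end of the data."""
    results, pos = [], 0
    while pos < len(partitions):
        page = partitions[pos:pos + fetch_size]
        pos += fetch_size
        # Filtering happens after the page boundary was chosen...
        live = [p for p in page if p["has_data"]]
        results.extend(live)
        # ...so a partially filtered page looks like an exhausted result set.
        if len(live) < fetch_size:
            break
    return results
```

With all-empty partitions, this model returns nothing at all, matching the symptom of missing {{DISTINCT}} results.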





[jira] [Commented] (CASSANDRA-13017) DISTINCT queries on partition keys and static column might not return all the results

2017-01-10 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815845#comment-15815845
 ] 

Tyler Hobbs commented on CASSANDRA-13017:
-

+1

> DISTINCT queries on partition keys and static column might not return all the 
> results
> -
>
> Key: CASSANDRA-13017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13017
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x
>
>
> In {{2.1}} and {{2.2}}, a {{DISTINCT}} query on partition keys and static 
> columns might not return all the data if some rows have no data and the 
> static columns also have no values.
> The problem can be reproduced using the Java driver with the following code:
> {code}
> session = cluster.connect();
> session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION 
> = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
> session.execute("USE test");
> session.execute("DROP TABLE IF EXISTS test");
> session.execute("CREATE TABLE test (pk int, c int, v int, s int 
> static, primary key(pk, c))");
> PreparedStatement prepare = session.prepare("INSERT INTO test (pk, c, 
> v, s) VALUES (?, ?, ?, ?)");
> for (int i = 0; i < 10; i++)
> for (int j = 0; j < 1; j++)
> session.execute(prepare.bind(i, j, null, null));
> for (Row row : session.execute(new SimpleStatement("SELECT DISTINCT 
> token(pk), pk, s FROM test").setFetchSize(2)))
> {
> System.out.println(row);
> }
> {code} 





[jira] [Updated] (CASSANDRA-13017) DISTINCT queries on partition keys and static column might not return all the results

2017-01-10 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13017:

Reproduced In: 2.2.8, 2.1.13  (was: 2.1.13, 2.2.8)
 Reviewer: Tyler Hobbs

> DISTINCT queries on partition keys and static column might not return all the 
> results
> -
>
> Key: CASSANDRA-13017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13017
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x
>
>
> In {{2.1}} and {{2.2}}, a {{DISTINCT}} query on partition keys and static 
> columns might not return all the data if some rows have no data and the 
> static columns also have no values.
> The problem can be reproduced using the Java driver with the following code:
> {code}
> session = cluster.connect();
> session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION 
> = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
> session.execute("USE test");
> session.execute("DROP TABLE IF EXISTS test");
> session.execute("CREATE TABLE test (pk int, c int, v int, s int 
> static, primary key(pk, c))");
> PreparedStatement prepare = session.prepare("INSERT INTO test (pk, c, 
> v, s) VALUES (?, ?, ?, ?)");
> for (int i = 0; i < 10; i++)
> for (int j = 0; j < 1; j++)
> session.execute(prepare.bind(i, j, null, null));
> for (Row row : session.execute(new SimpleStatement("SELECT DISTINCT 
> token(pk), pk, s FROM test").setFetchSize(2)))
> {
> System.out.println(row);
> }
> {code} 





[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2017-01-06 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10145:

Fix Version/s: (was: 3.x)
   4.0

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
> Fix For: 4.0
>
> Attachments: 10145-trunk.txt
>
>
> Currently, the keyspace is either embedded in the query string or set through 
> "USE keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and the keyspace 
> independently. For that scenario to work, clients have to create one session 
> per keyspace or resort to some string-replacement hackery.
> It would be nice if the protocol allowed sending the keyspace separately from 
> the query. 





[jira] [Commented] (CASSANDRA-12883) Remove support for non-JavaScript UDFs

2017-01-05 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802437#comment-15802437
 ] 

Tyler Hobbs commented on CASSANDRA-12883:
-

+1

> Remove support for non-JavaScript UDFs
> --
>
> Key: CASSANDRA-12883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12883
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.2.x
>
>
> As recently reported in the user mailing list, JSR-223 languages other than 
> JavaScript no longer work since version 3.0.
> The reason is that the sandbox implemented in CASSANDRA-9402 restricts the 
> use of "evil" packages, classes and functions. Unfortunately, even "non-evil" 
> packages from JSR-223 providers are blocked.
> In order to get a JSR-223 provider working fine, we need to allow JSR-223 
> provider specific packages and also allow specific runtime permissions.
> The fact that "arbitrary" JSR-223 providers no longer working since 3.0 was 
> only reported recently means that this functionality (i.e. non-JavaScript 
> JSR-223 UDFs) is obviously not used.
> Therefore I propose removing support for UDFs that are neither Java nor 
> JavaScript in 4.0. This will also allow scripted UDFs to be specialized on 
> Nashorn and to make more extensive use of its security features, although 
> these are limited. (Clarification: this ticket only covers removing that 
> support.)
> I also want to point out that we never "officially" supported UDFs that are 
> not Java or JavaScript.
> Sample error message:
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1264, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py",
>  line 3650, in result
> raise self._final_exception
> FunctionFailure: Error from server: code=1400 [User Defined Function failure] 
> message="execution of 'e.test123[bigint]' failed: 
> java.security.AccessControlException: access denied: 
> ("java.lang.RuntimePermission" 
> "accessClassInPackage.org.python.jline.console")
> {code}





[jira] [Updated] (CASSANDRA-12794) COPY FROM with NULL='' fails when inserting empty row in primary key

2017-01-05 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12794:

Status: Ready to Commit  (was: Patch Available)

> COPY FROM with NULL='' fails when inserting empty row in primary key 
> -
>
> Key: CASSANDRA-12794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Tested using C* 2.1.15
>Reporter: Sucwinder Bassi
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> Using this table:
> {noformat}
> CREATE TABLE testtab (  a_id text,  b_id text,  c_id text,  d_id text,  
> order_id uuid,  acc_id bigint,  bucket bigint,  r_id text,  ts bigint,  
> PRIMARY KEY ((a_id, b_id, c_id, d_id), order_id));
> {noformat}
> insert one row:
> {noformat}
> INSERT INTO testtab (a_id, b_id , c_id , d_id , order_id, r_id ) VALUES ( '', 
> '', '', 'a1', 645e7d3c-aef7-4e3c-b834-24b792cf2e55, 'r1');
> {noformat}
> Use COPY to dump the row to temp.csv:
> {noformat}
> copy testtab TO 'temp.csv';
> {noformat}
> Which creates this file:
> {noformat}
> $ cat temp.csv 
> ,,,a1,645e7d3c-aef7-4e3c-b834-24b792cf2e55,,,r1,
> {noformat}
> Truncate the testtab table and then use copy from with NULL='' to insert the 
> row:
> {noformat}
> cqlsh:sbkeyspace> COPY testtab FROM 'temp.csv' with NULL='';
> Using 1 child processes
> Starting copy of sbkeyspace.testtab with columns ['a_id', 'b_id', 'c_id', 
> 'd_id', 'order_id', 'acc_id', 'bucket', 'r_id', 'ts'].
> Failed to import 1 rows: ParseError - Cannot insert null value for primary 
> key column 'a_id'. If you want to insert empty strings, consider using the 
> WITH NULL= option for COPY.,  given up without retries
> Failed to process 1 rows; failed rows written to import_sbkeyspace_testtab.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   3 rows/s
> 1 rows imported from 1 files in 0.398 seconds (0 skipped).
> {noformat}
> It shows 1 row imported, but the table is empty:
> {noformat}
> select * from testtab ;
>  a_id | b_id | c_id | d_id | order_id | acc_id | bucket | r_id | ts
> --+--+--+--+--+++--+
> (0 rows)
> {noformat}
> The same error is returned even without WITH NULL=''. Is it actually 
> possible for COPY FROM to insert empty strings into primary key columns? The 
> INSERT command shown above inserts empty strings for the primary key without 
> any problems.
> Is this related to https://issues.apache.org/jira/browse/CASSANDRA-7792?





[jira] [Commented] (CASSANDRA-12794) COPY FROM with NULL='' fails when inserting empty row in primary key

2017-01-05 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801956#comment-15801956
 ] 

Tyler Hobbs commented on CASSANDRA-12794:
-

I think this is a good solution.  +1 on committing, the patch and tests look 
good.

> COPY FROM with NULL='' fails when inserting empty row in primary key 
> -
>
> Key: CASSANDRA-12794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Tested using C* 2.1.15
>Reporter: Sucwinder Bassi
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> Using this table:
> {noformat}
> CREATE TABLE testtab (  a_id text,  b_id text,  c_id text,  d_id text,  
> order_id uuid,  acc_id bigint,  bucket bigint,  r_id text,  ts bigint,  
> PRIMARY KEY ((a_id, b_id, c_id, d_id), order_id));
> {noformat}
> insert one row:
> {noformat}
> INSERT INTO testtab (a_id, b_id , c_id , d_id , order_id, r_id ) VALUES ( '', 
> '', '', 'a1', 645e7d3c-aef7-4e3c-b834-24b792cf2e55, 'r1');
> {noformat}
> Use COPY to dump the row to temp.csv:
> {noformat}
> copy testtab TO 'temp.csv';
> {noformat}
> Which creates this file:
> {noformat}
> $ cat temp.csv 
> ,,,a1,645e7d3c-aef7-4e3c-b834-24b792cf2e55,,,r1,
> {noformat}
> Truncate the testtab table and then use copy from with NULL='' to insert the 
> row:
> {noformat}
> cqlsh:sbkeyspace> COPY testtab FROM 'temp.csv' with NULL='';
> Using 1 child processes
> Starting copy of sbkeyspace.testtab with columns ['a_id', 'b_id', 'c_id', 
> 'd_id', 'order_id', 'acc_id', 'bucket', 'r_id', 'ts'].
> Failed to import 1 rows: ParseError - Cannot insert null value for primary 
> key column 'a_id'. If you want to insert empty strings, consider using the 
> WITH NULL= option for COPY.,  given up without retries
> Failed to process 1 rows; failed rows written to import_sbkeyspace_testtab.err
> Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   3 rows/s
> 1 rows imported from 1 files in 0.398 seconds (0 skipped).
> {noformat}
> It shows 1 row imported, but the table is empty:
> {noformat}
> select * from testtab ;
>  a_id | b_id | c_id | d_id | order_id | acc_id | bucket | r_id | ts
> --+--+--+--+--+++--+
> (0 rows)
> {noformat}
> The same error is returned even without WITH NULL=''. Is it actually 
> possible for COPY FROM to insert empty strings into primary key columns? The 
> INSERT command shown above inserts empty strings for the primary key without 
> any problems.
> Is this related to https://issues.apache.org/jira/browse/CASSANDRA-7792?





[jira] [Updated] (CASSANDRA-12794) COPY FROM with NULL='' fails when inserting empty row in primary key

2017-01-05 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12794:

Description: 
Using this table:

{noformat}
CREATE TABLE testtab (  a_id text,  b_id text,  c_id text,  d_id text,  
order_id uuid,  acc_id bigint,  bucket bigint,  r_id text,  ts bigint,  PRIMARY 
KEY ((a_id, b_id, c_id, d_id), order_id));
{noformat}

insert one row:

{noformat}
INSERT INTO testtab (a_id, b_id , c_id , d_id , order_id, r_id ) VALUES ( '', 
'', '', 'a1', 645e7d3c-aef7-4e3c-b834-24b792cf2e55, 'r1');
{noformat}

Use COPY to dump the row to temp.csv:

{noformat}
copy testtab TO 'temp.csv';
{noformat}

Which creates this file:

{noformat}
$ cat temp.csv 
,,,a1,645e7d3c-aef7-4e3c-b834-24b792cf2e55,,,r1,
{noformat}

Truncate the testtab table and then use copy from with NULL='' to insert the 
row:

{noformat}
cqlsh:sbkeyspace> COPY testtab FROM 'temp.csv' with NULL='';
Using 1 child processes

Starting copy of sbkeyspace.testtab with columns ['a_id', 'b_id', 'c_id', 
'd_id', 'order_id', 'acc_id', 'bucket', 'r_id', 'ts'].
Failed to import 1 rows: ParseError - Cannot insert null value for primary key 
column 'a_id'. If you want to insert empty strings, consider using the WITH 
NULL= option for COPY.,  given up without retries
Failed to process 1 rows; failed rows written to import_sbkeyspace_testtab.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   3 rows/s
1 rows imported from 1 files in 0.398 seconds (0 skipped).
{noformat}

It shows 1 row imported, but the table is empty:

{noformat}
select * from testtab ;

 a_id | b_id | c_id | d_id | order_id | acc_id | bucket | r_id | ts
--+--+--+--+--+++--+

(0 rows)
{noformat}

The same error is returned even without WITH NULL=''. Is it actually possible 
for COPY FROM to insert empty strings into primary key columns? The INSERT 
command shown above inserts empty strings for the primary key without any 
problems.

Is this related to https://issues.apache.org/jira/browse/CASSANDRA-7792?

  was:
Using this table:

CREATE TABLE testtab (  a_id text,  b_id text,  c_id text,  d_id text,  
order_id uuid,  acc_id bigint,  bucket bigint,  r_id text,  ts bigint,  PRIMARY 
KEY ((a_id, b_id, c_id, d_id), order_id));

insert one row:

INSERT INTO testtab (a_id, b_id , c_id , d_id , order_id, r_id ) VALUES ( '', 
'', '', 'a1', 645e7d3c-aef7-4e3c-b834-24b792cf2e55, 'r1');

Use COPY to dump the row to temp.csv:

copy testtab TO 'temp.csv';

Which creates this file:

$ cat temp.csv 
,,,a1,645e7d3c-aef7-4e3c-b834-24b792cf2e55,,,r1,

Truncate the testtab table and then use copy from with NULL='' to insert the 
row:

cqlsh:sbkeyspace> COPY testtab FROM 'temp.csv' with NULL='';
Using 1 child processes

Starting copy of sbkeyspace.testtab with columns ['a_id', 'b_id', 'c_id', 
'd_id', 'order_id', 'acc_id', 'bucket', 'r_id', 'ts'].
Failed to import 1 rows: ParseError - Cannot insert null value for primary key 
column 'a_id'. If you want to insert empty strings, consider using the WITH 
NULL= option for COPY.,  given up without retries
Failed to process 1 rows; failed rows written to import_sbkeyspace_testtab.err
Processed: 1 rows; Rate:   2 rows/s; Avg. rate:   3 rows/s
1 rows imported from 1 files in 0.398 seconds (0 skipped).

It shows 1 rows inserted, but the table is empty:

select * from testtab ;

 a_id | b_id | c_id | d_id | order_id | acc_id | bucket | r_id | ts
--+--+--+--+--+++--+

(0 rows)

The same error is returned even without the with NULL=''. Is it actually 
possible for copy from to insert an empty row into the primary key? The insert 
command shown above inserts the empty row for the primary key without any 
problems.

Is this related to https://issues.apache.org/jira/browse/CASSANDRA-7792?


> COPY FROM with NULL='' fails when inserting empty row in primary key 
> -
>
> Key: CASSANDRA-12794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Tested using C* 2.1.15
>Reporter: Sucwinder Bassi
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> Using this table:
> {noformat}
> CREATE TABLE testtab (  a_id text,  b_id text,  c_id text,  d_id text,  
> order_id uuid,  acc_id bigint,  bucket bigint,  r_id text,  ts bigint,  
> PRIMARY KEY ((a_id, b_id, c_id, d_id), order_id));
> {noformat}
> insert one row:
> {noformat}
> INSERT INTO testtab (a_id, b_id , c_id , d_id , order_id, r_id ) VALUES ( '', 
> '', '', 'a1', 645e7d3c-aef7-4e3c-b834-24b792cf2e55, 'r1');
> {noformat}
> Use COPY to dump the row to temp.csv:
> {noformat}
> copy testtab TO 'temp.csv';
> {noformat}
> Which creates this file:
> 

[jira] [Comment Edited] (CASSANDRA-13076) unexpected leap year differences for years between 0 and 1583

2016-12-30 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788138#comment-15788138
 ] 

Tyler Hobbs edited comment on CASSANDRA-13076 at 12/30/16 6:43 PM:
---

Can you provide the script that you're using to insert and fetch the data?


was (Author: thobbs):
Can you provide the script that you're using to insert the data?

> unexpected leap year differences for years between 0 and 1583
> -
>
> Key: CASSANDRA-13076
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13076
> Project: Cassandra
>  Issue Type: Bug
> Environment: cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native 
> protocol v4
>Reporter: Jens Geyer
>
> When inserting timestamps with years between 0 and 1583 into a timestamp 
> column, there are unexpected differences between the value in the CQL 
> statement and the actual data written to the field. 
> Testcase: Insert the 1st of February for each year from 0 up to 3000. We see 
> the difference change at each leap year that is a multiple of 100, and 
> finally after the calendar reform of 1582. 
> {code}
> read 30.01.0001 00:00:00 +00:00, difference -2 days
> read 31.01.0101 00:00:00 +00:00, difference -1 days
> read 01.02.0201 00:00:00 +00:00, difference 0 days
> read 02.02.0301 00:00:00 +00:00, difference 1 days
> read 03.02.0501 00:00:00 +00:00, difference 2 days
> read 04.02.0601 00:00:00 +00:00, difference 3 days
> read 05.02.0701 00:00:00 +00:00, difference 4 days
> read 06.02.0901 00:00:00 +00:00, difference 5 days
> read 07.02.1001 00:00:00 +00:00, difference 6 days
> read 08.02.1101 00:00:00 +00:00, difference 7 days
> read 09.02.1301 00:00:00 +00:00, difference 8 days
> read 10.02.1401 00:00:00 +00:00, difference 9 days
> read 11.02.1501 00:00:00 +00:00, difference 10 days
> read 01.02.1583 00:00:00 +00:00, difference 0 days
> {code}
> It looks like there is an inconsistency between calendar systems. 
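The 10-day jump at 1582 and the one-day drift at most century years match the difference between Julian and (proleptic) Gregorian leap-year rules. A minimal sketch of the two rules (illustrative, not the driver's or server's date code):

```python
def is_leap_julian(year: int) -> bool:
    # Julian rule: every 4th year is a leap year.
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    # Gregorian rule: century years are leap years only when divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Years such as 300, 500, or 1500 are leap years only under the Julian rule, which is why the observed difference grows by one day at those centuries.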





[jira] [Commented] (CASSANDRA-13076) unexpected leap year differences for years between 0 and 1583

2016-12-30 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788138#comment-15788138
 ] 

Tyler Hobbs commented on CASSANDRA-13076:
-

Can you provide the script that you're using to insert the data?

> unexpected leap year differences for years between 0 and 1583
> -
>
> Key: CASSANDRA-13076
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13076
> Project: Cassandra
>  Issue Type: Bug
> Environment: cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native 
> protocol v4
>Reporter: Jens Geyer
>
> When inserting timestamps with years between 0 and 1583 into a timestamp 
> column, there are unexpected differences between the value in the CQL 
> statement and the actual data written to the field. 
> Testcase: Insert the 1st of February for each year from 0 up to 3000. We see 
> the difference change at each leap year that is a multiple of 100, and 
> finally after the calendar reform of 1582. 
> {code}
> read 30.01.0001 00:00:00 +00:00, difference -2 days
> read 31.01.0101 00:00:00 +00:00, difference -1 days
> read 01.02.0201 00:00:00 +00:00, difference 0 days
> read 02.02.0301 00:00:00 +00:00, difference 1 days
> read 03.02.0501 00:00:00 +00:00, difference 2 days
> read 04.02.0601 00:00:00 +00:00, difference 3 days
> read 05.02.0701 00:00:00 +00:00, difference 4 days
> read 06.02.0901 00:00:00 +00:00, difference 5 days
> read 07.02.1001 00:00:00 +00:00, difference 6 days
> read 08.02.1101 00:00:00 +00:00, difference 7 days
> read 09.02.1301 00:00:00 +00:00, difference 8 days
> read 10.02.1401 00:00:00 +00:00, difference 9 days
> read 11.02.1501 00:00:00 +00:00, difference 10 days
> read 01.02.1583 00:00:00 +00:00, difference 0 days
> {code}
> It looks like there is an inconsistency between calendar systems. 





[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-12-13 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15745642#comment-15745642
 ] 

Tyler Hobbs commented on CASSANDRA-8616:


+1

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.





[jira] [Commented] (CASSANDRA-12354) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_2_2_x.bug_5732_test

2016-12-09 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736707#comment-15736707
 ] 

Tyler Hobbs commented on CASSANDRA-12354:
-

So far no luck reproducing this with 30 local runs.  I'm trying out a 
multiplexer run to see if it reproduces on Jenkins.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12354
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Tyler Hobbs
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/7/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_2_2_x/bug_5732_test





[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733715#comment-15733715
 ] 

Tyler Hobbs commented on CASSANDRA-8616:


I've restarted the 3.0 tests, because they didn't run for some reason.  It 
looks like the only potential problems are in the trunk dtests, where 
{{offline_tools_test.TestOfflineTools.sstableupgrade_test}} and 
{{upgrade_internal_auth_test.TestAuthUpgrade.test_upgrade_legacy_table}} are 
failing.

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.





[jira] [Comment Edited] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733064#comment-15733064
 ] 

Tyler Hobbs edited comment on CASSANDRA-13004 at 12/8/16 8:07 PM:
--

I haven't had any luck reproducing this locally yet.  I looked into the 
response that the driver had trouble decoding, and this is the breakdown:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
  \x00\x04nick\x00\r
\x00\x05topic\x00\r# col 12
\x00\x04type\x00\x14   # col 13
\x00\nuser_limit\x00\t # col 14

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(255044430745174017)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (64000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(274877923215301)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(255509689347997698)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(5171820438665606184)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

\xff\xff\xff\xff   # value 8, owner_id (null)

\x00\x00\x01<  # value 9, (length 316 user 
type)

\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
 \x08\x01

\x00\x00\x00\x04 \xc4\xb4(\x00 # value 10, position 
(3300141056)
\xff\xff\xff\xff   # value 11, recipients (null)
\x00\x00\x00O  # value 12, topic (79 char 
string)
[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03
\x00\x00\x00\x01 \x00  # value 13, type (0)
\x00\x00\x00\x04 \x00\x00\x00\x00  # value 14, user_limit (0)
{noformat}

Do any of the other column values look incorrect to you?
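The per-value layout being decoded by hand above (each value is a signed 32-bit big-endian length followed by that many payload bytes, with length -1, i.e. {{\xff\xff\xff\xff}}, meaning null) can be sketched in a few lines of Python. This is an illustrative parser only, not code from the driver; the sample bytes are lifted from the breakdown above (bitrate, application_id, icon_hash):

```python
import struct

def read_value(buf, offset):
    """Read one [bytes] value from a CQL native-protocol ROWS body:
    a signed 32-bit big-endian length, then that many payload bytes.
    A negative length (-1 on the wire) encodes a null."""
    (length,) = struct.unpack_from(">i", buf, offset)
    offset += 4
    if length < 0:
        return None, offset
    value = buf[offset:offset + length]
    return value, offset + length

# sample mirroring three of the values in the breakdown above:
body = (b"\x00\x00\x00\x04\x00\x00\xfa\x00"   # 4-byte int 64000 (bitrate)
        b"\xff\xff\xff\xff"                   # null (application_id)
        b"\x00\x00\x00\x00")                  # empty string (icon_hash)

vals = []
off = 0
while off < len(body):
    v, off = read_value(body, off)
    vals.append(v)

assert struct.unpack(">i", vals[0])[0] == 64000  # bitrate decodes to 64000
assert vals[1] is None                           # null marker
assert vals[2] == b""                            # zero-length value
```

This is also why a single wrong length prefix makes every value after it unreadable: the parser has no framing other than the lengths themselves.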


was (Author: thobbs):
I haven't had any luck reproducing this locally yet.  I looked into the 
response that the driver had trouble decoding, and this is the breakdown:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0

[jira] [Comment Edited] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733064#comment-15733064
 ] 

Tyler Hobbs edited comment on CASSANDRA-13004 at 12/8/16 8:00 PM:
--

I haven't had any luck reproducing this locally yet.  I looked into the 
response that the driver had trouble decoding, and this is the breakdown:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
  \x00\x04nick\x00\r
\x00\x05topic\x00\r# col 12
\x00\x04type\x00\x14   # col 13
\x00\nuser_limit\x00\t # col 14

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(255044430745174017)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (16384000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(274877923215301)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(255509689347997698)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(5171820438665606184)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

\xff\xff\xff\xff   # value 8, owner_id (null)

\x00\x00\x01<  # value 9, (length 316 user 
type)

\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
 \x08\x01

\x00\x00\x00\x04 \xc4\xb4(\x00 # value 10, position 
(3300141056)
\xff\xff\xff\xff   # value 11, recipients (null)
\x00\x00\x00O  # value 12, topic (79 char 
string)
[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03
\x00\x00\x00\x01 \x00  # value 13, type (0)
\x00\x00\x00\x04 \x00\x00\x00\x00  # value 14, user_limit (0)
{noformat}

Do any of the other column values look incorrect to you?


was (Author: thobbs):
I haven't had any luck reproducing this locally yet.  I looked into the 
response that the driver had trouble decoding, and this is the breakdown:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733234#comment-15733234
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

This could also be related to the problems described in CASSANDRA-9425.  Seems 
like the changes in CASSANDRA-8099 may be exposing those issues.

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map<bigint, frozen<channel_recipient>>,
> permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x
04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this. 
> {code:none}
> /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than 
> Python datetime can represent. Timestamps are displayed in milliseconds from 
> epoch.
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py",
>  line 3650, in result
> raise 

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733208#comment-15733208
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

I chatted with [~stanislav] on IRC.  Here are some details:
* This happened on two different clusters with different schemas.  In the 
cluster with only 70 writes/sec at the time, it appears that 7 rows were 
affected.  In the cluster with 7000 writes/sec at the time, it appears that 
hundreds of rows were affected.  So, it seems like there's a window of time 
during which the alter causes problems, and the higher the throughput, the more 
rows are affected.
* Inserts are done with prepared {{INSERT}} statements across many different 
clients.  Some {{UPDATES}} and {{DELETES}} also occur.
* The cluster is three nodes, RF=3
* No other alters have been run on the tables
* In the other schema, two specific columns seemed to be affected (sometimes 
one, sometimes the other).  In the original schema posted here, it seems to 
affect a specific set of columns as well.

My guess is that we're not updating our internal schema metadata 
atomically/in isolation somewhere, and we're writing the data with a mixture 
of old and new schema info.

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map<bigint, frozen<channel_recipient>>,
> permission_overwrites map<bigint, frozen<channel_permission_overwrite>>,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x
04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> 

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733064#comment-15733064
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

I haven't had any luck reproducing this locally yet.  I looked into the 
response that the driver had trouble decoding, and this is the breakdown:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
  \x00\x04nick\x00\r
\x00\x05topic\x00\r# col 12
\x00\x04type\x00\x14   # col 13
\x00\nuser_limit\x00\t # col 14

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(72201598085466627)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (16384000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(14215603427718332416)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(144188231338986243)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(2933185415680607559)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

\xff\xff\xff\xff   # value 8, owner_id (null)

\x00\x00\x01<  # value 9, (length 316 user 
type)

\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
 \x08\x01

\x00\x00\x00\x04 \xc4\xb4(\x00 # value 10, position 
(3300141056)
\xff\xff\xff\xff   # value 11, recipients (null)
\x00\x00\x00O  # value 12, nick (79 char 
string)
[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03
\x00\x00\x00\x01 \x00  # value 13, topic (0)
\x00\x00\x00\x04 \x00\x00\x00\x00  # value 14, type (0)
{noformat}

Do any of the other column values look incorrect to you?
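The column metadata at the top of the frame follows the same hand-decoding: each column spec is a [string] name (an unsigned 16-bit big-endian length plus that many UTF-8 bytes) followed by a 16-bit type option id (0x0002 = bigint, 0x0009 = int, 0x000D = varchar, 0x0021 = map, 0x0030 = UDT). A hypothetical sketch of that parse, not driver code, using the first two specs from the dump:

```python
import struct

def read_col_spec(buf, offset):
    """Parse one column spec: a [string] column name (unsigned 16-bit
    big-endian length + UTF-8 bytes) followed by a 16-bit type option id
    (0x0002 = bigint, 0x0009 = int, 0x000D = varchar, ...).  Collection
    and UDT options carry extra trailing data not handled here."""
    (nlen,) = struct.unpack_from(">H", buf, offset)
    offset += 2
    name = buf[offset:offset + nlen].decode("utf-8")
    offset += nlen
    (type_id,) = struct.unpack_from(">H", buf, offset)
    return name, type_id, offset + 2

# the first two column specs from the dump above:
meta = b"\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02"
name0, t0, off = read_col_spec(meta, 0)
name1, t1, off = read_col_spec(meta, off)

assert (name0, t0) == ("id", 0x0002)              # id: bigint
assert (name1, t1) == ("application_id", 0x0002)  # application_id: bigint
```

Matching this metadata against the values section is how the per-column labels in the breakdown were assigned in the first place, so a mislabeled column shifts every annotation after it.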

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick 

[jira] [Issue Comment Deleted] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-13004:

Comment: was deleted

(was: So far I've had no luck reproducing this locally.  I dug into the 
response that the driver got, and this is the breakdown of the message:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
  \x00\x04nick\x00\r
\x00\x05topic\x00\r# col 12
\x00\x04type\x00\x14   # col 13
\x00\nuser_limit\x00\t # col 14

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(72201598085466627)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (16384000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(14215603427718332416)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(144188231338986243)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(2933185415680607559)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

\xff\xff\xff\xff   # value 8, owner_id (null)

\x00\x00\x01<  # absurdly large value size 
(1006698496), rest of message unreliable

\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
 
\x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00
{noformat}

Do any of those other column values look incorrect to you?)

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type 

[jira] [Comment Edited] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733022#comment-15733022
 ] 

Tyler Hobbs edited comment on CASSANDRA-13004 at 12/8/16 6:42 PM:
--

So far I've had no luck reproducing this locally.  I dug into the response that 
the driver got, and this is the breakdown of the message:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
  \x00\x04nick\x00\r
\x00\x05topic\x00\r# col 12
\x00\x04type\x00\x14   # col 13
\x00\nuser_limit\x00\t # col 14

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(72201598085466627)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (16384000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(14215603427718332416)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(144188231338986243)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(2933185415680607559)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733022#comment-15733022
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

So far I've had no luck reproducing this locally.  I dug into the response that 
the driver got, and this is the breakdown of the message:

{noformat}
\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels
  # flags and metadata

\x00\x02id\x00\x02 # col 0
\x00\x0eapplication_id\x00\x02 # col 1
\x00\x07bitrate\x00\t  # col 2
\x00\x08guild_id\x00\x02   # col 3
\x00\ticon_hash\x00\r  # col 4
\x00\x0flast_message_id\x00\x02# col 5
\x00\x12last_pin_timestamp\x00\x0b # col 6
\x00\x04name\x00\r # col 7
\x00\x08owner_id\x00\x02   # col 8
\x00\x15permission_overwrites\x00! # col 9
  \x00\x02\x000\x00\x10discord_channels\x00\x1c
  channel_permission_overwrite\x00\x04
\x00\x02id\x00\x02
\x00\x04type\x00\t
\x00\x06allow_\x00\t
\x00\x04deny\x00\t
\x00\x08position\x00\t # col 10
\x00\nrecipients\x00!  # col 11
  \x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01
\x00\x04nick\x00\r # col 12
\x00\x05topic\x00\r# col 13
\x00\x04type\x00\x14   # col 14
\x00\nuser_limit\x00\t # end type codes

\x00\x00\x00\x01   # row count

\x00\x00\x00\x08 \x03\x8a\x19\x8e\xf8\x82\x00\x01  # value 0, id 
(72201598085466627)

\xff\xff\xff\xff   # value 1, application_id 
(null)

\x00\x00\x00\x04 \x00\x00\xfa\x00  # value 2, bitrate (16384000)

\x00\x00\x00\x08 \x00\x00\xfa\x00\x00\xf8G\xc5 # value 3, guild_id 
(14215603427718332416)

\x00\x00\x00\x00   # value 4 icon_hash (empty 
string)

\x00\x00\x00\x08 \x03\x8b\xc0\xb5nB\x00\x02# value 5 last_message_id 
(144188231338986243)

\x00\x00\x00\x08 G\xc5\xffI\x98\xc4\xb4(   # value 6 last_pin_timestamp 
(2933185415680607559)

\x00\x00\x00\x03 \x8b\xc0\xa8  # value 7 (non-UTF8 
compliant string)

\xff\xff\xff\xff   # value 8, owner_id (null)

\x00\x00\x01<  # absurdly large value size 
(1006698496), rest of message unreliable

\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
 
\x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00
{noformat}

Do any of those other column values look incorrect to you?
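For anyone following the breakdown above: each cell in a native-protocol ROWS body is a [bytes] value, i.e. a 4-byte big-endian signed length followed by that many payload bytes, with a negative length encoding null (which is why the null cells appear as {{\xff\xff\xff\xff}}). A minimal decoding sketch (helper name is illustrative, not actual driver code):

```python
import struct

def read_bytes_value(buf, offset=0):
    """Decode one [bytes] value from a native-protocol ROWS body.

    Layout: 4-byte big-endian signed length, then `length` payload bytes.
    A negative length (seen on the wire as \\xff\\xff\\xff\\xff, i.e. -1)
    means null.
    """
    (length,) = struct.unpack_from(">i", buf, offset)
    offset += 4
    if length < 0:
        return None, offset                       # null cell, no payload
    return buf[offset:offset + length], offset + length

# The null owner_id cell from the dump above:
value, _ = read_bytes_value(b"\xff\xff\xff\xff")
assert value is None
```

A single bogus length prefix, like the oversized one above, desynchronizes every subsequent read, which is why the rest of the message is unreliable once one cell is corrupt.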

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732790#comment-15732790
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

It's not too surprising to see that right around the time of an alter.  That 
basically just means that the node hadn't received the schema update yet when 
the mutation came in.

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map,
> permission_overwrites map,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x
04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this. 
> {code:none}
> /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than 
> Python datetime can represent. Timestamps are displayed in milliseconds from 
> epoch.
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py",
>  

[jira] [Commented] (CASSANDRA-13004) Corruption while adding a column to a table

2016-12-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15732587#comment-15732587
 ] 

Tyler Hobbs commented on CASSANDRA-13004:
-

I'm going to try to reproduce this locally.

> Corruption while adding a column to a table
> ---
>
> Key: CASSANDRA-13004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>
> We had the following schema in production. 
> {code:none}
> CREATE TYPE IF NOT EXISTS discord_channels.channel_recipient (
> nick text
> );
> CREATE TYPE IF NOT EXISTS discord_channels.channel_permission_overwrite (
> id bigint,
> type int,
> allow_ int,
> deny int
> );
> CREATE TABLE IF NOT EXISTS discord_channels.channels (
> id bigint,
> guild_id bigint,
> type tinyint,
> name text,
> topic text,
> position int,
> owner_id bigint,
> icon_hash text,
> recipients map,
> permission_overwrites map,
> bitrate int,
> user_limit int,
> last_pin_timestamp timestamp,
> last_message_id bigint,
> PRIMARY KEY (id)
> );
> {code}
> And then we executed the following alter.
> {code:none}
> ALTER TABLE discord_channels.channels ADD application_id bigint;
> {code}
> And one row (as far as we can tell) got corrupted at the same time and could 
> no longer be read from the Python driver. 
> {code:none}
> [E 161206 01:56:58 geventreactor:141] Error decoding response from Cassandra. 
> ver(4); flags(); stream(27); op(8); offset(9); len(887); buffer: 
> '\x84\x00\x00\x1b\x08\x00\x00\x03w\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x10discord_channels\x00\x08channels\x00\x02id\x00\x02\x00\x0eapplication_id\x00\x02\x00\x07bitrate\x00\t\x00\x08guild_id\x00\x02\x00\ticon_hash\x00\r\x00\x0flast_message_id\x00\x02\x00\x12last_pin_timestamp\x00\x0b\x00\x04name\x00\r\x00\x08owner_id\x00\x02\x00\x15permission_overwrites\x00!\x00\x02\x000\x00\x10discord_channels\x00\x1cchannel_permission_overwrite\x00\x04\x00\x02id\x00\x02\x00\x04type\x00\t\x00\x06allow_\x00\t\x00\x04deny\x00\t\x00\x08position\x00\t\x00\nrecipients\x00!\x00\x02\x000\x00\x10discord_channels\x00\x11channel_recipient\x00\x01\x00\x04nick\x00\r\x00\x05topic\x00\r\x00\x04type\x00\x14\x00\nuser_limit\x00\t\x00\x00\x00\x01\x00\x00\x00\x08\x03\x8a\x19\x8e\xf8\x82\x00\x01\xff\xff\xff\xff\x00\x00\x00\x04\x00\x00\xfa\x00\x00\x00\x00\x08\x00\x00\xfa\x00\x00\xf8G\xc5\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8b\xc0\xb5nB\x00\x02\x00\x00\x00\x08G\xc5\xffI\x98\xc4\xb4(\x00\x00\x00\x03\x8b\xc0\xa8\xff\xff\xff\xff\x00\x00\x01<\x00\x00\x00\x06\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x81L\xea\xfc\x82\x00\n\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1e\xe6\x8b\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x040\x07\xf8Q\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1f\x1b{\x82\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x07\xf8Q\x00\x00\x00\x04\x10\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x1fH6\x82\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x05\xe8A\x00\x00\x00\x04\x10\x02\x00\x00\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a+=\xca\xc0\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x08\x00\x00\x00\x00\x
04\x00\x00\x00\x00\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00$\x00\x00\x00\x08\x03\x8a\x8f\x979\x80\x00\n\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00
>  
> \x08\x01\x00\x00\x00\x04\xc4\xb4(\x00\xff\xff\xff\xff\x00\x00\x00O[f\x80Q\x07general\x05\xf8G\xc5\xffI\x98\xc4\xb4(\x00\xf8O[f\x80Q\x00\x00\x00\x02\x04\xf8O[f\x80Q\x00\xf8G\xc5\xffI\x98\x01\x00\x00\xf8O[f\x80Q\x00\x00\x00\x00\xf8G\xc5\xffI\x97\xc4\xb4(\x06\x00\xf8O\x7fe\x1fm\x08\x03\x00\x00\x00\x01\x00\x00\x00\x00\x04\x00\x00\x00\x00'
> {code}
> And then in cqlsh when trying to read the row we got this. 
> {code:none}
> /usr/bin/cqlsh.py:632: DateOverFlowWarning: Some timestamps are larger than 
> Python datetime can represent. Timestamps are displayed in milliseconds from 
> epoch.
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1301, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py",
>  line 3650, in result
> raise self._final_exception
> UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 2: 
> 

[jira] [Commented] (CASSANDRA-12768) CQL often queries static columns unnecessarily

2016-12-06 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727047#comment-15727047
 ] 

Tyler Hobbs commented on CASSANDRA-12768:
-

The latest commits and test runs look good to me, so +1 on committing those.

However, I do still want to figure out what's up with the behavior of 
{{returnStaticContentOnPartitionWithNoRows()}} in 3.x vs 3.0.  [~blerer], can 
you open a new ticket for that if needed? If it's not needed, could you 
explain why here?

> CQL often queries static columns unnecessarily
> --
>
> Key: CASSANDRA-12768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12768
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
>
> While looking at CASSANDRA-12694 (which isn't directly related, but some of 
> the results in this ticket are explained by this), I realized that CQL was 
> always querying static columns even in cases where this is unnecessary.
> More precisely, for reasons long described elsewhere, we have to query all 
> the columns for a row (we have optimizations, see CASSANDRA-10657, but they 
> don't change that general fact) to be able to distinguish the case where a 
> row doesn't exist from the one where it exists but has no values for the 
> columns selected by the query. *However*, this really only extends to 
> "regular" columns (static columns play no role in deciding whether a 
> particular row exists or not), but the implementation in 3.x, which is in 
> {{ColumnFilter}}, still always queries all static columns.
> We shouldn't do that, and it's arguably a performance regression from 2.x, 
> which is why I'm tentatively marking this as a bug for the 3.0 line. It's a 
> tiny bit scary for 3.0, though, so I'm really asking for other opinions and 
> would be happy to stick to 3.x.
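To make the static-vs-regular distinction above concrete, here is a small illustrative example (keyspace, table, and column names are made up):

{code:none}
CREATE TABLE ks.t (pk int, ck int, s int STATIC, v int, PRIMARY KEY (pk, ck));

-- A static-only write creates partition-level data but no row:
INSERT INTO ks.t (pk, s) VALUES (0, 1);

-- Whether any row exists in partition 0 is decided entirely by the
-- clustering/regular data, so fetching s is unnecessary for that check:
SELECT ck, v FROM ks.t WHERE pk = 0;
{code}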



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12914) cqlsh describe fails with "list[i] not a string for i in..."

2016-11-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12914:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.10
   3.0.11
   2.2.9
Reproduced In: 3.8, 3.0.9, 2.2.7  (was: 2.2.7, 3.0.9, 3.8)
   Status: Resolved  (was: Patch Available)

The tests all ran perfectly, so +1 on the patch.

Committed to 2.2 as {{d38bf9faa47ebd4ea4edc9c6afa17abe48dbdc9e}} and merged up 
to 3.0, 3.11 (which will actually be the 3.10 release), 3.X, and trunk.

> cqlsh describe fails with "list[i] not a string for i in..."
> 
>
> Key: CASSANDRA-12914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12914
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Adam Holmberg
>Assignee: Adam Holmberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.10
>
>
> repro:
> Create a keyspace and a few user types.
> {code}
> use keyspace;
> desc types;
> {code}
> This is caused by a limitation in {{cmd.Cmd.columnize}} in that it doesn't 
> accept unicode, which our identifiers are. Ending stack trace:
> {noformat}
> "...bin/cqlsh.py", ... in describe_usertypes
> cmd.Cmd.columnize(self, protect_names(ksmeta.user_types.keys()))
>   File "/Users/adamholmberg/.pyenv/versions/2.7.8/lib/python2.7/cmd.py", line 
> 363, in columnize
> ", ".join(map(str, nonstrings)))
> TypeError: list[i] not a string for i in 0, 1, 2, 3, 4
> {noformat}
> This was previously obscured because the driver was incorrectly encoding 
> identifiers in {{protect_name}}, which caused other problems for cqlsh schema 
> generation. With that 
> [change|https://github.com/datastax/python-driver/commit/0959b5dc1662d6957f2951738d0a4f053ac78f66],
>  we now must encode identifiers in cqlsh to avoid blowing up {{columnize}}.
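The workaround described above amounts to byte-encoding the identifiers before handing them to {{cmd.Cmd.columnize}}, which under Python 2 only accepts plain {{str}} (byte string) items. A rough sketch with an illustrative helper name (not the actual cqlsh code):

```python
def encode_names(names, encoding="utf-8"):
    # cmd.Cmd.columnize (Python 2) rejects items that are not plain str
    # (i.e. byte strings); encode any unicode identifiers first.
    # Illustrative helper, not the actual cqlsh implementation.
    return [n if isinstance(n, bytes) else n.encode(encoding) for n in names]
```

Passing the result of {{encode_names(protect_names(...))}}-style preprocessing avoids the "list[i] not a string" TypeError.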





[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-11-29 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706047#comment-15706047
 ] 

Tyler Hobbs commented on CASSANDRA-8616:


I totally forgot about this, sorry about that.  The test results do look much 
better now.  If you can rebase again and do one final test run, I'm +1 on 
committing this if the test results still look good.

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.





[jira] [Updated] (CASSANDRA-12914) cqlsh describe fails with "list[i] not a string for i in..."

2016-11-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12914:

Reproduced In: 3.8, 3.0.9, 2.2.7
 Priority: Minor  (was: Major)
Fix Version/s: 3.x
   3.0.x
   2.2.x
  Component/s: Tools

> cqlsh describe fails with "list[i] not a string for i in..."
> 
>
> Key: CASSANDRA-12914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12914
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Adam Holmberg
>Assignee: Adam Holmberg
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> repro:
> Create a keyspace and a few user types.
> {code}
> use keyspace;
> desc types;
> {code}
> This is caused by a limitation in {{cmd.Cmd.columnize}} in that it doesn't 
> accept unicode, which our identifiers are. Ending stack trace:
> {noformat}
> "...bin/cqlsh.py", ... in describe_usertypes
> cmd.Cmd.columnize(self, protect_names(ksmeta.user_types.keys()))
>   File "/Users/adamholmberg/.pyenv/versions/2.7.8/lib/python2.7/cmd.py", line 
> 363, in columnize
> ", ".join(map(str, nonstrings)))
> TypeError: list[i] not a string for i in 0, 1, 2, 3, 4
> {noformat}
> This was previously obscured because the driver was incorrectly encoding 
> identifiers in {{protect_name}}, which caused other problems for cqlsh schema 
> generation. With that 
> [change|https://github.com/datastax/python-driver/commit/0959b5dc1662d6957f2951738d0a4f053ac78f66],
>  we now must encode identifiers in cqlsh to avoid blowing up {{columnize}}.





[jira] [Commented] (CASSANDRA-12914) cqlsh describe fails with "list[i] not a string for i in..."

2016-11-29 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705945#comment-15705945
 ] 

Tyler Hobbs commented on CASSANDRA-12914:
-

The patch looks good to me, thank you.  It looks like this also exists in 2.2 
and 3.0, but the patch backports cleanly.  I've started cqlsh test runs for the 
branches here:

* 
[2.2|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12914-2.2-cqlsh-cqlsh-tests/]
* 
[3.0|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12914-3.0-cqlsh-cqlsh-tests/]
* 
[3.X|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12914-3.X-cqlsh-cqlsh-tests/]
* 
[trunk|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12914-trunk-cqlsh-cqlsh-tests/]

> cqlsh describe fails with "list[i] not a string for i in..."
> 
>
> Key: CASSANDRA-12914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12914
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Adam Holmberg
>Assignee: Adam Holmberg
>
> repro:
> Create a keyspace and a few user types.
> {code}
> use keyspace;
> desc types;
> {code}
> This is caused by a limitation in {{cmd.Cmd.columnize}} in that it doesn't 
> accept unicode, which our identifiers are. Ending stack trace:
> {noformat}
> "...bin/cqlsh.py", ... in describe_usertypes
> cmd.Cmd.columnize(self, protect_names(ksmeta.user_types.keys()))
>   File "/Users/adamholmberg/.pyenv/versions/2.7.8/lib/python2.7/cmd.py", line 
> 363, in columnize
> ", ".join(map(str, nonstrings)))
> TypeError: list[i] not a string for i in 0, 1, 2, 3, 4
> {noformat}
> This was previously obscured because the driver was incorrectly encoding 
> identifiers in {{protect_name}}, which caused other problems for cqlsh schema 
> generation. With that 
> [change|https://github.com/datastax/python-driver/commit/0959b5dc1662d6957f2951738d0a4f053ac78f66],
>  we now must encode identifiers in cqlsh to avoid blowing up {{columnize}}.





[jira] [Commented] (CASSANDRA-12768) CQL often queries static columns unnecessarily

2016-11-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15687479#comment-15687479
 ] 

Tyler Hobbs commented on CASSANDRA-12768:
-

Sorry for the slow review, this fell off my radar.

Overall I think the patches will be fine to apply to both 3.0 and 3.x.  I just 
have a few questions and comments on the patch:

* Why doesn't {{returnStaticContentOnPartitionWithNoRows()}} handle secondary 
index queries specially, like the 3.0 patch does?  It looks like this was 
already the case elsewhere in 3.x, but I'd like to confirm the reason.
* We might as well rename {{isFetchAllRegulars}} to {{fetchAllRegulars}} in the 
3.x patch, since the latter feels more natural.
* Why doesn't the 3.0 patch include the change to {{ColumnFilter.newTester()}} 
that you have in 3.x?

Additionally, it looks like there's a static column related test error in the 
[3.0 
dtests|http://cassci.datastax.com/job/pcmanus-12768-3.0-dtest/lastCompletedBuild/testReport/paging_test/TestPagingData/static_columns_paging_test/].
  Can you look into that?

> CQL often queries static columns unnecessarily
> --
>
> Key: CASSANDRA-12768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12768
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
>
> While looking at CASSANDRA-12694 (which isn't directly related, but some of 
> the results in this ticket are explained by this), I realized that CQL was 
> always querying static columns even in cases where this is unnecessary.
> More precisely, for reasons long described elsewhere, we have to query all 
> the columns for a row (we have optimizations, see CASSANDRA-10657, but they 
> don't change that general fact) to be able to distinguish the case where a 
> row doesn't exist from the one where it exists but has no values for the 
> columns selected by the query. *However*, this really only extends to 
> "regular" columns (static columns play no role in deciding whether a 
> particular row exists or not), but the implementation in 3.x, which is in 
> {{ColumnFilter}}, still always queries all static columns.
> We shouldn't do that, and it's arguably a performance regression from 2.x, 
> which is why I'm tentatively marking this as a bug for the 3.0 line. It's a 
> tiny bit scary for 3.0, though, so I'm really asking for other opinions and 
> would be happy to stick to 3.x.





[jira] [Commented] (CASSANDRA-12916) Broken UDT mutations loading from CommitLog

2016-11-18 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677829#comment-15677829
 ] 

Tyler Hobbs commented on CASSANDRA-12916:
-

The patch (with the Thrift followup fix) looks correct to me.

> Broken UDT mutations loading from CommitLog
> 
>
> Key: CASSANDRA-12916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.9
>Reporter: Sergey Dobrodey
>Assignee: Sam Tunnicliffe
>Priority: Critical
>  Labels: patch
> Fix For: 3.x
>
> Attachments: patch.diff, udt.cql
>
>
> UDT mutations seem to be broken. A simple example is attached. After 
> following its steps, restart Cassandra; during commit log replay it will 
> fail with this error:
> ERROR 09:34:46 Exiting due to error while processing commit log during 
> initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException:
>  Unexpected error deserializing mutation; saved to 
> /tmp/mutation6087238241614604390dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: org.apache.cassandra.serializers.MarshalException: Not 
> enough bytes to read 0th field data
> I resolved this problem; my patch is attached.
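The failure mode reported above follows from the serialization format: a frozen UDT value is each field's length-prefixed value concatenated in schema order, so deserializing a blob written under an older schema while expecting more fields runs out of bytes. An illustrative sketch of that decoding, not Cassandra's actual code:

```python
import struct

def split_udt_fields(blob, n_fields):
    # Sketch of frozen-UDT layout: per field, a 4-byte big-endian signed
    # length, then the payload; a -1 length means null. Raises when the blob
    # holds fewer fields than the current schema expects -- the
    # "Not enough bytes to read 0th field data" situation.
    fields, off = [], 0
    for i in range(n_fields):
        if off + 4 > len(blob):
            raise ValueError("Not enough bytes to read %dth field data" % i)
        (length,) = struct.unpack_from(">i", blob, off)
        off += 4
        if length < 0:
            fields.append(None)
            continue
        fields.append(blob[off:off + length])
        off += length
    return fields
```

For example, a two-field blob parses fine with {{n_fields=2}} but raises when a newer schema asks for a third field.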





[jira] [Commented] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15675019#comment-15675019
 ] 

Tyler Hobbs commented on CASSANDRA-10145:
-

The new patch looks good, so I've started a CI test run:

||branch||testall||dtest||
|[CASSANDRA-10145-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-10145-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10145-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10145-trunk-dtest]|

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
> Fix For: 3.x
>
> Attachments: 10145-trunk.txt
>
>
> Currently the keyspace is either embedded in the query string or set through 
> "use keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and keyspace 
> independently. In order for that scenario to work, they would have to create 
> one client session per keyspace or resort to some string-replace hackery.
> It would be nice if the protocol allowed sending the keyspace separately 
> from the query. 





[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10145:

Status: Awaiting Feedback  (was: Open)

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
> Fix For: 3.x
>
> Attachments: 10145-trunk.txt
>
>
> Currently the keyspace is either embedded in the query string or set through 
> "use keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and keyspace 
> independently. In order for that scenario to work, they would have to create 
> one client session per keyspace or resort to some string-replace hackery.
> It would be nice if the protocol allowed sending the keyspace separately 
> from the query. 





[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10145:

Status: Open  (was: Patch Available)

> Change protocol to allow sending key space independent of query string
> --
>
> Key: CASSANDRA-10145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10145
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vishy Kasar
>Assignee: Sandeep Tamhankar
> Fix For: 3.x
>
> Attachments: 10145-trunk.txt
>
>
> Currently the keyspace is either embedded in the query string or set through 
> "use keyspace" on a connection by the client driver. 
> There are practical use cases where the client has the query and keyspace 
> independently. In order for that scenario to work, they would have to create 
> one client session per keyspace or resort to some string-replace hackery.
> It would be nice if the protocol allowed sending the keyspace separately 
> from the query. 





[jira] [Commented] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668258#comment-15668258
 ] 

Tyler Hobbs commented on CASSANDRA-10145:
-

Thanks for the patch, [~stamhankar999].  Overall I think it's pretty good; I 
just have a couple of review comments:

* As you commented in {{native_protocol_v5.spec}}, we can go with a {{}} 
field after the query string in {{PREPARE}} messages.  That would be more in 
line with how we handle optional fields in other messages.
* We should probably rename {{ClientState.withKeyspace()}} to 
{{maybeOverrideWithKeyspaceFromOptions()}}, and update the javadoc to clarify 
the behavior when null is passed in.
* I'm not sure that I fully follow your comment in {{CFStatement}}.  If you 
think you can make a clear improvement around that behavior, would you mind 
making a separate patch with those changes for me to take a look at?

Besides that, I think everything else looks good so far.  Once you've made the 
above changes, I'll set up CI test runs.
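For illustration, one way an optional trailing field after the query string can be framed (a hedged sketch only, not the actual native-protocol v5 wire format, and `OptionalFieldDemo` is a hypothetical name) is to let the decoder treat "no bytes remaining" as "field absent":

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OptionalFieldDemo {
    // Encode a string as [short length][UTF-8 bytes], like the protocol's [string].
    static void writeString(ByteBuffer buf, String s) {
        byte[] b = s.getBytes(StandardCharsets.UTF_8);
        buf.putShort((short) b.length).put(b);
    }

    static String readString(ByteBuffer buf) {
        byte[] b = new byte[buf.getShort()];
        buf.get(b);
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(128);
        writeString(buf, "SELECT * FROM t");
        writeString(buf, "ks1");   // optional trailing keyspace field
        buf.flip();

        String query = readString(buf);
        // An absent trailing field simply leaves no bytes to read.
        String keyspace = buf.hasRemaining() ? readString(buf) : null;
        System.out.println(query + " | " + keyspace);  // prints "SELECT * FROM t | ks1"
    }
}
```

An old decoder that stops after the query string ignores the extra bytes, which is why a trailing optional field is a backwards-friendly place to put it.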






[jira] [Updated] (CASSANDRA-10145) Change protocol to allow sending key space independent of query string

2016-11-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10145:

Reviewer: Tyler Hobbs






[jira] [Commented] (CASSANDRA-12531) dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3

2016-11-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668071#comment-15668071
 ] 

Tyler Hobbs commented on CASSANDRA-12531:
-

+1 on the analysis and dtest PR fix.

> dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3
> --
>
> Key: CASSANDRA-12531
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12531
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Sam Tunnicliffe
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v3
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v4
> {code}
> Error Message
> ReadTimeout not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-swJYMH
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 90, in 
> test_tombstone_failure_v3
> self._perform_cql_statement(session, "SELECT value FROM tombstonefailure")
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 63, in 
> _perform_cql_statement
> session.execute(statement)
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> "ReadTimeout not raised\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-swJYMH\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {code}





[jira] [Updated] (CASSANDRA-12531) dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3

2016-11-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12531:

Status: Ready to Commit  (was: Patch Available)






[jira] [Commented] (CASSANDRA-8502) Static columns returning null for pages after first

2016-11-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668005#comment-15668005
 ] 

Tyler Hobbs commented on CASSANDRA-8502:


It looks like I made a mistake and only committed the follow up patch to 2.1+, 
when it should have been applied to 2.0+.  Since 2.0 has been EOL for a long 
time and won't have any more releases, I won't commit a fix to 2.0 at this 
time.  For anybody that needs it, you should be able to pretty easily backport 
{{f1662b1479c64213c06ac921631f7e1186619698}} to the {{cassandra-2.0}} branch.

> Static columns returning null for pages after first
> ---
>
> Key: CASSANDRA-8502
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8502
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Flavien Charlon
>Assignee: Tyler Hobbs
> Fix For: 2.0.16, 2.1.6, 2.2.0 rc1
>
> Attachments: 8502-2.0-v2.txt, 8502-2.0.txt, 8502-2.1-v2.txt, 
> null-static-column.txt
>
>
> When paging is used for a query containing a static column, the first page 
> contains the right value for the static column, but subsequent pages have 
> null for the static column instead of the expected value.
> Repro steps:
> - Create a table with a static column
> - Create a partition with 500 cells
> - Using cqlsh, query that partition
> Actual result:
> - You will see that first, the static column appears as expected, but if you 
> press a key after "---MORE---", the static columns will appear as null.
> See the attached file for a repro of the output.
> I am using a single node cluster.





[jira] [Commented] (CASSANDRA-7254) NPE on startup if another Cassandra instance is already running

2016-11-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667988#comment-15667988
 ] 

Tyler Hobbs commented on CASSANDRA-7254:


If you're using a version of Cassandra that includes CASSANDRA-10091, you can 
safely remove this check and still get a good error message (as confirmed in 
CASSANDRA-12074).  Otherwise, this check is purely for cosmetic reasons, and 
you can remove it if it's causing problems.

> NPE on startup if another Cassandra instance is already running
> ---
>
> Key: CASSANDRA-7254
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7254
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tyler Hobbs
>Assignee: Brandon Williams
>Priority: Minor
> Fix For: 2.0.10, 2.1 rc3
>
> Attachments: 7254.txt
>
>
> After CASSANDRA-7087, if you try to start cassandra while another instance is 
> already running, you'll see something like this:
> {noformat}
> $ bin/cassandra -f
> Error: Exception thrown by the agent : java.lang.NullPointerException
> {noformat}
> This is probably a JVM bug, but we should confirm that, open a JVM ticket, 
> and see if we can give a more useful error message on the C* side.





[jira] [Commented] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-10-28 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15616518#comment-15616518
 ] 

Tyler Hobbs commented on CASSANDRA-12689:
-

The new test results look good, so +1, committed to 3.0 as 
{{d38a732ce15caab57ce6dddb3c0d6a436506db29}} and merged up to 3.X and trunk.  
Thanks!

> All MutationStage threads blocked, kills server
> ---
>
> Key: CASSANDRA-12689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12689
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Assignee: Benjamin Roth
>Priority: Critical
> Fix For: 3.0.10, 3.10
>
>
> Under heavy load (e.g. due to repair during normal operations), a lot of 
> NullPointerExceptions occur in MutationStage. Unfortunately, the log is not 
> very chatty, trace is missing:
> {noformat}
> 2016-09-22T06:29:47+00:00 cas6 [MutationStage-1] 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService Uncaught 
> exception on thread Thread[MutationStage-1,5,main]: {}
> 2016-09-22T06:29:47+00:00 cas6 #011java.lang.NullPointerException: null
> {noformat}
> Then, after some time, in most cases ALL threads in the MutationStage pools are 
> completely blocked. This leads to pending tasks piling up until the server runs 
> OOM and becomes completely unresponsive due to GC. Threads will NEVER unblock 
> until the server is restarted, even when load goes completely down, all hints 
> are paused, and no compaction or repair is running. Only a restart helps.
> I can understand that pending tasks in MutationStage may pile up under heavy 
> load, but tasks should be processed and dequeued after load goes down. This is 
> definitely not the case. This looks more like an unhandled exception 
> leading to a stuck lock.
> Stack trace from jconsole, all Threads in MutationStage show same trace.
> {noformat}
> Name: MutationStage-48
> State: WAITING on java.util.concurrent.CompletableFuture$Signaller@fcc8266
> Total blocked: 137  Total waited: 138.513
> {noformat}
> Stack trace: 
> {noformat}
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:137)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> org.apache.cassandra.hints.Hint.apply(Hint.java:96)
> org.apache.cassandra.hints.HintVerbHandler.doVerb(HintVerbHandler.java:91)
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109)
> java.lang.Thread.run(Thread.java:745)
> {noformat}
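The trace above shows each mutation thread parked inside an unbounded CompletableFuture.get(). As a minimal sketch, unrelated to Cassandra's actual internals, of why an unbounded get() on a never-completed future hangs the caller forever, while a bounded get() surfaces the stall as a TimeoutException instead:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BlockedFutureDemo {
    public static void main(String[] args) throws Exception {
        // A future that nothing ever completes, standing in for whatever
        // future the blocked MutationStage threads are waiting on.
        CompletableFuture<Void> never = new CompletableFuture<>();

        // never.get() with no timeout would park this thread permanently,
        // exactly like the WAITING threads in the jconsole trace.
        try {
            never.get(100, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("still waiting after 100ms");
        }
    }
}
```

This is why a future that is never completed (e.g. because the exception that should have completed it was swallowed) wedges every thread that joins on it until restart.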





[jira] [Resolved] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-10-28 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-12689.
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.10
   3.0.10






[jira] [Commented] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-10-27 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613256#comment-15613256
 ] 

Tyler Hobbs commented on CASSANDRA-12689:
-

Okay, I believe I fixed the problem, and I've restarted the 3.0 tests.

> All MutationStage threads blocked, kills server
> ---
>
> Key: CASSANDRA-12689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12689
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Roth
>Assignee: Benjamin Roth
>Priority: Critical
> Fix For: 3.0.x, 3.x
>
>
> Under heavy load (e.g. due to repair during normal operations), a lot of 
> NullPointerExceptions occur in MutationStage. Unfortunately, the log is not 
> very chatty, trace is missing:
> {noformat}
> 2016-09-22T06:29:47+00:00 cas6 [MutationStage-1] 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService Uncaught 
> exception on thread Thread[MutationStage-1,5,main]: {}
> 2016-09-22T06:29:47+00:00 cas6 #011java.lang.NullPointerException: null
> {noformat}
> Then, after some time, in most cases ALL threads in MutationStage pools are 
> completely blocked. This leads to piling up pending tasks until server runs 
> OOM and is completely unresponsive due to GC. Threads will NEVER unblock 
> until server restart. Even if load goes completely down, all hints are 
> paused, and no compaction or repair is running. Only restart helps.
> I can understand that pending tasks in MutationStage may pile up under heavy 
> load, but tasks should be processed and dequeud after load goes down. This is 
> definitively not the case. This looks more like a an unhandled exception 
> leading to a stuck lock.
> Stack trace from jconsole, all Threads in MutationStage show same trace.
> {noformat}
> Name: MutationStage-48
> State: WAITING on java.util.concurrent.CompletableFuture$Signaller@fcc8266
> Total blocked: 137  Total waited: 138.513
> {noformat}
> Stack trace: 
> {noformat}
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:137)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> org.apache.cassandra.hints.Hint.apply(Hint.java:96)
> org.apache.cassandra.hints.HintVerbHandler.doVerb(HintVerbHandler.java:91)
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109)
> java.lang.Thread.run(Thread.java:745)
> {noformat}





[jira] [Commented] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-10-27 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613199#comment-15613199
 ] 

Tyler Hobbs commented on CASSANDRA-12689:
-

It looks like there are some problems with the existing dtests in my 3.0 
backport, I'll need to spend a bit of time fixing those.






[jira] [Updated] (CASSANDRA-12689) All MutationStage threads blocked, kills server

2016-10-27 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12689:

Status: In Progress  (was: Awaiting Feedback)






[jira] [Commented] (CASSANDRA-12443) Remove alter type support

2016-10-26 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609103#comment-15609103
 ] 

Tyler Hobbs commented on CASSANDRA-12443:
-

bq. I think we should add the change to the 3.4.3 version for 3.X and trunk and 
do nothing for the 3.0 branch. Tyler Hobbs Do you have another suggestion?

Unfortunately, I do not.  I suppose that's what we need to do.

> Remove alter type support
> -
>
> Key: CASSANDRA-12443
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12443
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
>
> Currently, we allow altering of types. However, because we no longer store 
> the length for all types, switching from a fixed-width to a 
> variable-width type causes issues: commitlog playback breaking startup, 
> queries currently in flight getting back bad results, and special casing 
> required to handle the changes. In addition, this would solve 
> CASSANDRA-10309, as there is no possibility of the types changing while an 
> SSTableReader is open.
> For fixed-length, compatible types, the alter also doesn't add much over a 
> cast, so users could use that in order to retrieve the altered type.





[jira] [Updated] (CASSANDRA-11415) dtest failure in jmx_test.TestJMX.cfhistograms_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11415:

Component/s: Testing

> dtest failure in jmx_test.TestJMX.cfhistograms_test
> ---
>
> Key: CASSANDRA-11415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11415
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Yuki Morishita
>  Labels: dtest
> Fix For: 2.1.14
>
>
> We are seeing the following stacktrace when running nodetool cfhistograms
> {code}
> java.lang.AssertionError
>   at 
> org.apache.cassandra.utils.EstimatedHistogram.(EstimatedHistogram.java:66)
>   at 
> org.apache.cassandra.tools.NodeProbe.metricPercentilesAsArray(NodeProbe.java:1260)
>   at 
> org.apache.cassandra.tools.NodeTool$CfHistograms.execute(NodeTool.java:1109)
>   at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:292)
>   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:206)
> {code}
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/437/testReport/jmx_test/TestJMX/cfhistograms_test
> Failed on CassCI build cassandra-2.1_dtest #437
> This doesn't appear to be flaky. I can repro locally. It seems like a product 
> issue, but if someone could confirm whether it happens outside of the test, 
> that would be great.





[jira] [Updated] (CASSANDRA-11048) JSON queries are not thread safe

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11048:

Component/s: Coordination

> JSON queries are not thread safe
> 
>
> Key: CASSANDRA-11048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11048
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sergio Bossa
>Assignee: Tyler Hobbs
>Priority: Critical
>  Labels: easyfix, newbie, patch
> Fix For: 2.2.6, 3.0.4, 3.4
>
> Attachments: 
> 0001-Fix-thread-unsafe-usage-of-JsonStringEncoder-see-CAS.patch
>
>
> {{org.apache.cassandra.cql3.Json}} uses a shared instance of 
> {{JsonStringEncoder}}, which is not thread safe (see 1); 
> {{JsonStringEncoder#getInstance()}} should be used instead (see 2).
> As a consequence, concurrent {{select JSON}} queries often produce wrong 
> (sometimes unreadable) results.
> 1. 
> http://grepcode.com/file/repo1.maven.org/maven2/org.codehaus.jackson/jackson-core-asl/1.9.2/org/codehaus/jackson/io/JsonStringEncoder.java
> 2. 
> http://grepcode.com/file/repo1.maven.org/maven2/org.codehaus.jackson/jackson-core-asl/1.9.2/org/codehaus/jackson/io/JsonStringEncoder.java#JsonStringEncoder.getInstance%28%29
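As a plain-Python illustration (not Cassandra code; all names here are made up for the sketch), this is the pattern the fix relies on: a single encoder with internal mutable state is unsafe to share across threads, while a `getInstance()`-style accessor backed by thread-local storage hands each thread its own instance.

```python
import threading

class Encoder:
    """A toy encoder with internal mutable state, like JsonStringEncoder."""
    def __init__(self):
        self._buf = []

    def quote(self, text):
        # Shared mutable buffer: concurrent callers would interleave writes.
        self._buf.clear()
        self._buf.extend(['"', text, '"'])
        return ''.join(self._buf)

# Unsafe pattern described in the ticket: one instance shared by every thread.
shared_encoder = Encoder()

# Safe pattern: the getInstance() equivalent, one encoder per thread.
_local = threading.local()

def get_instance():
    if not hasattr(_local, 'encoder'):
        _local.encoder = Encoder()
    return _local.encoder
```

Concurrent callers of `get_instance().quote(...)` never touch each other's buffer, which is why the patch swaps the shared field for `JsonStringEncoder#getInstance()`.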





[jira] [Updated] (CASSANDRA-10609) MV performance regression

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10609:

Component/s: Local Write-Read Paths

> MV performance regression
> -
>
> Key: CASSANDRA-10609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10609
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: EC2
>Reporter: Alan Boudreault
>Assignee: Tyler Hobbs
>Priority: Critical
>
> I've noticed a significant MV performance regression introduced in 3.0.0-rc1 
> by CASSANDRA-9664.
> * I'm using mvbench to test with RF=3
> * I confirm it's not a driver issue.
> {code}
> EC2 RF=3 (i2.2xlarge, also tried on i2.4xlarge)
> mvn exec:java -Dexec.args="--num-users 10 --num-songs 100 
> --num-artists 1 -n 50 --endpoint node1"
> 3.0.0-beta2 (alpha2 java driver)
> ---
> total
>  count = 461601
>  mean rate = 1923.21 calls/second
>  1-minute rate = 1937.82 calls/second
>  5-minute rate = 1424.09 calls/second
> 15-minute rate = 1058.28 calls/second
>min = 1.90 milliseconds
>max = 3707.76 milliseconds
>   mean = 516.42 milliseconds
> stddev = 457.41 milliseconds
> median = 390.07 milliseconds
>   75% <= 775.95 milliseconds
>   95% <= 1417.67 milliseconds
>   98% <= 1728.05 milliseconds
>   99% <= 1954.55 milliseconds
> 99.9% <= 2566.91 milliseconds
> 3.0.0-rc1 (alpha3 java driver)
> -
> total
>  count = 310373
>  mean rate = 272.25 calls/second
>  1-minute rate = 0.00 calls/second
>  5-minute rate = 45.47 calls/second
> 15-minute rate = 295.94 calls/second
>min = 1.05 milliseconds
>max = 10468.98 milliseconds
>   mean = 492.99 milliseconds
> stddev = 510.42 milliseconds
> median = 281.02 milliseconds
>   75% <= 696.25 milliseconds
>   95% <= 1434.45 milliseconds
>   98% <= 1820.33 milliseconds
>   99% <= 2080.37 milliseconds
> 99.9% <= 4362.08 milliseconds
> {code}





[jira] [Updated] (CASSANDRA-9664) Allow MV's select statements to be more complex

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9664:
---
Component/s: CQL

> Allow MV's select statements to be more complex
> ---
>
> Key: CASSANDRA-9664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9664
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Carl Yeksigian
>Assignee: Tyler Hobbs
>  Labels: client-impacting, doc-impacting
> Fix For: 3.0.0 rc1
>
>
> [Materialized Views|https://issues.apache.org/jira/browse/CASSANDRA-6477] add 
> support for a syntax which includes a {{SELECT}} statement, but only allows 
> selection of direct columns, and does not allow any filtering to take place.
> We should add support to the MV {{SELECT}} statement to bring better parity 
> with the normal CQL {{SELECT}} statement, specifically simple functions in 
> the selected columns, as well as specifying a {{WHERE}} clause.





[jira] [Updated] (CASSANDRA-11609) Nested UDTs cause error when migrating 2.x schema to trunk

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11609:

Component/s: Distributed Metadata

> Nested UDTs cause error when migrating 2.x schema to trunk
> --
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.6
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Updated] (CASSANDRA-12315) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_client_warnings

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12315:

Component/s: Testing

> dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_client_warnings
> ---
>
> Key: CASSANDRA-12315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12315
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Tyler Hobbs
>  Labels: cqlsh, dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1317/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_client_warnings
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 1424, in test_client_warnings
> self.assertEqual(len(stderr), 0, "Failed to execute cqlsh: 
> {}".format(stderr))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> 'Failed to execute cqlsh: :3:OperationTimedOut: errors={\'127.0.0.1\': 
> \'Client request timeout. See Session.execute[_async](timeout)\'}, 
> last_host=127.0.0.1\n:5:InvalidRequest: Error from server: code=2200 
> [Invalid query] message="Keyspace \'client_warnings\' does not 
> exist"\n:7:InvalidRequest: Error from server: code=2200 [Invalid 
> query] message="No keyspace has been specified. USE a keyspace, or explicitly 
> specify keyspace.tablename"
> {code}





[jira] [Updated] (CASSANDRA-11560) dtest failure in user_types_test.TestUserTypes.udt_subfield_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11560:

Component/s: Testing

> dtest failure in user_types_test.TestUserTypes.udt_subfield_test
> 
>
> Key: CASSANDRA-11560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11560
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Tyler Hobbs
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1125/testReport/user_types_test/TestUserTypes/udt_subfield_test
> Failed on CassCI build trunk_dtest #1125
> Appears to be a test problem:
> {noformat}
> Error Message
> 'NoneType' object is not iterable
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-Kzg9Sk
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 253, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/user_types_test.py", line 767, in 
> udt_subfield_test
> self.assertEqual(listify(rows[0]), [[None]])
>   File "/home/automaton/cassandra-dtest/user_types_test.py", line 25, in 
> listify
> for i in item:
> "'NoneType' object is not iterable\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-Kzg9Sk\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {noformat}
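The failure is reproducible in isolation: the `listify` helper iterates whatever it is given, and a null UDT subfield comes back as `None` (minimal sketch, not the actual dtest code):

```python
row = None   # the UDT subfield read back as null, not as a row tuple

try:
    list(row)            # the helper's "for i in item" effectively does this
    message = None
except TypeError as exc:
    message = str(exc)

print(message)  # -> 'NoneType' object is not iterable
```

Guarding the helper with an `if item is None` check (or asserting on the raw value) would make the test assert on the data instead of crashing.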





[jira] [Updated] (CASSANDRA-11473) Clustering column value is zeroed out in some query results

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11473:

Component/s: Local Write-Read Paths

> Clustering column value is zeroed out in some query results
> ---
>
> Key: CASSANDRA-11473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11473
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: debian jessie patch current with Cassandra 3.0.4
>Reporter: Jason Kania
>Assignee: Tyler Hobbs
>
> As per a discussion on the mailing list, 
> http://www.mail-archive.com/user@cassandra.apache.org/msg46902.html, we are 
> encountering inconsistent query results when the following query is run:
> {noformat}
> select "subscriberId","sensorUnitId","sensorId","time" from 
> "sensorReadingIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND 
> "sensorId"=0 ORDER BY "time" LIMIT 10;
> {noformat}
> Invalid Query Results
> {noformat}
> subscriberId  sensorUnitId  sensorId  time
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         1969-12-31 19:00
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:10
> JASKAN        0             0         2016-01-21 2:11
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> JASKAN        0             0         2016-01-21 2:22
> {noformat}
> Valid Query Results
> {noformat}
> subscriberId  sensorUnitId  sensorId  time
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         2015-05-24 2:09
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:10
> JASKAN        0             0         2015-05-24 2:11
> JASKAN        0             0         2015-05-24 2:13
> JASKAN        0             0         2015-05-24 2:13
> JASKAN        0             0         2015-05-24 2:14
> {noformat}
> Running the following yields no rows, indicating that the 1969... timestamp is 
> invalid.
> {noformat}
> select "subscriberId","sensorUnitId","sensorId","time" FROM 
> "edgeTransitionIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND 
> "sensorId"=0 and time='1969-12-31 19:00:00-0500';
> {noformat}
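A quick sanity check (plain Python, not Cassandra code) confirms that the suspicious 1969-12-31 19:00 value is exactly the Unix epoch rendered at UTC-5, i.e. what an all-zero timestamp decodes to, which supports the "zeroed out" theory:

```python
from datetime import datetime, timedelta, timezone

# A timestamp column zeroed to all-zero bytes decodes as epoch millisecond 0.
est = timezone(timedelta(hours=-5))  # the reporter's apparent local offset
zeroed = datetime.fromtimestamp(0, tz=est)

print(zeroed.strftime('%Y-%m-%d %H:%M'))  # -> 1969-12-31 19:00
```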
> The schema is as follows:
> {noformat}
> CREATE TABLE sensorReading."sensorReadingIndex" (
> "subscriberId" text,
> "sensorUnitId" int,
> "sensorId" int,
> time timestamp,
> "classId" int,
> correlation float,
> PRIMARY KEY (("subscriberId", "sensorUnitId", "sensorId"), time)
> ) WITH CLUSTERING ORDER BY (time ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE INDEX classSecondaryIndex ON sensorReading."sensorReadingIndex" 
> ("classId");
> {noformat}
> We were asked to provide our sstables as well, but these are very large and 
> would require some data obfuscation. We are able to run code or scripts 
> against the data on our servers if that is an option.





[jira] [Updated] (CASSANDRA-11570) Concurrent execution of prepared statement returns invalid JSON as result

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11570:

Component/s: Local Write-Read Paths

> Concurrent execution of prepared statement returns invalid JSON as result
> -
>
> Key: CASSANDRA-11570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11570
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.2, C++ or C# driver
>Reporter: Alexander Ryabets
>Assignee: Tyler Hobbs
> Attachments: CassandraPreparedStatementsTest.zip, broken_output.txt, 
> test_neptunao.cql, valid_output.txt
>
>
> When I use a prepared statement for async execution of multiple statements, I 
> get JSON with broken data: the keys are totally corrupted even though the 
> values appear normal.
> I first encountered this issue while stress testing our project with a custom 
> script. We use the DataStax C++ driver and execute statements from different 
> fibers.
> To isolate the problem, I wrote a simple C# program that starts multiple Tasks 
> in a loop. Each task uses a single prepared statement, created once, to read 
> data from the database. As you can see, the results are a total mess.
> I've attached an archive with a console C# project (one .cs file) that just 
> prints the resulting JSON. 
> Here is the main part of the C# code.
> {noformat}
> static void Main(string[] args)
> {
>   const int task_count = 300;
>   using(var cluster = Cluster.Builder().AddContactPoints(/*contact points 
> here*/).Build())
>   {
> using(var session = cluster.Connect())
> {
>   var prepared = session.Prepare("select json * from test_neptunao.ubuntu 
> where id=?");
>   var tasks = new Task[task_count];
>   for(int i = 0; i < task_count; i++)
>   {
> tasks[i] = Query(prepared, session);
>   }
>   Task.WaitAll(tasks);
> }
>   }
>   Console.ReadKey();
> }
> private static Task Query(PreparedStatement prepared, ISession session)
> {
>   string id = GetIdOfRandomRow();
>   var stmt = prepared.Bind(id);
>   stmt.SetConsistencyLevel(ConsistencyLevel.One);
>   return session.ExecuteAsync(stmt).ContinueWith(tr =>
>   {
> foreach(var row in tr.Result)
> {
>   var value = row.GetValue<string>(0);
>   //some kind of output
> }
>   });
> }
> {noformat}
> I also attached cql script with test DB schema.
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS test_neptunao
> WITH replication = {
>   'class' : 'SimpleStrategy',
>   'replication_factor' : 3
> };
> use test_neptunao;
> create table if not exists ubuntu (
>   id timeuuid PRIMARY KEY,
>   precise_pangolin text,
>   trusty_tahr text,
>   wily_werewolf text, 
>   vivid_vervet text,
>   saucy_salamander text,
>   lucid_lynx text
> );
> {noformat}





[jira] [Updated] (CASSANDRA-11760) dtest failure in TestCQLNodes3RF3_Upgrade_current_2_2_x_To_next_3_x.more_user_types_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11760:

Component/s: Local Write-Read Paths

> dtest failure in 
> TestCQLNodes3RF3_Upgrade_current_2_2_x_To_next_3_x.more_user_types_test
> 
>
> Key: CASSANDRA-11760
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11760
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.6
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log
>
>
> example failure:
> http://cassci.datastax.com/view/Parameterized/job/upgrade_tests-all-custom_branch_runs/12/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_next_3_x/user_types_test/
> I've attached the logs. The test upgrades from 2.2.5 to 3.6. The relevant 
> failure stack trace is extracted here:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-05-11 17:08:31,33
> 4 CassandraDaemon.java:185 - Exception in thread Thread[MessagingSe
> rvice-Incoming-/127.0.0.1,5,main]
> java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:99)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:366)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$5.deserialize(AbstractCellNameType.java:117)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.composites.AbstractCellNameType$5.deserialize(AbstractCellNameType.java:109)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:106)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:101)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:109)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:322)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:302)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:330)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:272)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) 
> ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:200)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:177)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:91)
>  ~[apache-cassandra-2.2.6.jar:2.2.6]
> {code}





[jira] [Updated] (CASSANDRA-11613) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11613:

Component/s: Local Write-Read Paths

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test
> --
>
> Key: CASSANDRA-11613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11613
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.6
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/8/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk/more_user_types_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #8





[jira] [Updated] (CASSANDRA-12192) Retry all internode messages once after reopening connections

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12192:

Component/s: Core

> Retry all internode messages once after reopening connections
> -
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sean McCarthy
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.10
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/





[jira] [Updated] (CASSANDRA-11799) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_syntax_error

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11799:

Component/s: Testing

> dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_syntax_error
> 
>
> Key: CASSANDRA-11799
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11799
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: cqlsh, dtest
> Fix For: 2.2.7, 3.0.7, 3.6
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/703/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_syntax_error
> Failed on CassCI build cassandra-3.0_dtest #703
> Also failing is 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> The relevant failure is
> {code}
> 'ascii' codec can't encode character u'\xe4' in position 12: ordinal not in 
> range(128)
> {code}
> These are failing on 2.2, 3.0 and trunk.
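The error class is easy to reproduce in plain Python: u'\xe4' is 'ä', and encoding it with the ascii codec always fails with exactly this reason (in Python 2, formatting a unicode message into a byte string triggered the encode implicitly, which is why cqlsh crashed instead of printing the server error):

```python
msg = u'Bad Request: \xe4'   # u'\xe4' is 'ä', the character from the traceback

try:
    msg.encode('ascii')      # explicit here; implicit in Python 2 str formatting
    reason = None
except UnicodeEncodeError as exc:
    reason = exc.reason

print(reason)  # -> ordinal not in range(128)
```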





[jira] [Updated] (CASSANDRA-12249) dtest failure in upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12249:

Component/s: Coordination

> dtest failure in 
> upgrade_tests.paging_test.TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.basic_paging_test
> ---
>
> Key: CASSANDRA-12249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12249
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.paging_test/TestPagingDataNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/basic_paging_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> This is on a mixed version cluster, one node is 3.0.8 and the other is 
> 3.8-tentative.
> Stack trace looks like:
> {code}
> ERROR [MessagingService-Incoming-/127.0.0.1] 2016-07-20 04:51:02,836 
> CassandraDaemon.java:201 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:1042)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyPagedRangeCommandSerializer.deserialize(ReadCommand.java:964)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> {code}
> This trace is from the 3.0.8 node.





[jira] [Updated] (CASSANDRA-12070) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_data_validation_on_read_template

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12070:

Component/s: Tools

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_data_validation_on_read_template
> -
>
> Key: CASSANDRA-12070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12070
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Sean McCarthy
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.0.8, 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/262/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_data_validation_on_read_template
> Failed on CassCI build trunk_offheap_dtest #262
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 1608, in test_data_validation_on_read_template
> self.assertFalse(err)
>   File "/usr/lib/python2.7/unittest/case.py", line 416, in assertFalse
> raise self.failureException(msg)
> '\'Process ImportProcess-3:\\nTraceback (most recent call last):\\n  File 
> "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap\\n   
>  self.run()\\n  File 
> "/home/automaton/cassandra/bin/../pylib/cqlshlib/copyutil.py", line 2205, in 
> run\\nself.report_error(exc)\\nTypeError: report_error() takes at least 3 
> arguments (2 given)\\n\' is not false
> {code}
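The {{TypeError}} in that traceback is a plain argument-arity mismatch: the error handler is invoked with fewer arguments than its signature requires. A hypothetical simplification (names and signature are illustrative, not the real copyutil.py code):

```python
class ImportProcess:
    # Hypothetical: report_error requires the error AND a second argument,
    # mirroring the "takes at least 3 arguments (2 given)" mismatch above.
    def report_error(self, err, reader):
        return 'error from %s: %s' % (reader, err)

caught = None
try:
    ImportProcess().report_error(ValueError('boom'))  # second argument missing
except TypeError as exc:
    caught = exc

print(type(caught).__name__)  # -> TypeError
```

So the failure in the test is the error-reporting path itself crashing, masking whatever the original import error was.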





[jira] [Updated] (CASSANDRA-12605) Timestamp-order searching of sstables does not handle non-frozen UDTs, frozen collections correctly

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12605:

Component/s: Local Write-Read Paths

> Timestamp-order searching of sstables does not handle non-frozen UDTs, frozen 
> collections correctly
> ---
>
> Key: CASSANDRA-12605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12605
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 3.0.10, 3.10
>
>
> {{SinglePartitionReadCommand.queryNeitherCountersNorCollections()}} is used 
> to determine whether we can search sstables in timestamp order.  We cannot 
> use this optimization when there are multicell values (such as unfrozen 
> collections or UDTs).  However, this method only checks 
> {{column.type.isCollection() || column.type.isCounter()}}.  Instead, it 
> should check {{column.type.isMulticell() || column.type.isCounter()}}.
> This has two implications:
> * We are using timestamp-order searching when querying non-frozen UDTs, which 
> can lead to incorrect/stale results being returned.
> * We are not taking advantage of this optimization when querying frozen 
> collections.
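Both implications follow from the truth table of the two predicates; a small illustrative model (the type flags are examples, not Cassandra's actual type system):

```python
from collections import namedtuple

Type = namedtuple('Type', 'name is_collection is_multicell is_counter')

types = [
    Type('frozen<list<int>>', True,  False, False),  # frozen collection
    Type('list<int>',         True,  True,  False),  # non-frozen collection
    Type('frozen<udt>',       False, False, False),  # frozen UDT
    Type('udt',               False, True,  False),  # non-frozen UDT
]

def old_allows_timestamp_order(t):
    # Buggy check: keys off "is a collection", not "has multiple cells".
    return not (t.is_collection or t.is_counter)

def new_allows_timestamp_order(t):
    # Corrected check: multicell values disqualify timestamp-order search.
    return not (t.is_multicell or t.is_counter)

# frozen<list<int>>: old check disallows (missed optimization), new allows.
# udt (non-frozen):  old check allows (stale results possible), new disallows.
```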





[jira] [Updated] (CASSANDRA-12123) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x.cql3_non_compound_range_tombstones_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12123:

Component/s: Coordination

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x.cql3_non_compound_range_tombstones_test
> ---
>
> Key: CASSANDRA-12123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12123
> Project: Cassandra
>  Issue Type: Test
>  Components: Coordination
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/37/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x/cql3_non_compound_range_tombstones_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #37
> Failing here:
> {code}
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1667, in cql3_non_compound_range_tombstones_test
> self.assertEqual(6, len(row), row)
> {code}
> As we can see, the returned row contains more data than expected. This 
> implies that data isn't being properly shadowed by the tombstone, so I'm 
> filing this directly as a bug.





[jira] [Updated] (CASSANDRA-12527) Stack Overflow returned to queries while upgrading

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12527:

Component/s: Local Write-Read Paths

> Stack Overflow returned to queries while upgrading
> --
>
> Key: CASSANDRA-12527
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12527
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Centos 7 x64
>Reporter: Steve Severance
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.10
>
>
> I am currently upgrading our cluster from 2.2.5 to 3.0.8.
> Some queries (not sure which) appear to be triggering a stack overflow:
> ERROR [SharedPool-Worker-2] 2016-08-24 04:34:52,464 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x5ccb2627, 
> /10.0.2.5:42925 => /10.0.2.10:9042]
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:131)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyBoundComparator.compare(LegacyLayout.java:1761)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.add(LegacyLayout.java:1835)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$LegacyRangeTombstoneList.addAll(LegacyLayout.java:1900)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:709) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext(LegacyLayout.java:711) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
> ... (the {{computeNext}} frame at LegacyLayout.java:711 repeats many more 
> times until the stack overflows; the trace is truncated in the original 
> message)

[jira] [Updated] (CASSANDRA-11820) Altering a column's type causes EOF

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11820:

Component/s: Local Write-Read Paths

> Altering a column's type causes EOF
> ---
>
> Key: CASSANDRA-11820
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11820
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Carl Yeksigian
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.8
>
>
> While working on CASSANDRA-10309, I was testing altering columns' types. This 
> series of operations fails:
> {code}
> CREATE TABLE test (a int PRIMARY KEY, b int)
> INSERT INTO test (a, b) VALUES (1, 1)
> ALTER TABLE test ALTER b TYPE BLOB
> SELECT * FROM test WHERE a = 1
> {code}
> Tried this on 3.0 and trunk, both fail.





[jira] [Updated] (CASSANDRA-12007) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12007:

Component/s: Observability

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-12007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12007
> Project: Cassandra
>  Issue Type: Test
>  Components: Observability
>Reporter: Sean McCarthy
>Assignee: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest_jdk8/229/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_dtest_jdk8 #229
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cql_tracing_test.py", line 104, in 
> tracing_simple_test
> self.trace(session)
>   File "/home/automaton/cassandra-dtest/cql_tracing_test.py", line 92, in 
> trace
> self.assertIn(' 127.0.0.2 ', out)
>   File "/usr/lib/python2.7/unittest/case.py", line 803, in assertIn
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "' 127.0.0.2 ' not found
> {code}





[jira] [Updated] (CASSANDRA-12554) updateJobs in PendingRangeCalculatorService should be decremented in finally block

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12554:

Component/s: Distributed Metadata

> updateJobs in PendingRangeCalculatorService should be decremented in finally 
> block
> --
>
> Key: CASSANDRA-12554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12554
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Fix For: 2.2.8, 3.0.10, 3.10
>
> Attachments: CASSANDRA_12554_3.0.txt
>
>
> We fixed an issue with MoveTests in CASSANDRA-7390 by adding a count of 
> running jobs. Looking at the code, I can see that the decrement of this 
> counter should be done in a finally block.
> Also, we don't need to change the setRejectedExecutionHandler in 
> CASSANDRA-7390, as we can change the order of calling the increment.
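The pattern being proposed can be sketched in a few lines: the counter is incremented before the job runs and decremented in a `finally` block, so a failing job cannot leave the count permanently inflated. (Illustrative names; not the actual PendingRangeCalculatorService code.)

```python
class PendingRangeCalculator:
    """Sketch of the fix: decrement the running-job counter in `finally`."""

    def __init__(self):
        self.update_jobs = 0  # number of in-flight recalculation jobs

    def run_job(self, job):
        self.update_jobs += 1  # increment before running, per the ticket
        try:
            job()
        finally:
            # The fix: decrement even when the job raises, so the counter
            # cannot be left permanently inflated by a failure.
            self.update_jobs -= 1


def _failing_job():
    raise RuntimeError("simulated failure")


calc = PendingRangeCalculator()
calc.run_job(lambda: None)   # normal job: counter returns to 0
try:
    calc.run_job(_failing_job)  # failing job: counter still returns to 0
except RuntimeError:
    pass
```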





[jira] [Updated] (CASSANDRA-12528) Fix eclipse-warning problems

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12528:

Component/s: Core

> Fix eclipse-warning problems
> 
>
> Key: CASSANDRA-12528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Joel Knighton
>Assignee: Sam Tunnicliffe
> Fix For: 2.2.8, 3.0.9, 3.10
>
>
> The {{ant eclipse-warning}} target has accumulated some failures again. 
> Locally, I'm seeing 3 errors on 2.2, 5 errors on 3.0, 23 errors on 3.9, and 
> 33 errors on trunk.
> Depending on the amount of overlap between these errors, it may make sense to 
> split this into sub-issues.





[jira] [Updated] (CASSANDRA-9613) Omit (de)serialization of state variable in UDAs

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9613:
---
Component/s: CQL

> Omit (de)serialization of state variable in UDAs
> 
>
> Key: CASSANDRA-9613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9613
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.10
>
>
> Currently, the result of each UDA state-function call is serialized and then 
> deserialized again for the next state-function invocation and, optionally, 
> for the final-function invocation.
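The overhead being targeted can be illustrated with a toy aggregation loop: today's behavior round-trips the state through (de)serialization on every step, whereas the proposal keeps the state object live between invocations. (JSON stands in for Cassandra's type serialization here; purely illustrative.)

```python
import json


def aggregate_with_roundtrip(rows, state_func, initial):
    """Current behavior, sketched: state is serialized after every
    state-function call and deserialized before the next one."""
    blob = json.dumps(initial)
    for row in rows:
        state = json.loads(blob)                   # deserialize previous state
        blob = json.dumps(state_func(state, row))  # serialize the new state
    return json.loads(blob)


def aggregate_in_memory(rows, state_func, initial):
    """Proposed behavior, sketched: the state object stays in memory
    across state-function invocations."""
    state = initial
    for row in rows:
        state = state_func(state, row)
    return state
```

Both produce the same result; the second simply skips the per-row (de)serialization work.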





[jira] [Updated] (CASSANDRA-11269) Improve UDF compilation error messages

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11269:

Component/s: CQL

> Improve UDF compilation error messages
> --
>
> Key: CASSANDRA-11269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11269
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.8
>
>
> When UDF compilation fails, the error message only mentions the top-level 
> exception and none of its causes. This is fine for common compilation errors 
> but can make it very difficult to identify the root cause.
> So this ticket is about improving the error messages at the end of the 
> constructor of {{JavaBasedUDFunction}}.





[jira] [Updated] (CASSANDRA-11033) Prevent logging in sandboxed state

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11033:

Component/s: CQL

> Prevent logging in sandboxed state
> --
>
> Key: CASSANDRA-11033
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11033
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.4, 3.4
>
>
> logback will re-read its configuration file regularly. So it is possible that 
> logback tries to reload the configuration while we log from a sandboxed UDF, 
> which will fail due to the restricted access privileges for UDFs. UDAs are 
> also affected as these use UDFs.
> /cc [~doanduyhai]





[jira] [Updated] (CASSANDRA-11309) Generic Java UDF types broken for RETURNS NULL ON NULL INPUT

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11309:

Component/s: CQL

> Generic Java UDF types broken for RETURNS NULL ON NULL INPUT
> 
>
> Key: CASSANDRA-11309
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11309
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.7
>
>
> The Java source generated for Java UDFs as introduced by CASSANDRA-10819 is 
> broken for {{RETURNS NULL ON NULL INPUT}} (not for {{CALLED ON NULL INPUT}}). 
> This means that the generic types are lost for RETURNS NULL ON NULL INPUT but 
> work as expected for CALLED ON NULL INPUT.





[jira] [Updated] (CASSANDRA-11444) Upgrade ohc to 0.4.3

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11444:

Component/s: Core

> Upgrade ohc to 0.4.3
> 
>
> Key: CASSANDRA-11444
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11444
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Trivial
> Fix For: 3.0.5, 3.5
>
>






[jira] [Updated] (CASSANDRA-10953) Make all timeouts configurable via nodetool and jmx

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10953:

Component/s: Configuration

> Make all timeouts configurable via nodetool and jmx
> ---
>
> Key: CASSANDRA-10953
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10953
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Sebastian Estevez
>Assignee: Jeremy Hanna
>  Labels: docs-impacting
> Fix For: 3.4
>
> Attachments: 10953-2.1-v2.txt, 10953-2.1.txt
>
>
> Specifically I was interested in being able to monitor and set 
> stream_socket_timeout_in_ms from either (or both) nodetool and JMX. 
> Chatting with [~thobbs] and [~jeromatron] we suspect it would also be useful 
> to be able to view and edit other C* timeouts via nodetool and JMX.





[jira] [Updated] (CASSANDRA-11664) Tab completion in cqlsh doesn't work for capitalized letters

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11664:

Component/s: Tools

> Tab completion in cqlsh doesn't work for capitalized letters
> 
>
> Key: CASSANDRA-11664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11664
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: J.B. Langston
>Assignee: Mahdi Mohammadi
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.7, 3.0.7, 3.7
>
>
> Tab completion in cqlsh doesn't work for capitalized letters, either in 
> keyspace names or table names. Typing quotes and a corresponding capital 
> letter should complete the table/keyspace name and the closing quote.
> {code}
> cqlsh> create keyspace "Test" WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> cqlsh> use "Tes
> cqlsh> use tes
> cqlsh> use Test;
> InvalidRequest: code=2200 [Invalid query] message="Keyspace 'test' does not 
> exist"
> cqlsh> use "Test";
> cqlsh:Test> drop keyspace "Test"
> cqlsh:Test> create table "TestTable" (a text primary key, b text);
> cqlsh:Test> select * from "TestTable";
>  a | b
> ---+---
> (0 rows)
> cqlsh:Test> select * from "Test
> {code}





[jira] [Updated] (CASSANDRA-12256) Count entire coordinated request against timeout

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12256:

Component/s: Coordination

> Count entire coordinated request against timeout
> 
>
> Key: CASSANDRA-12256
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12256
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Sylvain Lebresne
>Assignee: Geoffrey Yu
> Fix For: 3.10
>
> Attachments: 12256-trunk-v1v2.diff, 12256-trunk-v2.txt, 
> 12256-trunk.txt
>
>
> We have a number of {{request_timeout_*}} options that probably every user 
> expects to be an upper bound on how long the coordinator will wait before 
> timing out a request, but that's actually not always the case, especially 
> for read requests.
> I believe we don't respect those timeouts properly in at least the following 
> cases:
> * On a digest mismatch: in that case, we reset the timeout for the data 
> query, which means the overall query might take up to twice the configured 
> timeout before timing out.
> * On a range query: the timeout is reset for every sub-range that is queried. 
> With many nodes and vnodes, a range query could span tons of sub-ranges, so 
> a range query could take arbitrarily long before actually timing out for the 
> user.
> * On short reads: we also reset the timeout for every short-read retry.
> It's also worth noting that even outside those cases, the timeouts don't 
> take most of the processing done by the coordinator (query parsing and CQL 
> handling, for instance) into account.
> Now, in all fairness, the reason it is this way is that the timeouts 
> currently are *not* timeouts for the full user request, but rather for how 
> long a coordinator should wait on any given replica for any given internal 
> query before giving up. *However*, I'm pretty sure this is not what users 
> intuitively expect and want, *especially* in the context of CASSANDRA-2848, 
> where the goal is explicitly to have an upper bound on the query from the 
> user's point of view.
> So I'm suggesting we change how those timeouts are handled to really be 
> timeouts on the whole user query.
> By that I basically mean that we'd mark the start of each query as early as 
> possible in the processing and use that starting time as the base in 
> {{ReadCallback.await}} and {{AbstractWriteResponseHandler.get()}}. It won't 
> be perfect, in the sense that we'll still only be able to time out during 
> "blocking" operations, so if parsing a query takes longer than your timeout, 
> you still won't time out until that query is sent. But I think that's fine 
> in practice, because 1) if your timeouts are small enough for this to 
> matter, you're probably doing it wrong, and 2) we can totally improve on 
> that later if need be.
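The proposal boils down to anchoring the deadline to the user query's start time, so that digest-mismatch retries, per-sub-range queries, and short-read retries all share one overall budget instead of each resetting the clock. A hedged sketch (illustrative names only; not Cassandra's actual {{ReadCallback.await}}):

```python
import time


def await_response(query_start, timeout_s, poll):
    """Wait for a replica response, measuring the timeout from the moment
    the user's query started rather than from when this internal query
    was issued. Every internal retry passes the same query_start, so the
    whole request observes a single upper bound."""
    while not poll():
        if time.monotonic() - query_start >= timeout_s:
            raise TimeoutError("request timed out")
        time.sleep(0.001)  # in a real system this would be a condition wait
    return True
```

Each internal query (data read after a digest mismatch, each sub-range of a range query, each short-read retry) would call this with the same `query_start`, so the elapsed time accumulates across them.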





[jira] [Updated] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12311:

Component/s: Observability

> Propagate TombstoneOverwhelmingException to the client
> --
>
> Key: CASSANDRA-12311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12311
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: client-impacting, doc-impacting
> Fix For: 3.10
>
> Attachments: 12311-dtest.txt, 12311-trunk-v2.txt, 12311-trunk-v3.txt, 
> 12311-trunk-v4.txt, 12311-trunk-v5.txt, 12311-trunk.txt
>
>
> Right now if a data node fails to perform a read because it ran into a 
> {{TombstoneOverwhelmingException}}, it only responds back to the coordinator 
> node with a generic failure. Under this scheme, the coordinator won't be able 
> to know exactly why the request failed and subsequently the client only gets 
> a generic {{ReadFailureException}}. It would be useful to inform the client 
> that their read failed because we read too many tombstones. We should have 
> the data nodes reply with a failure type so the coordinator can pass this 
> information to the client.
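The idea can be sketched as replicas classifying their local failure into a typed reason code that the coordinator forwards to the client, instead of a generic error. (Hypothetical names; the actual protocol change may differ.)

```python
from enum import Enum


class TombstoneOverwhelmingError(Exception):
    """Toy stand-in for the server-side TombstoneOverwhelmingException."""


class RequestFailureReason(Enum):
    UNKNOWN = 0
    READ_TOO_MANY_TOMBSTONES = 1


def failure_reason_for(exc):
    """Replica side: map the local exception to a typed failure reason,
    so the coordinator can tell the client *why* the read failed rather
    than returning only a generic read failure."""
    if isinstance(exc, TombstoneOverwhelmingError):
        return RequestFailureReason.READ_TOO_MANY_TOMBSTONES
    return RequestFailureReason.UNKNOWN
```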





[jira] [Updated] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop

2016-10-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12178:

Component/s: Observability

> Add prefixes to the name of snapshots created before a truncate or drop
> ---
>
> Key: CASSANDRA-12178
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12178
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.10
>
> Attachments: 12178-3.0.txt, 12178-trunk.txt
>
>
> It would be useful to be able to identify snapshots that are taken because a 
> table was truncated or dropped. We can do this by prepending a prefix to 
> snapshot names for snapshots that are created before a truncate/drop.
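A minimal sketch of the naming scheme being proposed, assuming a prefix plus timestamp plus table name (the exact format Cassandra settled on may differ):

```python
import time


def auto_snapshot_name(operation, table, now_ms=None):
    """Prepend the triggering operation (e.g. 'truncated' or 'dropped')
    so automatic snapshots are distinguishable from user-requested ones."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)  # millisecond timestamp
    return "{}-{}-{}".format(operation, now_ms, table)
```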




