[jira] [Commented] (CASSANDRA-13021) Nodetool compactionstats fails with NullPointerException

2017-08-18 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16133319#comment-16133319
 ] 

Sotirios Delimanolis commented on CASSANDRA-13021:
--

Can you please backport this to 2.2?

> Nodetool compactionstats fails with NullPointerException
> 
>
> Key: CASSANDRA-13021
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13021
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3.0.10
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 3.0.11
>
> Attachments: 13021-3.0.txt, 13021-3.0-update2.txt, 
> 13021-3.0-update.txt
>
>
> Found in 3.0.10:
> {code}
> $ nodetool compactionstats
> pending tasks: 2
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at 
> org.apache.cassandra.tools.nodetool.CompactionStats.addLine(CompactionStats.java:102)
> at 
> org.apache.cassandra.tools.nodetool.CompactionStats.execute(CompactionStats.java:70)
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:247)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:161)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-08-16 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128917#comment-16128917
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

Version 0.3.9 was 
[released|http://search.maven.org/#search%7Cga%7C1%7Cthrift-server].  Can we 
make a decision?

> nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a 
> request is being processed
> --
>
> Key: CASSANDRA-13137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.2.9
>Reporter: Sotirios Delimanolis
>
> We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
> {{THsHaDisruptorServer}} which is a subclass of 
> [{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].
> Internally, this spawns {{number_of_cores}} selector threads. Each
> gets a {{RingBuffer}} and {{rpc_max_threads / cores}} worker
> threads (the {{RPC-Thread}} threads). As the server starts receiving
> requests, each selector thread adds events to its {{RingBuffer}} and the
> worker threads process them.
> The _events_ are 
> [{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
>  instances, which have preallocated buffers for eventual IO.
> When the thrift server starts up, the corresponding {{ThriftServerThread}} 
> joins on the selector threads, waiting for them to die. It then iterates 
> through all the {{SelectorThread}} objects and calls their {{shutdown}} 
> method which attempts to drain their corresponding {{RingBuffer}}. The [drain 
> ({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
>  works by letting the worker pool "consumer" threads catch up to the 
> "producer" index, i.e. the selector thread.
> When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
> {{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to 
> {{true}}. When the selector threads see that, they break from their 
> {{select()}} loop and clean up their resources, i.e. the {{Message}} objects
> they've created and their buffers. *However*, if one of those {{Message}}
> objects is currently being used by a worker pool thread to process a request
> and it calls [this piece of
> code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
>  you'll get the following {{NullPointerException}}:
> {noformat}
> Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
> handleEventException
> SEVERE: Exception processing: 633124 
> com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
> java.lang.NullPointerException
> at 
> com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> That fails because it tries to dereference one of the {{Message}}'s cleaned-up,
> i.e. {{null}}, buffers.
> Because that call is outside the {{try}} block, the exception escapes and 
> basically kills the worker pool thread. This has the side effect of 
> "discarding" one of the consumers of a selector's {{RingBuffer}}. 
> *That* has the side effect of preventing the {{ThriftServerThread}} from 
> draining the {{RingBuffer}} (and dying) since the consumers never catch up to 
> the stopped producer. And that finally has the effect of preventing the 
> {{nodetool disablethrift}} from proceeding since it's trying to {{join}} the 
> {{ThriftServerThread}}. Deadlock!
> The {{ThriftServerThread}} thread looks like
> {noformat}
> "Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
> [0x7f4729174000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Thread.yield(Native Method)
> at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
> at 
> 
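The drain-vs-dead-consumer mechanics described above can be sketched outside Cassandra. This is a hedged illustration, not the real LMAX Disruptor API (class and method names here are ours): a {{drainAndHalt}}-style loop spins until the consumer sequence catches up to the producer sequence, so it terminates only while the worker thread is still alive — if the worker died mid-event (as after the NPE above), the drain would spin forever.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for the RingBuffer drain described above; not the
// LMAX Disruptor's actual types.
public class DrainSketch {
    static final int PRODUCED = 5;

    public static long run() throws InterruptedException {
        BlockingQueue<Integer> ring = new ArrayBlockingQueue<>(16);
        AtomicLong producerSeq = new AtomicLong(); // the "selector thread" index
        AtomicLong consumerSeq = new AtomicLong(); // the "worker pool" index

        for (int i = 0; i < PRODUCED; i++) {
            ring.put(i);
            producerSeq.incrementAndGet();
        }

        // A healthy worker. If this thread died with events left in the ring,
        // consumerSeq would never reach producerSeq and the loop below would
        // never exit -- the deadlock in this ticket.
        Thread worker = new Thread(() -> {
            try {
                while (consumerSeq.get() < PRODUCED) {
                    ring.take();
                    consumerSeq.incrementAndGet();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // drainAndHalt-style wait: yield until consumers catch up to the producer.
        while (consumerSeq.get() < producerSeq.get()) {
            Thread.yield();
        }
        worker.join();
        return consumerSeq.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("drained " + run());
    }
}
```

With a live worker the drain completes and all five events are consumed; kill the worker before it catches up and the yield loop never returns, which is exactly the state the {{ThriftServerThread}} stack below shows.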

[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-07-18 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091819#comment-16091819
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

The PR is merged. Pavel is making a release soon. Can we have it included in 
2.2+?


[jira] [Comment Edited] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-07-05 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075184#comment-16075184
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-13137 at 7/5/17 6:20 PM:
--

If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14]. Each selector 
thread checks if it has any messages currently reading or writing and only 
stops itself if it doesn't.
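That check can be sketched as follows. This is a hedged outline of the idea, not the actual PR's code; the field and method names are illustrative: after the stop flag is set, a selector thread keeps running until none of its messages is mid-read or mid-write, and only then tears down its resources.

```java
// Hedged sketch of the PR's approach; names are ours, not the library's.
public class SelectorStopSketch {
    volatile boolean stopped;   // set by nodetool disablethrift / server stop
    int pendingReads;           // messages currently being read
    int pendingWrites;          // messages currently being written

    boolean hasPendingIO() {
        return pendingReads > 0 || pendingWrites > 0;
    }

    // The select loop exits (and buffers are freed) only once the stop flag
    // is set AND no message is in flight, so a worker never sees a nulled buffer.
    boolean shouldExit() {
        return stopped && !hasPendingIO();
    }
}
```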


was (Author: s_delima):
If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14].


[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-07-05 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075184#comment-16075184
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

If anyone wants to review, I've submitted a PR 
[here|https://github.com/xedin/disruptor_thrift_server/pull/14].


[jira] [Commented] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2017-02-23 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15880853#comment-15880853
 ] 

Sotirios Delimanolis commented on CASSANDRA-12100:
--

Thank you!

> Compactions are stuck after TRUNCATE
> 
>
> Key: CASSANDRA-12100
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12100
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Stefano Ortolani
>Assignee: Stefania
> Fix For: 2.2.10, 3.0.9, 3.8
>
> Attachments: node3_jstack.log
>
>
> Hi,
> since the upgrade to C* 3.0.7 I see compaction tasks getting stuck when 
> truncating the column family. I verified this on all nodes of the cluster.
> Pending compactions seem to disappear after restarting the node.
> {noformat}
> root@node10:~# nodetool -h localhost compactionstats
> pending tasks: 6
>  id   compaction type  
> keyspacetable   completed  totalunit   progress
>24e1ad30-3cac-11e6-870d-5de740693258Compaction  
> schema  table_1   0   57558382   bytes  0.00%
>2be2e3b0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_2   0   65063705   bytes  0.00%
>54de38f0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_3   0 187031   bytes  0.00%
>31926ce0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_4   0   42951119   bytes  0.00%
>3911ad00-3cac-11e6-870d-5de740693258Compaction  
> schema  table_5   0   25918949   bytes  0.00%
>3e6a8ab0-3cac-11e6-870d-5de740693258Compaction  
> schema  table_6   0   65466210   bytes  0.00%
> Active compaction remaining time :   0h00m15s
> {noformat}





[jira] [Commented] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2017-02-22 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878636#comment-15878636
 ] 

Sotirios Delimanolis commented on CASSANDRA-12100:
--

Yes, please.






[jira] [Commented] (CASSANDRA-11961) Nonfatal NPE in CompactionMetrics

2017-02-13 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864334#comment-15864334
 ] 

Sotirios Delimanolis commented on CASSANDRA-11961:
--

Why is it possible for {{CFMetaData}} to be null there?

> Nonfatal NPE in CompactionMetrics
> -
>
> Key: CASSANDRA-11961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11961
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Achal Shah
>Priority: Minor
>  Labels: lhf
> Fix For: 3.8
>
>
> Just saw the following NPE on trunk. It means that {{metaData}} from
> {{CFMetaData metaData = compaction.getCompactionInfo().getCFMetaData();}} is
> {{null}}. A simple {{if (metaData == null) continue;}} should fix this.
> {code}
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$2.getValue(CompactionMetrics.java:103)
>  ~[main/:na]
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$2.getValue(CompactionMetrics.java:78)
>  ~[main/:na]
> {code}
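The suggested guard amounts to skipping compactions whose metadata has already been unloaded (for example, a table dropped while its compaction was still registered). A minimal stand-in, with illustrative names — strings play the role of {{CFMetaData}} here:

```java
import java.util.List;

public class NullGuardSketch {
    // Counts entries while skipping null metadata, mirroring the proposed
    // "if (metaData == null) continue;" fix in the metrics gauge loop.
    static int countWithMetadata(List<String> metadataPerCompaction) {
        int n = 0;
        for (String metaData : metadataPerCompaction) {
            if (metaData == null) continue; // table dropped; nothing to attribute
            n++;
        }
        return n;
    }
}
```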





[jira] [Comment Edited] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2017-01-20 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832261#comment-15832261
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-12100 at 1/20/17 7:13 PM:
---

Any chance you can backport this to 2.2.9? (Referring to [this 
commit|https://github.com/apache/cassandra/commit/05483a962c64c350315fc738c697980b22361cc3].)


was (Author: s_delima):
Any chance you can backport this to 2.2.9?






[jira] [Commented] (CASSANDRA-12100) Compactions are stuck after TRUNCATE

2017-01-20 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832261#comment-15832261
 ] 

Sotirios Delimanolis commented on CASSANDRA-12100:
--

Any chance you can backport this to 2.2.9?






[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-01-19 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830856#comment-15830856
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

My opinion is that a proper stop/drain would stop the selector threads'
{{select()}} loop (or really just the read part), but would wait to clean up
their resources until after the ring buffer was drained.
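That ordering can be sketched as follows. These are hypothetical method names, not the actual {{TDisruptorServer}} API: stop reading first, drain the ring buffer while the worker threads are still healthy, and only then free the per-message buffers.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a stop/drain ordering; it records each phase so the
// order can be verified. Method names are illustrative.
public class ShutdownOrderSketch {
    final List<String> phases = new ArrayList<>();

    void stopSelecting()   { phases.add("stop-select"); } // break the select() read loop
    void drainRingBuffer() { phases.add("drain"); }       // workers finish in-flight events
    void releaseBuffers()  { phases.add("cleanup"); }     // safe: no worker touches them now

    List<String> shutdown() {
        stopSelecting();
        drainRingBuffer();
        releaseBuffers(); // cleanup strictly after drain, avoiding the NPE above
        return phases;
    }
}
```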

> nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a 
> request is being processed
> --
>
> Key: CASSANDRA-13137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13137
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: 2.2.9
>Reporter: Sotirios Delimanolis
>
> We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
> {{THsHaDisruptorServer}} which is a subclass of 
> [{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].
> Internally, this spawns {{number_of_cores}} number of selector threads. Each 
> gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker 
> threads (the {{RPC-Thread}} threads). As the server starts receiving 
> requests, each selector thread adds events to its {{RingBuffer}} and the 
> worker threads process them. 
> The _events_ are 
> [{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
>  instances, which have preallocated buffers for eventual IO.
> When the thrift server starts up, the corresponding {{ThriftServerThread}} 
> joins on the selector threads, waiting for them to die. It then iterates 
> through all the {{SelectorThread}} objects and calls their {{shutdown}} 
> method which attempts to drain their corresponding {{RingBuffer}}. The [drain 
> ({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
>  works by letting the worker pool "consumer" threads catch up to the 
> "producer" index, ie. the selector thread.
> When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
> {{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to 
> {{true}}. When the selector threads see that, they break from their 
> {{select()}} loop, and clean up their resources, ie. the {{Message}} objects 
> they've created and their buffers. *However*, if one of those {{Message}} 
> objects is currently being used by a worker pool thread to process a request, 
> if it calls [this piece of 
> code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
>  you'll get the following {{NullPointerException}}
> {noformat}
> Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
> handleEventException
> SEVERE: Exception processing: 633124 
> com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
> java.lang.NullPointerException
> at 
> com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> That fails because it tries to dereference one of the {{Message}} buffers 
> that was "cleaned up", i.e. set to {{null}}.
> Because that call is outside the {{try}} block, the exception escapes and 
> basically kills the worker pool thread. This has the side effect of 
> "discarding" one of the consumers of a selector's {{RingBuffer}}. 
> *That* has the side effect of preventing the {{ThriftServerThread}} from 
> draining the {{RingBuffer}} (and dying) since the consumers never catch up to 
> the stopped producer. And that finally has the effect of preventing the 
> {{nodetool disablethrift}} from proceeding since it's trying to {{join}} the 
> {{ThriftServerThread}}. Deadlock!
> The {{ThriftServerThread}} thread looks like
> {noformat}
> "Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
> [0x7f4729174000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Thread.yield(Native Method)
> at 
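The drain step described in the report can be sketched as follows. This is a hypothetical reduction of the idea behind LMAX's {{drainAndHalt}}, not the actual library or Cassandra code; the class and field names are invented for illustration. It shows why a dead consumer blocks the drain forever: the loop only exits once every consumer sequence reaches the producer cursor.

```java
// Minimal sketch, assuming a drainAndHalt-style loop: spin until every
// consumer sequence reaches the producer cursor. If a worker thread has
// died, its sequence is frozen and this loop never exits -- which is
// exactly where ThriftServerThread gets stuck.
import java.util.concurrent.atomic.AtomicLong;

public class DrainSketch {
    final AtomicLong producerCursor = new AtomicLong();
    final AtomicLong[] consumerSequences;

    DrainSketch(int workers) {
        consumerSequences = new AtomicLong[workers];
        for (int i = 0; i < workers; i++)
            consumerSequences[i] = new AtomicLong();
    }

    long minConsumerSequence() {
        // The slowest consumer determines whether the drain can finish.
        long min = Long.MAX_VALUE;
        for (AtomicLong s : consumerSequences)
            min = Math.min(min, s.get());
        return min;
    }

    void drainAndHalt() {
        // Busy-wait until all consumers catch up to the producer.
        while (minConsumerSequence() < producerCursor.get())
            Thread.yield();
    }

    public static void main(String[] args) {
        DrainSketch pool = new DrainSketch(2);
        pool.producerCursor.set(5);
        pool.consumerSequences[0].set(5);
        pool.consumerSequences[1].set(5);
        pool.drainAndHalt(); // returns: both consumers caught up
        System.out.println("drained");
    }
}
```

If one of the two consumer sequences in the example were left behind (as happens when a worker thread dies), {{drainAndHalt}} would spin indefinitely.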

[jira] [Updated] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed

2017-01-19 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-13137:
-
Summary: nodetool disablethrift deadlocks if THsHaDisruptorServer is 
stopped while a request is being processed  (was: nodetool disablethrift 
deadlocks if THsHaDisruptorServer is stopped while a read is going on)

[jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830766#comment-15830766
 ] 

Sotirios Delimanolis commented on CASSANDRA-13137:
--

This {{NullPointerException}} is also possible:

{noformat}
ERROR [RPC-Thread:68] 2017-01-18 18:28:50,879 Message.java:324 - Unexpected 
throwable while invoking!
java.lang.NullPointerException: null
at com.thinkaurelius.thrift.util.mem.Buffer.size(Buffer.java:83) 
~[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.expand(FastMemoryOutputTransport.java:84)
 ~[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.write(FastMemoryOutputTransport.java:167)
 ~[thrift-server-0.3.7.jar:na]
at 
org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156) 
~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:55) 
~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.2.jar:0.9.2]
at com.thinkaurelius.thrift.Message.invoke(Message.java:314) 
~[thrift-server-0.3.7.jar:na]
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) 
[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
 [thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
 [thrift-server-0.3.7.jar:na]
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) 
[disruptor-3.0.1.jar:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{noformat}

But that one happens within the {{invoke}} method's try/catch block, which 
essentially swallows it and therefore doesn't "kill" the current thread.
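The contrast between the two failure modes can be sketched as follows. This is an invented example (hypothetical class and method names, not the thrift-server code): an exception thrown inside the try/catch is swallowed and the worker survives, while one thrown before the try is entered, like the {{getInputTransport}} call, escapes and kills the worker thread.

```java
// Hypothetical sketch of the two NPE paths: one escapes the handler
// (worker dies), one is caught inside it (worker survives).
public class WorkerFailureSketch {
    static String handleEvent(boolean npeBeforeTry) {
        if (npeBeforeTry)
            // Escapes handleEvent entirely -- in the real server this
            // reaches FatalExceptionHandler and the worker thread dies.
            throw new NullPointerException("outside try");
        try {
            throw new NullPointerException("inside invoke");
        } catch (Throwable t) {
            // Swallowed -- logged ("Unexpected throwable while invoking!")
            // but the worker thread keeps running.
            return "swallowed " + t.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(handleEvent(false));
        try {
            handleEvent(true);
        } catch (NullPointerException e) {
            System.out.println("worker would die: " + e.getMessage());
        }
    }
}
```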

[jira] [Comment Edited] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830766#comment-15830766
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-13137 at 1/19/17 10:57 PM:


It can also cause this {{NullPointerException}} 

{noformat}
ERROR [RPC-Thread:68] 2017-01-18 18:28:50,879 Message.java:324 - Unexpected 
throwable while invoking!
java.lang.NullPointerException: null
at com.thinkaurelius.thrift.util.mem.Buffer.size(Buffer.java:83) 
~[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.expand(FastMemoryOutputTransport.java:84)
 ~[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.write(FastMemoryOutputTransport.java:167)
 ~[thrift-server-0.3.7.jar:na]
at 
org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156) 
~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:55) 
~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.2.jar:0.9.2]
at com.thinkaurelius.thrift.Message.invoke(Message.java:314) 
~[thrift-server-0.3.7.jar:na]
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) 
[thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
 [thrift-server-0.3.7.jar:na]
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
 [thrift-server-0.3.7.jar:na]
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) 
[disruptor-3.0.1.jar:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{noformat}

But that one happens within the {{invoke}} method's try/catch block, which 
essentially swallows it and therefore doesn't "kill" the current thread.



[jira] [Updated] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-13137:
-
Description: 
We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
{{THsHaDisruptorServer}} which is a subclass of 
[{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].

Internally, this spawns {{number_of_cores}} selector threads. Each gets a 
{{RingBuffer}} and {{rpc_max_threads / cores}} worker threads 
(the {{RPC-Thread}} threads). As the server starts receiving requests, each 
selector thread adds events to its {{RingBuffer}} and the worker threads 
process them. 

The _events_ are 
[{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
 instances, which have preallocated buffers for eventual IO.

When the thrift server starts up, the corresponding {{ThriftServerThread}} 
joins on the selector threads, waiting for them to die. It then iterates 
through all the {{SelectorThread}} objects and calls their {{shutdown}} method 
which attempts to drain their corresponding {{RingBuffer}}. The [drain 
({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
 works by letting the worker pool "consumer" threads catch up to the "producer" 
index, i.e. the position of the selector thread.

When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
{{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to {{true}}. 
When the selector threads see that, they break from their {{select()}} loop, 
and clean up their resources, i.e. the {{Message}} objects they've created and 
their buffers. *However*, if one of those {{Message}} objects is currently 
being used by a worker pool thread to process a request, if it calls [this 
piece of 
code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
 you'll get the following {{NullPointerException}}

{noformat}
Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
handleEventException
SEVERE: Exception processing: 633124 
com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
java.lang.NullPointerException
at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

That fails because it tries to dereference one of the {{Message}} buffers that 
was "cleaned up", i.e. set to {{null}}.
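A minimal sketch of that race, with invented names standing in for {{Message}} and its buffer handling (this is not the thrift-server code): the selector thread nulls the buffer on stop while a worker is still mid-request, so the worker's next dereference throws.

```java
// Hypothetical reduction of the cleanup race described above.
public class CleanupRaceSketch {
    static class Message {
        byte[] buffer = new byte[8];

        // Selector thread, after seeing stopped == true.
        void cleanup() { buffer = null; }

        // Worker thread, still processing a request; no null check.
        int getInputSize() { return buffer.length; }
    }

    public static void main(String[] args) {
        Message m = new Message();
        m.cleanup();              // selector cleans up mid-request
        try {
            m.getInputSize();     // worker dereferences the nulled buffer
        } catch (NullPointerException e) {
            System.out.println("NPE: worker thread dies");
        }
    }
}
```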

Because that call is outside the {{try}} block, the exception escapes and 
basically kills the worker pool thread. This has the side effect of 
"discarding" one of the consumers of a selector's {{RingBuffer}}. 

That has the side effect of preventing the {{ThriftServerThread}} from draining 
the {{RingBuffer}} (and dying) since the consumers never catch up to the 
stopped producer. And that finally has the effect of preventing the {{nodetool 
disablethrift}} from proceeding since it's trying to {{join}} the 
{{ThriftServerThread}}. Deadlock!

The {{ThriftServerThread}} thread looks like

{noformat}
"Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
[0x7f4729174000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Thread.yield(Native Method)
at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
at 
com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(TDisruptorServer.java:633)
at 
com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(TDisruptorServer.java:301)
at 
com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(TDisruptorServer.java:280)
at 
org.apache.thrift.server.AbstractNonblockingServer.serve(AbstractNonblockingServer.java:95)
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
{noformat}

The {{nodetool disablethrift}} thread looks like

{noformat}
"RMI TCP Connection(18183)-127.0.0.1" #12121 daemon prio=5 os_prio=0 
tid=0x7f4ac2c61000 nid=0x5805 in Object.wait() [0x7f4aab7ec000]
   java.lang.Thread.State: WAITING (on object monitor)
at 
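The resulting hang can be reproduced in miniature. This is an invented sketch, not Cassandra code: one thread stands in for {{ThriftServerThread}} spinning in the drain loop, another (here, the main thread) stands in for the {{nodetool disablethrift}} RMI thread blocking in {{join}}. With a dead consumer the flag never flips, so an unbounded {{join()}} would block forever; a timed {{join}} makes the hang observable.

```java
// Hypothetical sketch of the join deadlock: joining a thread that is
// stuck yielding in a drain-style loop never returns.
import java.util.concurrent.atomic.AtomicBoolean;

public class JoinDeadlockSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean consumerCaughtUp = new AtomicBoolean(false);

        // Stands in for ThriftServerThread spinning in drainAndHalt().
        Thread server = new Thread(() -> {
            while (!consumerCaughtUp.get())
                Thread.yield();
        });
        server.start();

        // Stands in for the nodetool disablethrift thread. The flag never
        // flips on its own, so an unbounded join() would hang here.
        server.join(200);
        System.out.println(server.isAlive() ? "still blocked" : "finished");

        consumerCaughtUp.set(true); // let the demo exit cleanly
        server.join();
    }
}
```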

[jira] [Updated] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-13137:
-
Description: 
We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
{{THsHaDisruptorServer}} which is a subclass of 
[{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].

Internally, this spawns {{number_of_cores}} number of selector threads. Each 
gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker threads 
(the {{RPC-Thread}} threads). As the server starts receiving requests, each 
selector thread adds events to its {{RingBuffer}} and the worker threads 
process them. 

The _events_ are 
[{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
 instances, which have preallocated buffers for eventual IO.

When the thrift server starts up, the corresponding {{ThriftServerThread}} 
joins on the selector threads, waiting for them to die. It then iterates 
through all the {{SelectorThread}} objects and calls their {{shutdown}} method 
which attempts to drain their corresponding {{RingBuffer}}. The [drain 
({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
 works by letting the worker pool "consumer" threads catch up to the "producer" 
index, ie. the selector thread.

When we execute a {{nodetool disablethrift}}, it attempts to {{stop}} the 
{{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to {{true}}. 
When the selector threads see that, they break from their {{select()}} loop, 
and clean up their resources, ie. the {{Message}} objects they've created and 
their buffers. *However*, if one of those {{Message}} objects is currently 
being used by a worker pool thread to process a request, if it calls [this 
piece of 
code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
 you'll get the following {{NullPointerException}}

{noformat}
Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
handleEventException
SEVERE: Exception processing: 633124 
com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
java.lang.NullPointerException
at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

That fails because it tries to dereference one of the {{Message}} "cleaned up", 
ie. {{null}}, buffers.

Because that call is outside the {{try}} block, the exception escapes and 
basically kills the worker pool thread. This has the side effect of 
"discarding" one of the consumers of a selector's {{RingBuffer}}. 

*That* has the side effect of preventing the {{ThriftServerThread}} from 
draining the {{RingBuffer}} (and dying) since the consumers never catch up to 
the stopped producer. And that finally has the effect of preventing the 
{{nodetool disablethrift}} from proceeding since it's trying to {{join}} the 
{{ThriftServerThread}}. Deadlock!

The {{ThriftServerThread}} thread looks like

{noformat}
"Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
[0x7f4729174000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Thread.yield(Native Method)
at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
at 
com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(TDisruptorServer.java:633)
at 
com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(TDisruptorServer.java:301)
at 
com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(TDisruptorServer.java:280)
at 
org.apache.thrift.server.AbstractNonblockingServer.serve(AbstractNonblockingServer.java:95)
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
{noformat}

The {{nodetool disablethrift}} thread looks like

{noformat}
"RMI TCP Connection(18183)-127.0.0.1" #12121 daemon prio=5 os_prio=0 
tid=0x7f4ac2c61000 nid=0x5805 in Object.wait() [0x7f4aab7ec000]
   java.lang.Thread.State: WAITING (on object monitor)
at 

[jira] [Updated] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-13137:
-
Description: 
We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
{{THsHaDisruptorServer}} which is a subclass of 
[{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].

Internally, this spawns {{number_of_cores}} number of selector threads. Each 
gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker threads 
(the {{RPC-Thread}} threads). As the server starts receiving requests, each 
selector thread adds events to its {{RingBuffer}} and the worker threads 
process them. 

The _events_ are 
[{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
 instances, which have preallocated buffers for eventual IO.

When the thrift server starts up, the corresponding {{ThriftServerThread}} 
joins on the selector threads, waiting for them to die. It then iterates 
through all the {{SelectorThread}} objects and calls their {{shutdown}} method 
which attempts to drain their corresponding {{RingBuffer}}. The [drain 
({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
 works by letting the worker pool "consumer" threads catch up to the "producer" 
index, ie. the selector thread.

When we execute a {{nodetool disablethrift}}, this attempts to {{stop}} the 
{{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to {{true}}. 
When the selector threads see that, they break from their {{select()}} loop, 
and clean up their resources, ie. the {{Message}} objects they've created and 
their buffers. *However*, if one of those {{Message}} objects is currently 
being used by a worker pool thread to process a request, if it calls [this 
piece of 
code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
 you'll get the following {{NullPointerException}}

{noformat}
Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
handleEventException
SEVERE: Exception processing: 633124 
com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
java.lang.NullPointerException
at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

That fails because it tries to dereference one of the {{Message}} "cleaned up", 
ie. {{null}}, buffers.

Because that call is outside the {{try}} block, the exception escapes and 
basically kills the worker pool thread. This has the side effect of 
"discarding" one of the consumers of a selector's {{RingBuffer}}. 

That has the side effect of preventing the {{ThriftServerThread}} from draining 
the {{RingBuffer}} (and dying) since the consumers never catch up to the 
stopped producer. And that finally has the effect of preventing the {{nodetool 
disablethrift}} from proceeding since it's trying to {{join}} the 
{{ThriftServerThread}}. Deadlock!

The {{ThriftServerThread}} thread looks like

{noformat}
"Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
[0x7f4729174000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Thread.yield(Native Method)
at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
at 
com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(TDisruptorServer.java:633)
at 
com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(TDisruptorServer.java:301)
at 
com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(TDisruptorServer.java:280)
at 
org.apache.thrift.server.AbstractNonblockingServer.serve(AbstractNonblockingServer.java:95)
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
{noformat}

The {{nodetool disablethrift}} thread looks like

{noformat}
"RMI TCP Connection(18183)-127.0.0.1" #12121 daemon prio=5 os_prio=0 
tid=0x7f4ac2c61000 nid=0x5805 in Object.wait() [0x7f4aab7ec000]
   java.lang.Thread.State: WAITING (on object monitor)
at 

[jira] [Updated] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-13137:
-
Description: 
We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
{{THsHaDisruptorServer}} which is a subclass of 
[{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].

Internally, this spawns {{number_of_cores}} number of selector threads. Each 
gets a {{RingBuffer}} and {{rpc_max_threads / cores}} number of worker threads 
(the {{RPC-Thread}} threads). As the server starts receiving requests, each 
selector thread adds events to its {{RingBuffer}} and the worker threads 
process them. 

The _events_ are 
[{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
 instances, which have preallocated buffers for eventual IO.

When the thrift server starts up, the corresponding {{ThriftServerThread}} 
joins on the selector threads, waiting for them to die. It then iterates 
through all the {{SelectorThread}} objects and calls their {{shutdown}} method 
which attempts to drain their corresponding {{RingBuffer}}. The [drain 
({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
 works by letting the worker pool threads catch up to the "producer" index, i.e. 
the selector thread.
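The catch-up check at the heart of that drain can be sketched as follows. This is a simplified model with hypothetical names ({{DrainSketch}}, {{caughtUp}}), not the actual LMAX implementation; it only illustrates the invariant {{drainAndHalt}} waits on.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of the drainAndHalt catch-up condition (hypothetical names;
// the real code lives in com.lmax.disruptor.WorkerPool).
class DrainSketch {
    final AtomicLong cursor = new AtomicLong(-1);   // last sequence published by the selector
    final AtomicLong[] workerSeqs;                  // last sequence completed by each worker

    DrainSketch(int workers) {
        workerSeqs = new AtomicLong[workers];
        for (int i = 0; i < workers; i++)
            workerSeqs[i] = new AtomicLong(-1);
    }

    // The drain keeps yielding until every worker has caught up to the cursor.
    boolean caughtUp() {
        long published = cursor.get();
        for (AtomicLong seq : workerSeqs)
            if (seq.get() < published)
                return false;
        return true;
    }
}
```

{{drainAndHalt}} effectively spins (with {{Thread.yield()}}) on a check like this; if any worker permanently stops advancing its sequence, the check never becomes true and the drain never returns.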

When we execute a {{nodetool disablethrift}}, this attempts to {{stop}} the 
{{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to {{true}}. 
When the selector threads see that, they break from their {{select()}} loop 
and clean up their resources, i.e. the {{Message}} objects they've created and 
their buffers. *However*, if one of those {{Message}} objects is currently 
being used by a worker pool thread to process a request and that thread reaches 
[this piece of 
code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
 you'll get the following {{NullPointerException}}:

{noformat}
Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
handleEventException
SEVERE: Exception processing: 633124 
com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
java.lang.NullPointerException
at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

That fails because it tries to dereference one of the {{Message}} buffers that 
has already been "cleaned up", i.e. set to {{null}}.

Because that call is outside the {{try}} block, the exception escapes and 
basically kills the worker pool thread. This has the side effect of 
"discarding" one of the consumers of a selector's {{RingBuffer}}. 

That in turn prevents the {{ThriftServerThread}} from draining the 
{{RingBuffer}} (and dying), since the consumers never catch up to the stopped 
producer. And that finally prevents the {{nodetool disablethrift}} from 
proceeding, since it's trying to {{join}} the {{ThriftServerThread}}. Deadlock!
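The failure mode described above can be reproduced in miniature. This is an assumption-laden sketch, not the disruptor code: a single "worker" walks sequences up to a cursor, an exception escapes its loop partway through (as the NPE does when a {{Message}} buffer is nulled under it), and its sequence stops short of the cursor, which is exactly the condition that makes a {{drainAndHalt}}-style wait spin forever.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a worker whose event loop lets an exception escape
// stops advancing its sequence, so a drain waiting for it can never finish.
public class StalledDrainDemo {
    // Runs one worker over events 0..cursor; the worker dies with an escaped
    // exception at failAt. Returns the last sequence the worker completed.
    static long runWorker(long cursor, long failAt) throws InterruptedException {
        AtomicLong workerSeq = new AtomicLong(-1);
        Thread worker = new Thread(() -> {
            for (long s = 0; s <= cursor; s++) {
                if (s == failAt)
                    throw new NullPointerException("Message buffer already nulled");
                workerSeq.set(s);
            }
        });
        worker.setUncaughtExceptionHandler((t, e) -> { /* swallowed, like the real case */ });
        worker.start();
        worker.join();
        return workerSeq.get(); // if this is < cursor, a drainAndHalt-style wait spins forever
    }

    public static void main(String[] args) throws InterruptedException {
        long last = runWorker(3, 2);
        System.out.println(last < 3 ? "stuck at " + last : "drained");
    }
}
```

A worker that caught the exception inside its event loop (instead of letting it escape) would keep advancing its sequence and the drain would complete, which is why where that {{try}} block ends matters.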

The {{ThriftServerThread}} thread looks like

{noformat}
"Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
[0x7f4729174000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Thread.yield(Native Method)
at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
at 
com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(TDisruptorServer.java:633)
at 
com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(TDisruptorServer.java:301)
at 
com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(TDisruptorServer.java:280)
at 
org.apache.thrift.server.AbstractNonblockingServer.serve(AbstractNonblockingServer.java:95)
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
{noformat}

The {{nodetool disablethrift}} thread looks like

{noformat}
"RMI TCP Connection(18183)-127.0.0.1" #12121 daemon prio=5 os_prio=0 
tid=0x7f4ac2c61000 nid=0x5805 in Object.wait() [0x7f4aab7ec000]
   java.lang.Thread.State: WAITING (on object monitor)
at 

[jira] [Created] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a read is going on

2017-01-19 Thread Sotirios Delimanolis (JIRA)
Sotirios Delimanolis created CASSANDRA-13137:


 Summary: nodetool disablethrift deadlocks if THsHaDisruptorServer 
is stopped while a read is going on
 Key: CASSANDRA-13137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.2.9
Reporter: Sotirios Delimanolis


We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a 
{{THsHaDisruptorServer}} which is a subclass of 
[{{TDisruptorServer}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/TDisruptorServer.java].

Internally, this spawns {{cores}} selector threads. Each gets a 
{{RingBuffer}} and {{rpc_max_threads / cores}} worker threads (the 
{{RPC-Thread}} threads). As the server starts receiving requests, each selector 
thread adds events to its {{RingBuffer}} and the worker threads process them. 

The _events_ are 
[{{Message}}|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java]
 instances, which have preallocated buffers for eventual IO.

When the thrift server starts up, the corresponding {{ThriftServerThread}} 
joins on the selector threads, waiting for them to die. It then iterates 
through all the {{SelectorThread}} objects and calls their {{shutdown}} method 
which attempts to drain their corresponding {{RingBuffer}}. The [drain 
({{drainAndHalt}})|https://github.com/LMAX-Exchange/disruptor/blob/master/src/main/java/com/lmax/disruptor/WorkerPool.java#L147]
 works by letting the worker pool threads catch up to the "producer" index, i.e. 
the selector thread.

When we execute a {{nodetool disablethrift}}, this attempts to {{stop}} the 
{{THsHaDisruptorServer}}. That works by setting a {{stopped}} flag to {{true}}. 
When the selector threads see that, they break from their {{select()}} loop 
and clean up their resources, i.e. the {{Message}} objects they've created and 
their buffers. *However*, if one of those {{Message}} objects is currently 
being used by a worker pool thread to process a request and that thread reaches 
[this piece of 
code|https://github.com/xedin/disruptor_thrift_server/blob/master/src/main/java/com/thinkaurelius/thrift/Message.java#L317],
 you'll get the following {{NullPointerException}}:

{noformat}
Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler 
handleEventException
SEVERE: Exception processing: 633124 
com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
java.lang.NullPointerException
at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338)
at com.thinkaurelius.thrift.Message.invoke(Message.java:308)
at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
at 
com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

That fails because it tries to dereference one of the {{Message}} buffers that 
has already been "cleaned up", i.e. set to {{null}}.

Because that call is outside the {{try}} block, the exception escapes and 
basically kills the worker pool thread. This has the side effect of 
"discarding" one of the consumers of a selector's {{RingBuffer}}. 

That in turn prevents the {{ThriftServerThread}} from draining the 
{{RingBuffer}} (and dying), since the consumers never catch up to the stopped 
producer. And that finally prevents the {{nodetool disablethrift}} from 
proceeding, since it's trying to {{join}} the {{ThriftServerThread}}. Deadlock!

The {{ThriftServerThread}} thread looks like

{noformat}
"Thread-1" #2234 prio=5 os_prio=0 tid=0x7f4ae6ff1000 nid=0x2eb6 runnable 
[0x7f4729174000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Thread.yield(Native Method)
at com.lmax.disruptor.WorkerPool.drainAndHalt(WorkerPool.java:147)
at 
com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(TDisruptorServer.java:633)
at 
com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(TDisruptorServer.java:301)
at 
com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(TDisruptorServer.java:280)
at 
org.apache.thrift.server.AbstractNonblockingServer.serve(AbstractNonblockingServer.java:95)
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
{noformat}

The {{nodetool disablethrift}} thread looks like


[jira] [Comment Edited] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726613#comment-15726613
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-12979 at 12/6/16 8:29 PM:
---

+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (in 2.2 and just some added logging) 
in production for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is set up to handle it right now; it doesn't even get logged. I assume 
that's why this issue wasn't identified sooner.


was (Author: s_delima):
+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (just more logging) in production 
for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is set up to handle it right now; it doesn't even get logged. I assume 
that's why this issue wasn't identified sooner.

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  
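The quoted proposal, recomputing the estimates on every pass of the loop, can be sketched in a self-contained form. All names here are hypothetical stand-ins ({{ScopeReductionSketch}}, {{reduceScope}} dropping the largest sstable); the point is only the control flow: the expected write size is recomputed after each scope reduction, so the disk-space check compares against the reduced compaction rather than the original one.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Self-contained sketch of the proposed control flow (hypothetical names).
class ScopeReductionSketch {
    private final List<Long> sstableSizes;   // sizes of sstables in the compaction
    private final long availableBytes;

    ScopeReductionSketch(List<Long> sizes, long availableBytes) {
        this.sstableSizes = new ArrayList<>(sizes);
        this.availableBytes = availableBytes;
    }

    private long expectedWriteSize() {
        return sstableSizes.stream().mapToLong(Long::longValue).sum();
    }

    // Stand-in for reduceScopeForLimitedSpace(): drop the largest sstable.
    private boolean reduceScope() {
        if (sstableSizes.size() <= 1)
            return false;
        sstableSizes.remove(sstableSizes.indexOf(Collections.max(sstableSizes)));
        return true;
    }

    // Returns the write size that finally fit; throws if nothing fits.
    long checkAvailableDiskSpace() {
        while (true) {
            long expected = expectedWriteSize();   // recomputed on every pass
            if (expected <= availableBytes)
                return expected;
            if (!reduceScope())
                throw new RuntimeException(
                    "Not enough space for compaction, expected write size = " + expected);
        }
    }
}
```

With the original code's stale parameters, the loop would keep comparing the full compaction's size against the available space no matter how far the scope was reduced; recomputing inside the loop is what lets a reduced compaction actually pass the check.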



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726613#comment-15726613
 ] 

Sotirios Delimanolis commented on CASSANDRA-12979:
--

+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (just more logging) in production 
for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is set up to handle it right now; it doesn't even get logged. I assume 
that's why this issue wasn't identified sooner.

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-11-30 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710203#comment-15710203
 ] 

Sotirios Delimanolis commented on CASSANDRA-12979:
--

Slightly related: that {{RuntimeException}} seems to get ignored and eventually 
swallowed by the current executor.

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Commented] (CASSANDRA-11429) DROP TABLE IF EXISTS fails against table with similar name

2016-03-24 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211042#comment-15211042
 ] 

Sotirios Delimanolis commented on CASSANDRA-11429:
--

Right. We make all our changes sequentially from a single source, relying on 
the driver to wait for schema change agreement before proceeding to the next 
change. Nothing is done concurrently, at least not how I'm understanding it.

Unfortunately, I got rid of all the evidence. But I assume 
{{b8b40ed0-f194-11e5-b481-d944f7ad0ce3}} was the {{cf_id}} of the old table 
("uploads"). If it's not, I'm even more confused. Why is Cassandra mentioning 
it at all in relation to the {{cf_id}} of the new table? Is that what's going 
on?

> DROP TABLE IF EXISTS fails against table with similar name
> --
>
> Key: CASSANDRA-11429
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11429
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sotirios Delimanolis
>
> We had a table named {{our_keyspace.native_address_book_uploads_cache}} (note 
> the uploads*) which we dropped. We then created a new table named 
> {{our_keyspace.native_address_book_upload_cache}} (note the upload*).
> We have a patching component that applies commands to prepare the schema 
> using the C# driver. When we deploy, it tries to execute
> {noformat}
> DROP TABLE IF NOT EXISTS our_keyspace.native_address_book_uploads_cache;
> {noformat}
> This fails with
> {noformat}
> Caught an exception Cassandra.ServerErrorException: 
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
> b8b40ed0-f194-11e5-b481-d944f7ad0ce3)
> {noformat}
> showing the Cassandra Java exception through the C# driver. Note the 
> {{found}} cf_id of {{c712a590-f194-11e5-891d-2d7ca98597ba}}.
> I can reproduce this with {{cqlsh}}.
> {noformat}
> selimanolis$ cqlsh
> Connected to Default Cluster at hostname:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol 
> v3]
> Use HELP for help.
> cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
> 'our_keyspace' and columnfamily_name ='native_address_book_uploads_cache';
>  keyspace_name | columnfamily_name | bloom_filter_fp_chance | caching | cf_id 
> | column_aliases | comment | compaction_strategy_class | 
> compaction_strategy_options | comparator | compression_parameters | 
> default_time_to_live | default_validator | dropped_columns | gc_grace_seconds 
> | index_interval | is_dense | key_aliases | key_validator | 
> local_read_repair_chance | max_compaction_threshold | max_index_interval | 
> memtable_flush_period_in_ms | min_compaction_threshold | min_index_interval | 
> read_repair_chance | speculative_retry | subcomparator | type | value_alias
> ---+---++-+---++-+---+-+++--+---+-+--++--+-+---+--+--++-+--+++---+---+--+-
> (0 rows)
> cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
> 'our_keyspace' and columnfamily_name ='native_address_book_upload_cache';
>  cf_id
> --
>  c712a590-f194-11e5-891d-2d7ca98597ba
> (1 rows)
> cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
> InvalidRequest: code=2200 [Invalid query] message="No keyspace has been 
> specified. USE a keyspace, or explicitly specify keyspace.tablename"
> cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
> b8b40ed0-f194-11e5-b481-d944f7ad0ce3)">
> cqlsh> 
> {noformat}
> The table doesn't exist. A table that has a similar name does. You'll notice 
> that the new table has same {{cf_id}} found in the error message above. Why 
> does Cassandra confuse the two?
> Our expectation is for the {{DROP TABLE IF EXISTS}} to silently succeed.
> Similarly, we expect a {{DROP TABLE}} to fail because the table doesn't 
> exist. That's not what happens if you see below

[jira] [Commented] (CASSANDRA-11429) DROP TABLE IF EXISTS fails against table with similar name

2016-03-24 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210950#comment-15210950
 ] 

Sotirios Delimanolis commented on CASSANDRA-11429:
--

It's not. The deployment process starts and blocks on a single host in the 
cluster. So all those CREATE IF NOT EXISTS/DROP/ALTER are only run on one node, 
sequentially. I haven't seen anything that suggests that this process isn't 
working as intended.

Why is Cassandra associating the old table with the new table's {{cf_id}}, 
{{c712a590-f194-11e5-891d-2d7ca98597ba}}, that it found? Please clarify 
"Cassandra isn't confusing the two."

Only restarting the nodes didn't help. I had to actually clear those cache 
files.

> DROP TABLE IF EXISTS fails against table with similar name
> --
>
> Key: CASSANDRA-11429
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11429
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sotirios Delimanolis
>
> We had a table named {{our_keyspace.native_address_book_uploads_cache}} (note 
> the uploads*) which we dropped. We then created a new table named 
> {{our_keyspace.native_address_book_upload_cache}} (note the upload*).
> We have a patching component that applies commands to prepare the schema 
> using the C# driver. When we deploy, it tries to execute
> {noformat}
> DROP TABLE IF NOT EXISTS our_keyspace.native_address_book_uploads_cache;
> {noformat}
> This fails with
> {noformat}
> Caught an exception Cassandra.ServerErrorException: 
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
> b8b40ed0-f194-11e5-b481-d944f7ad0ce3)
> {noformat}
> showing the Cassandra Java exception through the C# driver. Note the 
> {{found}} cf_id of {{c712a590-f194-11e5-891d-2d7ca98597ba}}.
> I can reproduce this with {{cqlsh}}.
> {noformat}
> selimanolis$ cqlsh
> Connected to Default Cluster at hostname:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol 
> v3]
> Use HELP for help.
> cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
> 'our_keyspace' and columnfamily_name ='native_address_book_uploads_cache';
>  keyspace_name | columnfamily_name | bloom_filter_fp_chance | caching | cf_id 
> | column_aliases | comment | compaction_strategy_class | 
> compaction_strategy_options | comparator | compression_parameters | 
> default_time_to_live | default_validator | dropped_columns | gc_grace_seconds 
> | index_interval | is_dense | key_aliases | key_validator | 
> local_read_repair_chance | max_compaction_threshold | max_index_interval | 
> memtable_flush_period_in_ms | min_compaction_threshold | min_index_interval | 
> read_repair_chance | speculative_retry | subcomparator | type | value_alias
> ---+---++-+---++-+---+-+++--+---+-+--++--+-+---+--+--++-+--+++---+---+--+-
> (0 rows)
> cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
> 'our_keyspace' and columnfamily_name ='native_address_book_upload_cache';
>  cf_id
> --
>  c712a590-f194-11e5-891d-2d7ca98597ba
> (1 rows)
> cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
> InvalidRequest: code=2200 [Invalid query] message="No keyspace has been 
> specified. USE a keyspace, or explicitly specify keyspace.tablename"
> cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
> ServerError:  message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
> b8b40ed0-f194-11e5-b481-d944f7ad0ce3)">
> cqlsh> 
> {noformat}
> The table doesn't exist. A table that has a similar name does. You'll notice 
> that the new table has same {{cf_id}} found in the error message above. Why 
> does Cassandra confuse the two?
> Our expectation is for the {{DROP TABLE IF EXISTS}} to silently succeed.
> Similarly, we expect a {{DROP TABLE}} to fail because the table doesn't 
> exist. That's not what happens if you see below
> {noformat}
> 

[jira] [Commented] (CASSANDRA-11429) DROP TABLE IF EXISTS fails against table with similar name

2016-03-24 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210875#comment-15210875
 ] 

Sotirios Delimanolis commented on CASSANDRA-11429:
--

I should mention that the following logs appeared at startup 

{noformat}
INFO  [pool-2-thread-1] 2016-03-24 19:18:04,806 AutoSavingCache.java:240 - 
Harmless error reading saved cache 
/home/var/cassandra/saved_caches/KeyCache-ba.db
java.lang.RuntimeException: Cache schema version 
22506978-06de-3af5-811c-509b6cef245f does not match current schema version 
c9f76283-941e-3485-819a-816bbbde3d4f
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:188) 
~[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at 
org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:148) 
[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at 
org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:144) 
[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_72]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_72]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_72]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
INFO  [pool-3-thread-1] 2016-03-24 19:18:04,806 AutoSavingCache.java:240 - 
Harmless error reading saved cache 
/home/var/cassandra/saved_caches/RowCache-ba.db
java.lang.RuntimeException: Cache schema version 
22506978-06de-3af5-811c-509b6cef245f does not match current schema version 
c9f76283-941e-3485-819a-816bbbde3d4f
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:188) 
~[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at 
org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:148) 
[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at 
org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:144) 
[apache-cassandra-2.1.13.jar:2.1.13-SNAPSHOT]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_72]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_72]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_72]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
{noformat}

After clearing that {{saved_caches}} folder across the cluster and restarting 
all nodes, the error went away. How did we get into this situation?

 

> DROP TABLE IF EXISTS fails against table with similar name
> --
>
> Key: CASSANDRA-11429
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11429
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sotirios Delimanolis
>
> We had a table named {{our_keyspace.native_address_book_uploads_cache}} (note 
> the uploads*) which we dropped. We then created a new table named 
> {{our_keyspace.native_address_book_upload_cache}} (note the upload*).
> We have a patching component that applies commands to prepare the schema 
> using the C# driver. When we deploy, it tries to execute
> {noformat}
> DROP TABLE IF NOT EXISTS our_keyspace.native_address_book_uploads_cache;
> {noformat}
> This fails with
> {noformat}
> Caught an exception Cassandra.ServerErrorException: 
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
> b8b40ed0-f194-11e5-b481-d944f7ad0ce3)
> {noformat}
> showing the Cassandra Java exception through the C# driver. Note the 
> {{found}} cf_id of {{c712a590-f194-11e5-891d-2d7ca98597ba}}.
> I can reproduce this with {{cqlsh}}.
> {noformat}
> selimanolis$ cqlsh
> Connected to Default Cluster at hostname:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol 
> v3]
> Use HELP for help.
> cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
> 'our_keyspace' and columnfamily_name ='native_address_book_uploads_cache';
>  keyspace_name | columnfamily_name | bloom_filter_fp_chance | caching | cf_id 
> | column_aliases | comment | compaction_strategy_class | 
> compaction_strategy_options | comparator | compression_parameters | 
> default_time_to_live | default_validator | dropped_columns | gc_grace_seconds 
> | index_interval | is_dense | key_aliases | key_validator | 
> local_read_repair_chance | max_compaction_threshold | max_index_interval | 
> memtable_flush_period_in_ms | min_compaction_threshold | min_index_interval | 
> read_repair_chance | speculative_retry | subcomparator | 

[jira] [Updated] (CASSANDRA-11429) DROP TABLE IF EXISTS fails against table with similar name

2016-03-24 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-11429:
-
Description: 
We had a table named {{our_keyspace.native_address_book_uploads_cache}} (note 
the uploads*) which we dropped. We then created a new table named 
{{our_keyspace.native_address_book_upload_cache}} (note the upload*).

We have a patching component that applies commands to prepare the schema using 
the C# driver. When we deploy, it tries to execute

{noformat}
DROP TABLE IF NOT EXISTS our_keyspace.native_address_book_uploads_cache;
{noformat}

This fails with

{noformat}
Caught an exception Cassandra.ServerErrorException: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
b8b40ed0-f194-11e5-b481-d944f7ad0ce3)
{noformat}

showing the Cassandra Java exception through the C# driver. Note the {{found}} 
cf_id of {{c712a590-f194-11e5-891d-2d7ca98597ba}}.

I can reproduce this with {{cqlsh}}.

{noformat}
selimanolis$ cqlsh
Connected to Default Cluster at hostname:9042.
[cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol v3]
Use HELP for help.
cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
'our_keyspace' and columnfamily_name ='native_address_book_uploads_cache';

 keyspace_name | columnfamily_name | bloom_filter_fp_chance | caching | cf_id | 
column_aliases | comment | compaction_strategy_class | 
compaction_strategy_options | comparator | compression_parameters | 
default_time_to_live | default_validator | dropped_columns | gc_grace_seconds | 
index_interval | is_dense | key_aliases | key_validator | 
local_read_repair_chance | max_compaction_threshold | max_index_interval | 
memtable_flush_period_in_ms | min_compaction_threshold | min_index_interval | 
read_repair_chance | speculative_retry | subcomparator | type | value_alias
---+---++-+---++-+---+-+++--+---+-+--++--+-+---+--+--++-+--+++---+---+--+-

(0 rows)
cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
'our_keyspace' and columnfamily_name ='native_address_book_upload_cache';

 cf_id
--
 c712a590-f194-11e5-891d-2d7ca98597ba

(1 rows)
cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
InvalidRequest: code=2200 [Invalid query] message="No keyspace has been 
specified. USE a keyspace, or explicitly specify keyspace.tablename"
cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
ServerError: 
cqlsh> 
{noformat}

The table doesn't exist. A table with a similar name does. You'll notice 
that the new table has the same {{cf_id}} as the one in the error message 
above. Why does Cassandra confuse the two?

Our expectation is for the {{DROP TABLE IF EXISTS}} to silently succeed.

Similarly, we expect a plain {{DROP TABLE}} to fail because the table doesn't 
exist. That's not what happens, as you can see below:

{noformat}
cqlsh> DROP TABLE our_keyspace.native_address_book_uploads_cache;
ServerError: 
cqlsh> DROP TABLE our_keyspace.native_address_book_uploads_cacheadsfasdf;
InvalidRequest: code=2200 [Invalid query] message="unconfigured columnfamily 
native_address_book_uploads_cacheadsfasdf"
{noformat}



I cannot reproduce the problem with entirely new tables.
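Until the server-side confusion is fixed, a defensive client-side workaround (a sketch only; the helper name is mine, and `existing_tables` stands for the {{columnfamily_name}} values you would read from {{system.schema_columnfamilies}} for the keyspace) is to check for an exact name match yourself before issuing the DROP at all:

```python
def drop_statement_if_exists(keyspace, table, existing_tables):
    """Build a DROP TABLE statement only when `table` is an exact match in
    `existing_tables` (e.g. the columnfamily_name values read from
    system.schema_columnfamilies for `keyspace`); otherwise return None."""
    if table in set(existing_tables):
        return "DROP TABLE {0}.{1};".format(keyspace, table)
    return None  # table absent, or only a similarly named one exists: skip the DROP
```

With this check, a similarly named table such as {{native_address_book_upload_cache}} never triggers a DROP of {{native_address_book_uploads_cache}}, so the buggy server-side lookup is never exercised.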


[jira] [Created] (CASSANDRA-11429) DROP TABLE IF EXISTS fails against table with similar name

2016-03-24 Thread Sotirios Delimanolis (JIRA)
Sotirios Delimanolis created CASSANDRA-11429:


 Summary: DROP TABLE IF EXISTS fails against table with similar name
 Key: CASSANDRA-11429
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11429
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sotirios Delimanolis


We had a table named {{our_keyspace.native_address_book_uploads_cache}} (note 
the plural *uploads*), which we dropped. We then created a new table named 
{{our_keyspace.native_address_book_upload_cache}} (note the singular *upload*).

We have a patching component that applies commands to prepare the schema using 
the C# driver. When we deploy, it tries to execute

{noformat}
DROP TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
{noformat}

This fails with

{noformat}
Caught an exception Cassandra.ServerErrorException: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
mismatch (found c712a590-f194-11e5-891d-2d7ca98597ba; expected 
b8b40ed0-f194-11e5-b481-d944f7ad0ce3)
{noformat}

showing the Cassandra Java exception through the C# driver. Note the {{found}} 
cf_id of {{c712a590-f194-11e5-891d-2d7ca98597ba}}.

I can reproduce this with {{cqlsh}}.

{noformat}
selimanolis$ cqlsh
Connected to Default Cluster at hostname:9042.
[cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol v3]
Use HELP for help.
cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
'our_keyspace' and columnfamily_name ='native_address_book_uploads_cache';

 keyspace_name | columnfamily_name | bloom_filter_fp_chance | caching | cf_id | 
column_aliases | comment | compaction_strategy_class | 
compaction_strategy_options | comparator | compression_parameters | 
default_time_to_live | default_validator | dropped_columns | gc_grace_seconds | 
index_interval | is_dense | key_aliases | key_validator | 
local_read_repair_chance | max_compaction_threshold | max_index_interval | 
memtable_flush_period_in_ms | min_compaction_threshold | min_index_interval | 
read_repair_chance | speculative_retry | subcomparator | type | value_alias
---+---++-+---++-+---+-+++--+---+-+--++--+-+---+--+--++-+--+++---+---+--+-

(0 rows)
cqlsh> SELECT cf_id from system.schema_columnfamilies  where keyspace_name = 
'our_keyspace' and columnfamily_name ='native_address_book_upload_cache';

 cf_id
--
 c712a590-f194-11e5-891d-2d7ca98597ba

(1 rows)
cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
InvalidRequest: code=2200 [Invalid query] message="No keyspace has been 
specified. USE a keyspace, or explicitly specify keyspace.tablename"
cqlsh> drop TABLE IF EXISTS our_keyspace.native_address_book_uploads_cache;
ServerError: 
cqlsh> 
{noformat}

The table doesn't exist. A table with a similar name does. You'll notice 
that the new table has the same {{cf_id}} as the one in the error message 
above. Why does Cassandra confuse the two?

Our expectation is for the {{DROP TABLE IF EXISTS}} to silently succeed.

Similarly, we expect a plain {{DROP TABLE}} to fail because the table doesn't 
exist. That's not what happens, as you can see below:

{noformat}
cqlsh> DROP TABLE our_keyspace.native_address_book_uploads_cache;
ServerError: 
cqlsh> DROP TABLE our_keyspace.native_address_book_uploads_cacheadsfasdf;
InvalidRequest: code=2200 [Invalid query] message="unconfigured columnfamily 
native_address_book_uploads_cacheadsfasdf"
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8818) Creating keyspace then table fails with non-prepared query

2015-02-17 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325307#comment-14325307
 ] 

Sotirios Delimanolis commented on CASSANDRA-8818:
-

Shouldn't the ConsistencyLevel of All account for that?


 Creating keyspace then table fails with non-prepared query
 --

 Key: CASSANDRA-8818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8818
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers (now out of tree)
Reporter: Jonathan New

 Hi, I'm not sure if this is a driver or cassandra issue, so please feel free 
 to move to the appropriate component. I'm using C# on mono (linux), and the 
 2.5.0 cassandra driver for C#.  We have a cluster of 3 nodes, and we noticed 
 that when we created a keyspace, then a table for that keyspace in quick 
 succession it would fail frequently. I put our approximate code below.
 Additionally, we noticed that if we did a prepared statement instead of just 
 executing the query, it would succeed. It also appeared that running the 
 queries from a .cql file (outside of our C# program) would succeed as well. 
 In this case with tracing on, we saw that it was "Preparing statement".
 Please let me know if you need additional details. Thanks!
 {noformat}
 var pooling = new PoolingOptions ()
 .SetMaxConnectionsPerHost (HostDistance.Remote, 24) 
 .SetHeartBeatInterval (1000);
 var queryOptions = new QueryOptions ()
 .SetConsistencyLevel(ConsistencyLevel.ALL);
 var builder = Cluster.Builder ()
 .AddContactPoints (contactPoints)
 .WithPort (9042)
 .WithPoolingOptions (pooling)
 .WithQueryOptions (queryOptions)
 .WithQueryTimeout (15000);
 String keyspaceQuery = @"CREATE KEYSPACE IF NOT EXISTS metadata WITH 
 replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND 
 durable_writes = true;";
 String tableQuery = @"CREATE TABLE IF NOT EXISTS metadata.patch_history (
   metadata_key text,
   patch_version int,
   applied_date timestamp,
   patch_file text,
 PRIMARY KEY (metadata_key, patch_version)
 ) WITH CLUSTERING ORDER BY (patch_version DESC)
   AND bloom_filter_fp_chance = 0.01
   AND caching = 'KEYS_ONLY'
   AND comment = ''
   AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
   AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
   AND dclocal_read_repair_chance = 0.1
   AND default_time_to_live = 0
   AND gc_grace_seconds = 864000
   AND memtable_flush_period_in_ms = 0
   AND read_repair_chance = 0.0
   AND speculative_retry = '99.0PERCENTILE';";
 using (var session = cluster.Connect ()) {
   session.Execute(keyspaceQuery);
   session.Execute(tableQuery);
 }
 {noformat}
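One common mitigation for "CREATE KEYSPACE then CREATE TABLE in quick succession" failures (a sketch only, not something this thread confirms; the helper name and parameters are illustrative, and `execute` stands for whatever your driver exposes, e.g. {{session.Execute}} in C#) is to retry each DDL statement, giving the cluster time to reach schema agreement between statements:

```python
import time

def execute_ddl(execute, statement, retries=5, delay=1.0):
    """Run one DDL statement through a caller-supplied `execute` callable,
    retrying on failure so the cluster has time to reach schema agreement
    before the next statement is issued."""
    last_error = None
    for attempt in range(retries):
        try:
            return execute(statement)
        except Exception as e:  # a real client would narrow this to driver errors
            last_error = e
            time.sleep(delay)
    raise last_error
```

Prepared statements "working" in the reporter's test is consistent with this: preparing adds a round trip, which gives the schema change extra time to propagate.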





[jira] [Comment Edited] (CASSANDRA-8818) Creating keyspace then table fails with non-prepared query

2015-02-17 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325307#comment-14325307
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-8818 at 2/18/15 2:06 AM:
--

Shouldn't the ConsistencyLevel of ALL account for that?



was (Author: s_delima):
Shouldn't the ConsistencyLevel of All account for that?







[jira] [Commented] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-19 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283416#comment-14283416
 ] 

Sotirios Delimanolis commented on CASSANDRA-8638:
-

See the wikipedia article here: http://en.wikipedia.org/wiki/Byte_order_mark

1. It's just a few bytes added at the beginning of a (text) file's content. 
These bytes are typically added when the file's content is meant to be 
exchanged between environments, to signal the file's encoding and byte order. 

In my case, I was developing in MonoDevelop, and the IDE seemed to introduce a 
UTF-8 BOM for regular files. I've seen other IDEs like Eclipse do the same 
thing (e.g. for XML files). 

2-3. The wikipedia article shows some of the BOMs for various encodings. 
Special care should be taken when these characters appear in the middle 
of the content as opposed to the start.
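For illustration, a short Python sketch (the names are mine, not from cqlsh) that detects and strips the marks listed in that article, using the constants the stdlib already provides:

```python
import codecs

# Common byte order marks, with the UTF-32 marks checked before UTF-16,
# since BOM_UTF16_LE is a prefix of BOM_UTF32_LE.
KNOWN_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def strip_bom(data):
    """Return (detected encoding or None, data with any leading BOM removed)."""
    for bom, name in KNOWN_BOMS:
        if data.startswith(bom):
            return name, data[len(bom):]
    return None, data
```

Per point 2-3, this deliberately only looks at the start of the buffer; a U+FEFF in the middle of the content is a zero-width no-break space, not a BOM, and should be left alone.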


 CQLSH -f option should ignore BOM in files
 --

 Key: CASSANDRA-8638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: Red Hat linux
Reporter: Sotirios Delimanolis
Priority: Trivial
  Labels: cqlsh, lhf
 Fix For: 2.1.3


 I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 
 The file contained the simple (plus BOM)
 {noformat}
 CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
 -- and another CREATE TABLE bucket_flags query
 {noformat}
 I executed the script
 {noformat}
 [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
 /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
 /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
 test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
 '3'}  AND durable_writes = true; 
 /home/selimanolis/Schema/patches/setup.cql:2:  ^
 /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
 ErrorMessage code=2300 [Query invalid because of configuration issue] 
 message=Cannot add column family 'bucket_flags' to non existing keyspace 
 'test'.
 {noformat}
 I realized much later that the file had a BOM which was seemingly screwing 
 with how CQLSH parsed the file.
 It would be nice to have CQLSH ignore the BOM when processing files.





[jira] [Created] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-16 Thread Sotirios Delimanolis (JIRA)
Sotirios Delimanolis created CASSANDRA-8638:
---

 Summary: CQLSH -f option should ignore BOM in files
 Key: CASSANDRA-8638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Drivers (now out of tree)
 Environment: Red Hat linux
Reporter: Sotirios Delimanolis
Priority: Trivial


I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 

The file contained the simple (plus BOM)

{noformat}
CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 

-- and another CREATE TABLE bucket_flags query
{noformat}

I executed the script

{noformat}
[~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
/home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
/home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  
AND durable_writes = true; 
/home/selimanolis/Schema/patches/setup.cql:2:  ^
/home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
ErrorMessage code=2300 [Query invalid because of configuration issue] 
message=Cannot add column family 'bucket_flags' to non existing keyspace 
'test'.
{noformat}

I realized much later that the file had a BOM which was seemingly screwing with 
how CQLSH parsed the file.

It would be nice to have CQLSH ignore the BOM when processing files.






[jira] [Updated] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-16 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8638:

Description: 
I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 

The file contained the simple (plus BOM)

{noformat}
CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 

-- and another CREATE TABLE bucket_flags query
{noformat}

I executed the script

{noformat}
[~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
/home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
/home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  
AND durable_writes = true; 
/home/selimanolis/Schema/patches/setup.cql:2:  ^
/home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
ErrorMessage code=2300 [Query invalid because of configuration issue] 
message=Cannot add column family 'bucket_flags' to non existing keyspace 
'test'.
{noformat}

I realized much later that the file had a BOM which was seemingly screwing with 
how CQLSH parsed the file.

It would be nice to have CQLSH ignore the BOM when processing files.

(The C# driver also failed when executing the content of the script

{noformat}
var session = cluster.Connect ();
string script = File.ReadAllText (schemaLocation);
session.Execute (script);
{noformat}

but this can be avoided by ignoring the BOM application-side.)
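Application-side in Python, for comparison, the stdlib already handles the ignoring: the {{utf-8-sig}} codec consumes a leading UTF-8 BOM and is otherwise identical to {{utf-8}}. A minimal sketch of a BOM-tolerant reader (the function name is hypothetical):

```python
import io

def read_cql_script(path):
    # 'utf-8-sig' consumes a leading UTF-8 BOM when present and behaves
    # like plain 'utf-8' otherwise, so one call handles both kinds of file.
    with io.open(path, encoding="utf-8-sig") as f:
        return f.read()
```

Something equivalent at the top of cqlsh's file handling would make the BOM invisible to the CQL parser.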




 CQLSH -f option should ignore BOM in files
 --

 Key: CASSANDRA-8638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Drivers (now out of tree)
 Environment: Red Hat linux
Reporter: Sotirios Delimanolis
Priority: Trivial

 I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 
 The file contained the simple (plus BOM)
 {noformat}
 CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
 -- and another CREATE TABLE bucket_flags query
 {noformat}
 I executed the script
 {noformat}
 [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
 /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
 /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
 test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
 '3'}  AND durable_writes = true; 
 /home/selimanolis/Schema/patches/setup.cql:2:  ^
 /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
 ErrorMessage code=2300 [Query invalid because of configuration issue] 
 message=Cannot add column family 'bucket_flags' to non existing keyspace 
 'test'.
 {noformat}
 I realized much later that the file had a BOM which was seemingly screwing 
 with how CQLSH parsed the file.
 It would be nice to have CQLSH ignore the BOM when processing files.
 (The C# driver also failed when executing the content of the script
 {noformat}
 var session = cluster.Connect ();
 string script = File.ReadAllText (schemaLocation);
 session.Execute (script);
 {noformat}
 but this can be avoided by ignoring the BOM application-side.)





[jira] [Updated] (CASSANDRA-8638) CQLSH -f option should ignore BOM in files

2015-01-16 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8638:

Description: 
I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 

The file contained the simple (plus BOM)

{noformat}
CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 

-- and another CREATE TABLE bucket_flags query
{noformat}

I executed the script

{noformat}
[~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
/home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
/home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  
AND durable_writes = true; 
/home/selimanolis/Schema/patches/setup.cql:2:  ^
/home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
ErrorMessage code=2300 [Query invalid because of configuration issue] 
message=Cannot add column family 'bucket_flags' to non existing keyspace 
'test'.
{noformat}

I realized much later that the file had a BOM which was seemingly screwing with 
how CQLSH parsed the file.

It would be nice to have CQLSH ignore the BOM when processing files.




 CQLSH -f option should ignore BOM in files
 --

 Key: CASSANDRA-8638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8638
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Drivers (now out of tree)
 Environment: Red Hat linux
Reporter: Sotirios Delimanolis
Priority: Trivial
  Labels: cqlsh, lhf

 I fell into the byte order mark trap trying to execute a CQL script through CQLSH. 
 The file contained the simple (plus BOM)
 {noformat}
 CREATE KEYSPACE IF NOT EXISTS xobni WITH replication = {'class': 
 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true; 
 -- and another CREATE TABLE bucket_flags query
 {noformat}
 I executed the script
 {noformat}
 [~]$ cqlsh --file /home/selimanolis/Schema/patches/setup.cql 
 /home/selimanolis/Schema/patches/setup.cql:2:Invalid syntax at char 1
 /home/selimanolis/Schema/patches/setup.cql:2:  CREATE KEYSPACE IF NOT EXISTS 
 test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 
 '3'}  AND durable_writes = true; 
 /home/selimanolis/Schema/patches/setup.cql:2:  ^
 /home/selimanolis/Schema/patches/setup.cql:22:ConfigurationException: 
 ErrorMessage code=2300 [Query invalid because of configuration issue] 
 message=Cannot add column family 'bucket_flags' to non existing keyspace 
 'test'.
 {noformat}
 I realized much later that the file had a BOM which was seemingly screwing 
 with how CQLSH parsed the file.
 It would be nice to have CQLSH ignore the BOM when processing files.





[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting wrong default_validation_class for a Column Family.

For example, 


[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};

but

[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

Note how the default column value validator and the cell sorting are UTF8Type 
rather than the BytesType reported earlier.

If I populate the column family and list its rows, I get

[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can't see the row. I can temporarily fix this by setting the 
default_validation_class back to BytesType:

[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]

If I do a DESCRIBE again, though, LIST stops working again.

[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can access the column family rows with other clients, the C# driver for 
example.
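The "String didn't validate." message is the CLI insisting on UTF-8 validation for values it believes are UTF8Type. A client that honors BytesType avoids this by falling back to a hex rendering when a cell value is not valid UTF-8; a rough Python illustration of that behaviour (editor's sketch, not Cassandra code):

```python
import binascii

def render_cell(value):
    """Render a raw cell value the way a BytesType-aware client might:
    decode as UTF-8 when possible, otherwise fall back to hex instead of
    failing validation."""
    try:
        return value.decode("utf-8")
    except UnicodeDecodeError:
        return binascii.hexlify(value).decode("ascii")
```

This is why other clients such as the C# driver can still list the rows: they never force a UTF8Type interpretation onto BytesType data.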



[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting wrong default_validation_class for a Column Family.

For example, 

code
[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};
{code}
but

[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]

If I do a DESCRIBE again, though, LIST stops working again.

[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can access the column family rows with other clients, the C# driver for 
example.
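The "String didn't validate." failure above comes down to the default column value validator: BytesType accepts any byte sequence, while UTF8Type must successfully decode the value as UTF-8. A minimal sketch of that difference (illustrative Python only; these helpers are not Cassandra code):

```python
# Illustrative only: mimics how BytesType vs. UTF8Type validators
# treat the same raw cell value. Not Cassandra code.

def validate_bytes_type(value: bytes) -> bytes:
    """BytesType: every byte sequence is valid and returned as-is."""
    return value

def validate_utf8_type(value: bytes) -> str:
    """UTF8Type: the value must decode as UTF-8, else validation fails."""
    try:
        return value.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("String didn't validate.")

raw = b"\xca\xfe\xba\xbe"               # arbitrary binary cell value
assert validate_bytes_type(raw) == raw  # BytesType accepts it
try:
    validate_utf8_type(raw)             # UTF8Type rejects it, matching
except ValueError as e:                 # the CLI error shown above
    print(e)
```

So once DESCRIBE flips the CLI's notion of the validator back to UTF8Type, any binary row value triggers exactly this error.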


  was:
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting the wrong default_validation_class for a Column Family.

For example, 

[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and 

[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269964#comment-14269964
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

{noformat}
cqlsh:MyKeyspace> describe table SomeColumnFamily;

CREATE TABLE MyKeyspace.SomeColumnFamily (
key blob,
column1 blob,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'sstable_size_in_mb': '200', 'max_threshold': '32', 
'min_threshold': '4', 'tombstone_compaction_interval': '300', 
'tombstone_threshold': '0.1', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 10800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = 'NONE';
{noformat}


 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Priority: Minor
 Fix For: 2.1.3


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting the wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {noformat}
 Note how the default column value validator and cell sorting are UTF8Type 
 rather than the BytesType reported earlier.
 If I populate the column family and list its rows, I get
 {noformat}
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 String didn't validate.
 {noformat}
 I can't see the row. I can temporarily fix this by setting the 
 default_column_validator
 {noformat}
 [default@MyKeyspace] update column family SomeColumnFamily with 
 default_validation_class = BytesType;
 0fba13e4-aac6-3963-ad65-ba354d99ebdc
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 => (name=some name, value=some value, timestamp=635540144263687300)
 ---
 [More RowKeys]
 {noformat}
 If I do a DESCRIBE again, though, LIST stops working again.
 {noformat}
 [default@KeySpace] describe SomeColumnFamily;
 WARNING: CQL3 tables 

[jira] [Comment Edited] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269964#comment-14269964
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-8585 at 1/8/15 8:01 PM:
-

Everything is a blob.

{noformat}
cqlsh:MyKeyspace> describe table SomeColumnFamily;

CREATE TABLE MyKeyspace.SomeColumnFamily (
key blob,
column1 blob,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'sstable_size_in_mb': '200', 'max_threshold': '32', 
'min_threshold': '4', 'tombstone_compaction_interval': '300', 
'tombstone_threshold': '0.1', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 10800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = 'NONE';
{noformat}
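Because every column in the table above is a blob, a client that treats cell values as raw bytes (the C# driver in this report, or cqlsh rendering blob columns as hex) can always display the rows, while a UTF-8-only view fails on binary payloads. A sketch of the two rendering strategies (illustrative Python; `render_cell` is a made-up helper, not a Cassandra API):

```python
# Illustrative only: two ways a client can render a raw cell value.
def render_cell(value: bytes, as_blob: bool) -> str:
    if as_blob:
        return "0x" + value.hex()   # blob view: always succeeds
    return value.decode("utf-8")    # text view: fails on binary data

payload = b"\x00\x01\xff"
print(render_cell(payload, as_blob=True))   # 0x0001ff
```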



was (Author: s_delima):
{noformat}
cqlsh:MyKeyspace> describe table SomeColumnFamily;

CREATE TABLE MyKeyspace.SomeColumnFamily (
key blob,
column1 blob,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'sstable_size_in_mb': '200', 'max_threshold': '32', 
'min_threshold': '4', 'tombstone_compaction_interval': '300', 
'tombstone_threshold': '0.1', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 10800
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = 'NONE';
{noformat}


 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Priority: Minor
 Fix For: 2.1.3


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting the wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: 

[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269974#comment-14269974
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

It seems like this: http://wiki.apache.org/cassandra/ThriftExamples#C.23 

I'll get back soon with more details. I have to find the dev that originally 
chose it.

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting the wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {noformat}
 Note how the default column value validator and cell sorting are UTF8Type 
 rather than the BytesType reported earlier.
 If I populate the column family and list its rows, I get
 {noformat}
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 String didn't validate.
 {noformat}
 I can't see the row. I can temporarily fix this by setting the 
 default_column_validator
 {noformat}
 [default@MyKeyspace] update column family SomeColumnFamily with 
 default_validation_class = BytesType;
 0fba13e4-aac6-3963-ad65-ba354d99ebdc
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 => (name=some name, value=some value, timestamp=635540144263687300)
 ---
 [More RowKeys]
 {noformat}
 If I do a DESCRIBE again, though, LIST stops working again.
 {noformat}
 [default@KeySpace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 

[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269978#comment-14269978
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

Then it would seem the issue is with CLI.

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting the wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {noformat}
 Note how the default column value validator and cell sorting are UTF8Type 
 rather than the BytesType reported earlier.
 If I populate the column family and list its rows, I get
 {noformat}
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 String didn't validate.
 {noformat}
 I can't see the row. I can temporarily fix this by setting the 
 default_column_validator
 {noformat}
 [default@MyKeyspace] update column family SomeColumnFamily with 
 default_validation_class = BytesType;
 0fba13e4-aac6-3963-ad65-ba354d99ebdc
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 => (name=some name, value=some value, timestamp=635540144263687300)
 ---
 [More RowKeys]
 {noformat}
 If I do a DESCRIBE again, though, LIST stops working again.
 {noformat}
 [default@KeySpace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 

[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting the wrong default_validation_class for a Column Family.

For example, 

{noformat}
[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};
{noformat}

but

{noformat}
[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
{noformat}

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

{noformat}
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

{noformat}
[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]
{noformat}

If I do a DESCRIBE again, though, LIST stops working again.

{noformat}
[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can access the column family rows with other clients, the C# driver for 
example.

With C#
{code:c#}
var keyspaceDef = pool.DescribeKeyspace ("MyKeyspace");
var cfDefs = keyspaceDef.Cf_defs;
foreach (CfDef cfDef in cfDefs) {
if (cfDef.Name == "SomeColumnFamily") {
Console.WriteLine (Default 

[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270036#comment-14270036
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

That's definitely good to know. I'll make sure that the one responsible is 
aware (I did my upgrade locally, nothing serious).  Cheers!

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3

 Attachments: schema_columnfamilies.out, schema_columns.out


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting the wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {noformat}
 Note how the default column value validator and cell sorting are UTF8Type 
 rather than the BytesType reported earlier.
 If I populate the column family and list its rows, I get
 {noformat}
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 String didn't validate.
 {noformat}
 I can't see the row. I can temporarily fix this by setting the 
 default_column_validator
 {noformat}
 [default@MyKeyspace] update column family SomeColumnFamily with 
 default_validation_class = BytesType;
 0fba13e4-aac6-3963-ad65-ba354d99ebdc
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 => (name=some name, value=some value, timestamp=635540144263687300)
 ---
 [More RowKeys]
 {noformat}
 If I do a DESCRIBE again, though, LIST stops working again.
 {noformat}
 [default@KeySpace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction 

[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting the wrong default_validation_class for a Column Family.

For example, 

{noformat}
[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};
{noformat}

but

{noformat}
[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
    tombstone_compaction_interval: 300
    sstable_size_in_mb: 200
    tombstone_threshold: 0.1
  Compression Options:
    sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
{noformat}

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

{noformat}
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

{noformat}
[default@MyKeyspace] update column family SomeColumnFamily with default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]
{noformat}

If I do a DESCRIBE again, though, LIST stops working again.

{noformat}
[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
    tombstone_compaction_interval: 300
    sstable_size_in_mb: 200
    tombstone_threshold: 0.1
  Compression Options:
    sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can access the column family rows with other clients, the C# driver for 
example.
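For context, the "String didn't validate." message is consistent with the CLI applying the wrongly reported UTF8Type validator to raw BytesType cells: arbitrary bytes are not necessarily valid UTF-8. A minimal, illustrative sketch of that failure mode (plain Python; utf8_validates is a stand-in for the idea, not Cassandra's actual UTF8Type code):

```python
# Stand-in for a UTF8Type-style validator: a cell value passes only
# if its bytes decode as UTF-8. BytesType permits arbitrary bytes,
# which is why cells fail once the CLI believes the validator is UTF8Type.
def utf8_validates(cell_value: bytes) -> bool:
    try:
        cell_value.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(utf8_validates(b"some value"))    # True: ASCII bytes are valid UTF-8
print(utf8_validates(b"\xff\xfe\x01"))  # False: rejected, like "String didn't validate."
```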




[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269970#comment-14269970
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

The C# driver (not CQL) queries the data perfectly. I don't know whether it's 
native or Thrift-based either. (The driver is old.)

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3



[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Attachment: schema_columns.out
schema_columnfamilies.out

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3

 Attachments: schema_columnfamilies.out, schema_columns.out



[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270015#comment-14270015
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

Updated description and attached files. Let me know if anything is missing.

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3

 Attachments: schema_columnfamilies.out, schema_columns.out



[jira] [Commented] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270023#comment-14270023
 ] 

Sotirios Delimanolis commented on CASSANDRA-8585:
-

Yeah, I discovered this while starting to migrate to CQL. I was just worried 
that it might affect more than the cassandra-cli, but it seems that's not the 
case. Thanks for checking. I don't plan on providing a fix either. Thank you!
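One way to confirm that only cassandra-cli's reporting is affected (and not the stored schema) is to read the schema tables directly with CQL. A sketch, assuming the Cassandra 2.1 system-schema layout; the exact column names (comparator, default_validator) should be checked against the running version:

{noformat}
-- Hypothetical check against the 2.1 system schema: the stored
-- default_validator should still read as BytesType even when
-- cassandra-cli's DESCRIBE claims UTF8Type.
SELECT columnfamily_name, comparator, default_validator
FROM system.schema_columnfamilies
WHERE keyspace_name = 'MyKeyspace';
{noformat}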

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3

 Attachments: schema_columnfamilies.out, schema_columns.out



[jira] [Comment Edited] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269970#comment-14269970
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-8585 at 1/8/15 8:09 PM:
-

The C# driver (not CQL) works for querying data perfectly. It seems like the 
driver uses Thrift. (The driver is old). 


was (Author: s_delima):
The C# driver (not CQL) works for querying data perfectly. I don't know if it's 
native or thrift based either. (The driver is old). 

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3



[jira] [Created] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)
Sotirios Delimanolis created CASSANDRA-8585:
---

 Summary: Thrift CLI client reporting inconsistent column family 
structure after upgrade to Cassandra 2.1.2
 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis


After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting wrong default_validation_class for a Column Family.

For example, 

[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};

but

[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]

If I do a DESCRIBE again, though, LIST stops working again.

[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.

I can access the column family rows with other clients, the C# driver for 
example.
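While waiting for a fix, the mismatch is easy to check for from a script. Below is a minimal, hypothetical sketch (plain-text parsing of the CLI output quoted above; none of these names are a Cassandra API) that flags the fields where `show schema` and `describe` disagree:

```python
# Hypothetical helper: diff the validator fields reported by the two CLI
# views ('show schema' vs 'describe'). Values are taken from the report
# above; the parsing is illustrative, not a Cassandra API.

def parse_properties(output):
    """Turn 'Key: value' lines into a dict, shortening marshal class names."""
    props = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            # 'org.apache.cassandra.db.marshal.BytesType' -> 'BytesType'
            props[key.strip()] = value.strip().rsplit(".", 1)[-1]
    return props

describe_output = """\
Key Validation Class: org.apache.cassandra.db.marshal.BytesType
Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type"""

# What 'show schema' reported for the same column family
# (key_validation_class, default_validation_class, comparator)
show_schema = {
    "Key Validation Class": "BytesType",
    "Default column value validator": "BytesType",
    "Cells sorted by": "BytesType",
}

described = parse_properties(describe_output)
mismatches = {key: (show_schema[key], described[key])
              for key in show_schema if show_schema[key] != described[key]}
print(mismatches)
```

Run against the outputs quoted in this report, it flags the default column value validator and the comparator, matching the behavior described (and after the temporary `update column family` workaround plus another DESCRIBE, the same two fields flip back).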




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269970#comment-14269970
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-8585 at 1/8/15 8:09 PM:
-

The C# driver (not CQL) works for querying data perfectly. It seems like the 
driver uses Thrift because it has a reference to a library called Thrift. (The 
driver is old). 


was (Author: s_delima):
The C# driver (not CQL) works for querying data perfectly. It seems like the 
driver uses Thrift. (The driver is old). 

 Thrift CLI client reporting inconsistent column family structure after 
 upgrade to Cassandra 2.1.2
 -

 Key: CASSANDRA-8585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8585
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Sotirios Delimanolis
Assignee: Philip Thompson
Priority: Minor
 Fix For: 2.1.3


 After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the 
 Thrift CLI client started reporting wrong default_validation_class for a 
 Column Family.
 For example, 
 {noformat}
 [default@MyKeyspace] show schema;
 [...]
 create column family SomeColumnFamily
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 10800
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and cells_per_row_to_cache = '0'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
 'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 {noformat}
 but
 {noformat}
 [default@MyKeyspace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: default
   Index interval: default
   Speculative Retry: NONE
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   Compaction Strategy Options:
 tombstone_compaction_interval: 300
 sstable_size_in_mb: 200
 tombstone_threshold: 0.1
   Compression Options:
 sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
 {noformat}
 Note how the default column value validator and cell sorting are UTF8Type 
 rather than the BytesType reported earlier.
 If I populate the column family and list its rows, I get
 {noformat}
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 String didn't validate.
 {noformat}
 I can't see the row. I can temporarily fix this by setting the 
 default_column_validator
 {noformat}
 [default@MyKeyspace] update column family SomeColumnFamily with 
 default_validation_class = BytesType;
 0fba13e4-aac6-3963-ad65-ba354d99ebdc
 [default@MyKeyspace] list SomeColumnFamily;
 Using default limit of 100
 Using default cell limit of 100
 ---
 RowKey: SomeRowKey
 => (name=some name, value=some value, timestamp=635540144263687300)
 ---
 [More RowKeys]
 {noformat}
 If I do a DESCRIBE again, though, LIST stops working again.
 {noformat}
 [default@KeySpace] describe SomeColumnFamily;
 WARNING: CQL3 tables are intentionally omitted from 'describe' output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.
 ColumnFamily: SomeColumnFamily
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 10800
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Caching: KEYS_ONLY
   

[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting wrong default_validation_class for a Column Family.

For example, 

{noformat}
[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};
{noformat}

but

{noformat}
[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
{noformat}

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

{noformat}
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

{noformat}
[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]
{noformat}

If I do a DESCRIBE again, though, LIST stops working again.

{noformat}
[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can access the column family rows with other clients, the C# driver for 
example.

With C#

{code:}
var keyspaceDef = pool.DescribeKeyspace("MyKeyspace");
var cfDefs = keyspaceDef.Cf_defs;
foreach (CfDef cfDef in cfDefs) {
    if (cfDef.Name == "SomeColumnFamily") {
        Console.WriteLine("Default validation class: " + cfDef.Default_validation_class);

[jira] [Updated] (CASSANDRA-8585) Thrift CLI client reporting inconsistent column family structure after upgrade to Cassandra 2.1.2

2015-01-08 Thread Sotirios Delimanolis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sotirios Delimanolis updated CASSANDRA-8585:

Description: 
After upgrading from Cassandra 2.0.6.4 to Cassandra 2.1.2-SNAPSHOT, the Thrift 
CLI client started reporting wrong default_validation_class for a Column Family.

For example, 

{noformat}
[default@MyKeyspace] show schema;
[...]
create column family SomeColumnFamily
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 10800
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and cells_per_row_to_cache = '0'
  and default_time_to_live = 0
  and speculative_retry = 'NONE'
  and compaction_strategy_options = {'tombstone_compaction_interval' : '300', 
'sstable_size_in_mb' : '200', 'tombstone_threshold' : '0.1'}
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};
{noformat}

but

{noformat}
[default@MyKeyspace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
{noformat}

Note how the default column value validator and cell sorting are UTF8Type rather 
than the BytesType reported earlier.

If I populate the column family and list its rows, I get

{noformat}
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can't see the row. I can temporarily fix this by setting the 
default_column_validator

{noformat}
[default@MyKeyspace] update column family SomeColumnFamily with 
default_validation_class = BytesType;
0fba13e4-aac6-3963-ad65-ba354d99ebdc
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
=> (name=some name, value=some value, timestamp=635540144263687300)
---
[More RowKeys]
{noformat}

If I do a DESCRIBE again, though, LIST stops working again.

{noformat}
[default@KeySpace] describe SomeColumnFamily;

WARNING: CQL3 tables are intentionally omitted from 'describe' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

ColumnFamily: SomeColumnFamily
  Key Validation Class: org.apache.cassandra.db.marshal.BytesType
  Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 10800
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  DC Local Read repair chance: 0.0
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: default
  Index interval: default
  Speculative Retry: NONE
  Built indexes: []
  Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Compaction Strategy Options:
tombstone_compaction_interval: 300
sstable_size_in_mb: 200
tombstone_threshold: 0.1
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
[default@MyKeyspace] list SomeColumnFamily;
Using default limit of 100
Using default cell limit of 100
---
RowKey: SomeRowKey
String didn't validate.
{noformat}

I can access the column family rows with other clients, the C# driver for 
example.

With C#

{code:c#}
var keyspaceDef = pool.DescribeKeyspace("MyKeyspace");
var cfDefs = keyspaceDef.Cf_defs;
foreach (CfDef cfDef in cfDefs) {
    if (cfDef.Name == "SomeColumnFamily") {
Console.WriteLine (Default