[jira] [Updated] (CASSANDRA-13955) NullPointerException when using CqlBulkOutputFormat

2017-10-13 Thread Anish (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anish updated CASSANDRA-13955:
--
Description: 
I am able to insert reducer output by using CqlOutputFormat. For performance 
reasons (I have a large number of inserts), I want to switch over to 
CqlBulkOutputFormat. As per the documentation, everything remains the same 
except changing the output format for the reducers. But I get a 
NullPointerException on the following line of CqlBulkRecordWriter:

{code}
DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(Integer.parseInt(conf.get(STREAM_THROTTLE_MBITS,
 "0")));
{code}

This is because {{conf}} is null in DatabaseDescriptor. I don't see any call 
where it would get initialized by the reducer's CqlBulkOutputFormat.

Unfortunately, I could not find any documentation or samples showing how to use 
CqlBulkOutputFormat.
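The failing call could be made defensive by reading the throttle setting with an explicit default before it is parsed. The sketch below is illustrative only: it uses a plain {{Map}} as a stand-in for the Hadoop {{Configuration}}, and the key name is an assumption, not the actual constant in CqlBulkRecordWriter.

```java
import java.util.Map;

// Hypothetical null-safe variant of the failing call in CqlBulkRecordWriter.
// A plain Map stands in for Hadoop's Configuration; the key name is assumed.
public class ThrottleConfigSketch
{
    static final String STREAM_THROTTLE_MBITS =
        "mapreduce.output.bulkoutputformat.streamthrottlembits"; // assumed key

    // Returns the configured throttle in Mbits/s, defaulting to 0 (unthrottled)
    // when the configuration or the key is absent, instead of risking an NPE.
    public static int streamThrottleMbits(Map<String, String> conf)
    {
        if (conf == null)
            return 0;
        return Integer.parseInt(conf.getOrDefault(STREAM_THROTTLE_MBITS, "0"));
    }
}
```

A guard like this would only mask the missing initialization, of course; the underlying question of who is supposed to set up the configuration for the writer remains.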

  was:
I am able to insert data of reducers by using CqlOutputFormat. Due to 
performance reasons (I have large amount of inserts), I want to switch over to 
CqlBulkOutputFormat. As per the documentation, everything remains same except 
changing the format for reducers. But I get null pointer exception on line 
below of CqlBulkRecordWriter

{code}
DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(Integer.parseInt(conf.get(STREAM_THROTTLE_MBITS,
 "0")));
{code}

This is because "conf" is null in DatabaseDescriptor. I don't see any calls 
where this would get initialized.

Unfortunately, I could not find any documentation or samples of using 
CqlBulkOutputFormat.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13955) NullPointerException when using CqlBulkOutputFormat

2017-10-13 Thread Anish (JIRA)
Anish created CASSANDRA-13955:
-

 Summary: NullPointerException when using CqlBulkOutputFormat
 Key: CASSANDRA-13955
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13955
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra-all:3.11.0
Reporter: Anish








[jira] [Commented] (CASSANDRA-13949) java.lang.ArrayIndexOutOfBoundsException while executing query

2017-10-13 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204196#comment-16204196
 ] 

Jason Brown commented on CASSANDRA-13949:
-

I've created a simple patch which just updates the jackson jars:

||3.11||trunk||
|[branch|https://github.com/jasobrown/cassandra/tree/13949-3.11]|[branch|https://github.com/jasobrown/cassandra/tree/13949-trunk]|
|[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/370/]|[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/371/]|
|[utests|https://circleci.com/gh/jasobrown/cassandra/tree/13949-3.11]|[utests|https://circleci.com/gh/jasobrown/cassandra/tree/13949-trunk]|
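For context, the crash site is jackson's {{JsonStringEncoder.quoteAsString}}, which Cassandra's {{Json.quoteAsJsonString}} delegates to when re-serializing a text column for {{SELECT JSON}}. The sketch below is a simplified, self-contained stand-in for that escaping step (it is not the jackson implementation and omits surrogate handling), just to show what the already-escaped JSON payload goes through a second time:

```java
// Minimal JSON string escaper, illustrating what Json.quoteAsJsonString asks
// jackson's JsonStringEncoder to do. Simplified stand-in, not the real code.
public class JsonQuoteSketch
{
    public static String quoteAsJsonString(String in)
    {
        StringBuilder sb = new StringBuilder(in.length());
        for (int i = 0; i < in.length(); i++)
        {
            char c = in.charAt(i);
            switch (c)
            {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:
                    if (c < 0x20)
                        sb.append(String.format("\\u%04x", (int) c)); // other controls
                    else
                        sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

So a stored value like {{{"a": "b"}}} comes back as {{{\"a\": \"b\"}}}; the reported ArrayIndexOutOfBoundsException was a bug inside the old jackson implementation of this step, which is why the jar bump resolves it.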


> java.lang.ArrayIndexOutOfBoundsException while executing query
> --
>
> Key: CASSANDRA-13949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13949
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Setup of 3 servers using the docker image 
> [https://github.com/docker-library/cassandra/blob/ca3c9df03cab318d34377bba0610c741253b0466/3.11/Dockerfile]
>Reporter: Luis E Rodriguez Pupo
>Assignee: Jason Brown
> Fix For: 3.11.x
>
> Attachments: 13949.png, insert.cql, query.cql, schema.cql
>
>
> While executing a query on a table containing a field with (escaped) JSON, 
> the following exception occurs:
> java.lang.ArrayIndexOutOfBoundsException: null
> at 
> org.codehaus.jackson.io.JsonStringEncoder.quoteAsString(JsonStringEncoder.java:141)
>  ~[jackson-core-asl-1.9.2.jar:1.9.2]
> at org.apache.cassandra.cql3.Json.quoteAsJsonString(Json.java:45) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.db.marshal.UTF8Type.toJSONString(UTF8Type.java:66) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection.rowToJson(Selection.java:291) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:431)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.build(Selection.java:417)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:763)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:400)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:378)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:79)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:217)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:233) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_131]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> Find attached the schema of the table, the insertion query with the data 
> provoking the failure, and the failing query.

[jira] [Commented] (CASSANDRA-6542) nodetool removenode hangs

2017-10-13 Thread Amy Tobey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204153#comment-16204153
 ] 

Amy Tobey commented on CASSANDRA-6542:
--

I just ran into it on 3.11.  Only 2 nodes showed the dead node in nodetool 
status. I ran the usual 'nodetool removenode $UUID' and that hung. The logs on 
the other nodes showed that tokens were moved around and replication happened, 
then nothing. I killed the nodetool command, ran nodetool removenode force, and 
now everything is happy again.

> nodetool removenode hangs
> -
>
> Key: CASSANDRA-6542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6542
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Ubuntu 12, 1.2.11 DSE
>Reporter: Eric Lubow
>
> Running *nodetool removenode $host-id* doesn't actually remove the node from 
> the ring.  I've let it run anywhere from 5 minutes to 3 days and there are no 
> messages in the log about it hanging or failing, the command just sits there 
> running.  So the regular response has been to run *nodetool removenode 
> $host-id*, give it about 10-15 minutes and then run *nodetool removenode 
> force*.






[jira] [Assigned] (CASSANDRA-13949) java.lang.ArrayIndexOutOfBoundsException while executing query

2017-10-13 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown reassigned CASSANDRA-13949:
---

Assignee: Jason Brown







[jira] [Commented] (CASSANDRA-13949) java.lang.ArrayIndexOutOfBoundsException while executing query

2017-10-13 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204138#comment-16204138
 ] 

Jason Brown commented on CASSANDRA-13949:
-

bq. still old 1.9.13

According to maven central, [1.9.13 is the most current 
version|http://search.maven.org/#search%7Cga%7C1%7Corg.codehaus.jackson] of 
jackson.

bq. It happens if you run again the query requesting the json 

I did run it a bunch of times, but if the updated jackson is working for you, 
let's just move ahead on that. Patch coming shortly.


[jira] [Updated] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2017-10-13 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov updated CASSANDRA-13917:
-
Fix Version/s: 3.0.15
   3.11.1
   Status: Patch Available  (was: Open)

> COMPACT STORAGE inserts on tables without clusterings accept hidden column1 
> and value columns
> -
>
> Key: CASSANDRA-13917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13917
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: lhf
> Fix For: 3.11.1, 3.0.15
>
>
> Test for the issue:
> {code}
> @Test
> public void testCompactStorage() throws Throwable
> {
> createTable("CREATE TABLE %s (a int PRIMARY KEY, b int, c int) WITH 
> COMPACT STORAGE");
> assertInvalid("INSERT INTO %s (a, b, c, column1) VALUES (?, ?, ?, 
> ?)", 1, 1, 1, ByteBufferUtil.bytes('a'));
> // This one fails with Some clustering keys are missing: column1, 
> which is still wrong
> assertInvalid("INSERT INTO %s (a, b, c, value) VALUES (?, ?, ?, ?)", 
> 1, 1, 1, ByteBufferUtil.bytes('a'));   
> assertInvalid("INSERT INTO %s (a, b, c, column1, value) VALUES (?, ?, 
> ?, ?, ?)", 1, 1, 1, ByteBufferUtil.bytes('a'), ByteBufferUtil.bytes('b'));
> assertEmpty(execute("SELECT * FROM %s"));
> }
> {code}
> Thankfully, these writes are no-ops, even though they succeed.
> {{value}} and {{column1}} should be completely hidden. Fixing this one should 
> be as easy as just adding validations.






[jira] [Commented] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2017-10-13 Thread Aleksandr Sorokoumov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203776#comment-16203776
 ] 

Aleksandr Sorokoumov commented on CASSANDRA-13917:
--

If the table is created with COMPACT STORAGE and a single primary key, e.g.

{noformat}
cqlsh:k> CREATE TABLE t1 (a int PRIMARY KEY, b int, c int) WITH COMPACT STORAGE;
{noformat}

we get the behavior shown in the tests:

{noformat}
cqlsh:k> INSERT INTO t1 (a,b,c,column1) VALUES (1,1,1,'a');
cqlsh:k> select * from t1;

 a | b | c
---+---+---

(0 rows)
{noformat}

Corresponding CFMMetaData and the column definition kinds during the {{INSERT}}:
{noformat}
cfm:
isCompactTable() => true
isStaticCompactTable() => true

Column definitions:
a.kind=PARTITION_KEY
b.kind=STATIC
c.kind=STATIC
column1.kind=CLUSTERING
value.kind=REGULAR
{noformat}

Also, if the table already contains a column named {{column1}}, the hidden 
column will be called {{column2}}:

{noformat}
cqlsh:k> CREATE TABLE t2 (a int PRIMARY KEY, b int, c int, column1 text) WITH 
COMPACT STORAGE;
cqlsh:k> INSERT INTO t2 (a,b,c,column1, column2, value) VALUES 
(1,1,1,'a','a',0xbb);
cqlsh:k> select * from t2;

 a | b | c | column1
---+---+---+-

(0 rows)
{noformat}

If the table is created with COMPACT STORAGE and a compound primary key, it 
works as expected:

{noformat}
cqlsh:k> CREATE TABLE t3 (a int, b int, c int, PRIMARY KEY (a, b)) WITH COMPACT 
STORAGE;
cqlsh:k> INSERT INTO t3 (a,b,c,column1) VALUES (1,1,1,'a');
InvalidRequest: Error from server: code=2200 [Invalid query] message="Undefined 
column name column1"
cqlsh:k> INSERT INTO t3 (a,b,c,column1,value) VALUES (1,1,1,'a',0xff);
InvalidRequest: Error from server: code=2200 [Invalid query] message="Undefined 
column name column1"
cqlsh:k> INSERT INTO t3 (a,b,c,value) VALUES (1,1,1,0xff);
InvalidRequest: Error from server: code=2200 [Invalid query] message="Undefined 
column name value"
{noformat}

Corresponding CFMMetaData during the {{INSERT}}:

{noformat}
cfm.isCompactTable() => true
cfm.isStaticCompactTable() => false
{noformat}

h4. Solution

In {{UpdateStatement.prepareInternal}}, when the CFM is a static compact table, 
check that the columns to be updated are not {{CLUSTERING}} or {{REGULAR}}. If 
they are, "hide" those columns by returning an "Undefined column name" error.
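The proposed validation can be sketched as follows. This is a simplified stand-in with a minimal enum for column kinds (mirroring the dump above); the real patch would operate on CFMetaData/ColumnDefinition inside {{UpdateStatement.prepareInternal}}, so all names here are illustrative:

```java
import java.util.Map;

// Sketch of the proposed check: for a static compact table, the internal
// CLUSTERING ("column1") and REGULAR ("value") definitions must be treated
// as undefined columns. Stand-in types, not the actual Cassandra code.
public class HiddenColumnCheckSketch
{
    enum Kind { PARTITION_KEY, CLUSTERING, STATIC, REGULAR }

    public static void validate(String column, Map<String, Kind> defs, boolean staticCompact)
    {
        Kind kind = defs.get(column);
        // Unknown columns, and hidden internal columns of static compact
        // tables, are both rejected with the same user-facing error.
        if (kind == null || (staticCompact && (kind == Kind.CLUSTERING || kind == Kind.REGULAR)))
            throw new IllegalArgumentException("Undefined column name " + column);
    }
}
```

With this in place, {{INSERT INTO t1 (a,b,c,column1) VALUES ...}} on a static compact table would fail the same way the compound-primary-key case already does.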

Branches:

* [3.0.15|https://github.com/Ge/cassandra/tree/13917-3.0.15]
* [3.11.1|https://github.com/Ge/cassandra/tree/13917-3.11.1]







[jira] [Assigned] (CASSANDRA-13917) COMPACT STORAGE inserts on tables without clusterings accept hidden column1 and value columns

2017-10-13 Thread Aleksandr Sorokoumov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov reassigned CASSANDRA-13917:


Assignee: Aleksandr Sorokoumov







[jira] [Commented] (CASSANDRA-13813) Don't let user drop (or generally break) tables in system_distributed

2017-10-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203638#comment-16203638
 ] 

Aleksey Yeschenko commented on CASSANDRA-13813:
---

Created CASSANDRA-13954 for jmx/nodetool work.

> Don't let user drop (or generally break) tables in system_distributed
> -
>
> Key: CASSANDRA-13813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13813
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> There are currently no particular restrictions on schema modifications to 
> tables of the {{system_distributed}} keyspace. This means you can drop 
> those tables, or even alter them in wrong ways like dropping or renaming 
> columns. All of which is guaranteed to break things (that is, repair if you 
> mess with one of its tables, or MVs if you mess with 
> {{view_build_status}}).
> I'm pretty sure this was never intended and is an oversight in the condition 
> on {{ALTERABLE_SYSTEM_KEYSPACES}} in 
> [ClientState|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/ClientState.java#L397].
>  That condition is such that any keyspace not listed in 
> {{ALTERABLE_SYSTEM_KEYSPACES}} (which happens to be the case for 
> {{system_distributed}}) has no specific restrictions whatsoever, while given 
> the naming it's fair to assume the intention was exactly the opposite.
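The intended semantics can be sketched as follows. This is a simplified stand-in, with hypothetical keyspace sets (the real lists live in Cassandra's ClientState and schema constants), not the actual patch:

```java
import java.util.Set;

// Sketch of the intended check: system keyspaces should be non-alterable
// unless explicitly whitelisted, rather than the reverse. The keyspace sets
// here are illustrative assumptions, not Cassandra's actual constants.
public class SystemKeyspaceAlterSketch
{
    static final Set<String> SYSTEM_KEYSPACES = Set.of(
        "system", "system_schema", "system_distributed", "system_traces", "system_auth");
    static final Set<String> ALTERABLE_SYSTEM_KEYSPACES = Set.of("system_auth");

    public static boolean isAlterable(String keyspace)
    {
        // Non-system keyspaces are freely alterable; system keyspaces only
        // when they appear in the explicit whitelist.
        return !SYSTEM_KEYSPACES.contains(keyspace)
               || ALTERABLE_SYSTEM_KEYSPACES.contains(keyspace);
    }
}
```

Under this reading, {{system_distributed}} would be rejected because it is a system keyspace absent from the whitelist, which matches the naming of {{ALTERABLE_SYSTEM_KEYSPACES}}.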






[jira] [Created] (CASSANDRA-13954) Provide a JMX call to sync schema with local storage

2017-10-13 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-13954:
-

 Summary: Provide a JMX call to sync schema with local storage
 Key: CASSANDRA-13954
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13954
 Project: Cassandra
  Issue Type: New Feature
  Components: Distributed Metadata
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0.x, 3.11.x


As discussed in the CASSANDRA-13813 comments, we need such a call / nodetool 
command to enable the workaround for CASSANDRA-12701.

This is technically a new feature and shouldn't go into 3.0.x; however, in 
practical terms it's part of the solution to CASSANDRA-12701, which is a bug, 
and a prerequisite for CASSANDRA-13813, which is also a bug.






[jira] [Commented] (CASSANDRA-13949) java.lang.ArrayIndexOutOfBoundsException while executing query

2017-10-13 Thread Luis E Rodriguez Pupo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203601#comment-16203601
 ] 

Luis E Rodriguez Pupo commented on CASSANDRA-13949:
---

Hi [~jasobrown], as promised I have tested an image with the libraries replaced 
by a newer (still old, 1.9.13) version. You mentioned you did not get the 
error; it happens if you run the query requesting the JSON again. With the new 
library version the issue apparently gets solved. In this repository 
[https://github.com/lrodriguez2002cu/cassandra-issue-images] I have created 
docker images with the cql files copied inside, plus the commands for 
initializing the database and so on, so that you can reproduce the behavior.

Thanks for the follow-up.



> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_131]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
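For context, the top frame above is JSON string escaping: UTF8Type.toJSONString hands the raw column value to jackson's JsonStringEncoder.quoteAsString, which quotes it for embedding in a JSON document. The sketch below is not jackson's implementation (the real one works on a shared char buffer, which is where the out-of-bounds access occurs); it only illustrates the transformation that code path performs:

```java
public class JsonQuoteSketch {
    // Minimal sketch of JSON string escaping, the operation performed by
    // jackson's JsonStringEncoder.quoteAsString on the failing code path.
    // Illustrative only; jackson's version reuses internal char buffers.
    static String quote(String raw) {
        StringBuilder sb = new StringBuilder(raw.length());
        for (int i = 0; i < raw.length(); i++) {
            char c = raw.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:
                    // Remaining control characters must be \-u-escaped.
                    if (c < 0x20) sb.append(String.format("\\u%04x", (int) c));
                    else sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(quote("a \"quoted\" value"));
    }
}
```

A value that already contains escaped JSON gets escaped again here, which is why such rows exercise this path heavily.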

[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2017-10-13 Thread Thomas Steinmaurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203473#comment-16203473
 ] 

Thomas Steinmaurer commented on CASSANDRA-13929:


Heap usage has been stable for a week now in our 9-node (m4.xlarge) loadtest 
cluster with the recycler cache entirely disabled.

Any idea whether we can expect something for this ticket in 3.11.2? 
Thanks!



> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
>     if (recycleHandle != null)
>     {
>         this.cleanup();
>         builderRecycler.recycle(this, recycleHandle);
>         recycleHandle = null; // ADDED
>     }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.
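To illustrate the pattern in the patch above: a pooled object keeps a handle back into its pool, and nulling that handle on recycle guards against the same instance being recycled twice, which would let the pool's stack grow without bound. This is a minimal sketch with a hypothetical Pool/Builder pair, not netty's actual Recycler:

```java
import java.util.ArrayDeque;

public class RecycleSketch {
    // Hypothetical stand-in for netty's Recycler$Stack.
    static class Pool {
        private final ArrayDeque<Builder> stack = new ArrayDeque<>();

        Builder get() {
            Builder b = stack.poll();
            if (b == null) return new Builder(this);
            b.recycleHandle = this; // restore the handle on reuse
            return b;
        }

        void recycle(Builder b) { stack.push(b); }

        int size() { return stack.size(); }
    }

    // Hypothetical stand-in for BTree.Builder.
    static class Builder {
        private Pool recycleHandle; // non-final, as in the proposed patch

        Builder(Pool handle) { this.recycleHandle = handle; }

        void recycle() {
            if (recycleHandle != null) {
                Pool h = recycleHandle;
                recycleHandle = null; // ADDED: makes a second recycle() a no-op
                h.recycle(this);
            }
        }
    }

    public static void main(String[] args) {
        Pool pool = new Pool();
        Builder b = pool.get();
        b.recycle();
        b.recycle(); // no-op thanks to the null-out; no duplicate push
        System.out.println(pool.size()); // prints 1
    }
}
```

Without the null-out, the second recycle() would push the same builder again, and the stack would retain duplicate references indefinitely.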






[jira] [Created] (CASSANDRA-13953) Switch to CRC32 for sstable metadata checksums

2017-10-13 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-13953:
---

 Summary: Switch to CRC32 for sstable metadata checksums
 Key: CASSANDRA-13953
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13953
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Aleksey Yeschenko
 Fix For: 4.x


We should switch to CRC32 for sstable metadata checksumming for consistency 
with the rest of the code base. There are a few other cleanups that should be 
done at the same time.
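The checksum primitive in question is the JDK's CRC32. A minimal sketch of the write-then-verify round trip (illustrative only, not Cassandra's actual metadata serialization code):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Example {
    // Computes a CRC32 checksum over a byte[] using java.util.zip.CRC32,
    // the primitive proposed here for sstable metadata checksumming.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "sstable-metadata".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(payload); // written alongside the metadata
        // On read, recompute and compare against the stored value.
        boolean valid = checksum(payload) == stored;
        System.out.println(valid); // prints true
    }
}
```

CRC32 is hardware-accelerated on modern JVMs, so the consistency cleanup should not cost anything at runtime.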


