[jira] [Commented] (CASSANDRA-7396) Allow selecting Map key, List index

2017-01-16 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824198#comment-15824198
 ] 

Vassil Lunchev commented on CASSANDRA-7396:
---

This ticket was last updated more than 6 months ago. Is it still in progress?

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql, docs-impacting
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key']" and "SELECT list[index]". (Selecting a UDT subfield 
> is already supported.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11997) Add a STCS compaction subproperty for DESC order bucketing

2016-06-13 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11997:
---
Description: 
Looking at SizeTieredCompactionStrategy.java -> getBuckets().

This method is the only one using 3 of the 10 subproperties of STCS. It buckets 
the files by sorting them ASC and then grouping them using bucket_high and 
min_sstable_size.

getBuckets() practically doesn't use bucket_low at all. As long as it is 
between 0 and 1, the result doesn't depend on bucket_low. For example:

{code:java}
  public static void main(String[] args) {
      List<Pair<String, Long>> files = new ArrayList<>();
      files.add(new Pair<>("10.1G", 10944793422L));
      files.add(new Pair<>("9.4G", 10056333820L));
      files.add(new Pair<>("8.7G", 9266612562L));
      files.add(new Pair<>("4.0G", 4254518390L));
      files.add(new Pair<>("3.5G", 3729627496L));
      files.add(new Pair<>("2.5G", 2587912419L));
      files.add(new Pair<>("2.2G", 2304124647L));
      files.add(new Pair<>("1.4G", 1485000127L));
      files.add(new Pair<>("1.3G", 1340382610L));
      files.add(new Pair<>("456M", 477906537L));
      files.add(new Pair<>("451M", 472012692L));
      files.add(new Pair<>("53M", 54968524L));
      files.add(new Pair<>("18M", 18447540L));
      List<List<String>> buckets = getBuckets(files, 1.5, 0.5, 50L * 1024 * 1024);
      System.out.println(buckets);
  }
{code}

The result is:
{code}
[[451M, 456M], [8.7G, 9.4G, 10.1G], [53M], [1.3G, 1.4G], [18M], [3.5G, 4.0G], 
[2.2G, 2.5G]]
{code}

You can test it with any value of bucketLow between 0 and 1; the result will 
be the same. And it contains no buckets that can be compacted.
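To make that claim concrete, here is a minimal, self-contained re-implementation of the bucketing loop (a sketch following the getBuckets() logic described above, not the exact Cassandra source; the sizes are the ones from the example):

```java
import java.util.*;

// Sketch of the getBuckets() bucketing loop: files are sorted ASC by size and
// each file joins the first bucket whose running average is "close enough".
public class BucketDemo {
    static List<List<Long>> getBuckets(List<Long> sizes, double bucketHigh,
                                       double bucketLow, long minSSTableSize) {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted); // ASC, as in the current implementation
        List<List<Long>> buckets = new ArrayList<>();
        List<Double> averages = new ArrayList<>();
        for (long size : sorted) {
            boolean placed = false;
            for (int i = 0; i < buckets.size() && !placed; i++) {
                double avg = averages.get(i);
                if ((size > avg * bucketLow && size < avg * bucketHigh)
                        || (size < minSSTableSize && avg < minSSTableSize)) {
                    List<Long> b = buckets.get(i);
                    b.add(size);
                    averages.set(i, (avg * (b.size() - 1) + size) / b.size());
                    placed = true;
                }
            }
            if (!placed) {
                buckets.add(new ArrayList<>(Collections.singletonList(size)));
                averages.add((double) size);
            }
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<Long> sizes = Arrays.asList(10944793422L, 10056333820L, 9266612562L,
                4254518390L, 3729627496L, 2587912419L, 2304124647L, 1485000127L,
                1340382610L, 477906537L, 472012692L, 54968524L, 18447540L);
        // With ASC iteration, every bucket holds only previously seen (smaller
        // or equal) sizes, so each new size is >= every bucket average it is
        // tested against. Hence "size > avg * bucketLow" always holds for any
        // bucketLow in (0, 1) and the parameter never changes the result:
        List<List<Long>> low = getBuckets(sizes, 1.5, 0.01, 50L * 1024 * 1024);
        List<List<Long>> high = getBuckets(sizes, 1.5, 0.99, 50L * 1024 * 1024);
        System.out.println(low.equals(high)); // prints: true
    }
}
```

(The real getBuckets() works on Pair<String, Long> entries; plain Long sizes are enough to show the effect.)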

However, if you reverse the initial sorting order to DESC (look at the files 
from largest to smallest) you get a completely different bucketing:

{code:java}
  // in getBuckets(), flip the initial sort so sstables are visited largest-first:
  return p2.right.compareTo(p1.right);
{code}

{code:txt}
  [[456M, 451M], [4.0G, 3.5G, 2.5G, 2.2G], [10.1G, 9.4G, 8.7G], [53M], [1.4G, 
1.3G], [18M]]
{code}

Now there is a bucket that can be compacted: [4.0G, 3.5G, 2.5G, 2.2G].
After that compaction, there will be one more bucket that can be compacted: 
[10.1G, 9.4G, 8.7G] together with the sstable produced by the first compaction.

The sizes given here are real values from a production Cassandra deployment. 
We would like an aggressive STCS compaction that compacts as soon as 
reasonably possible (I know about LCS; let's not include it in this ticket). 
However, since the ordering in getBuckets() is ASC, we cannot do much with 
configuration parameters. Specifically, min_threshold = 3 does not help - it 
all boils down to the ordering.

Probably bucket_high = 2 is an option, but then why does Cassandra offer a 
property that doesn't change anything? (With a fixed ASC ordering, bucket_low 
is literally useless.)

I would like to have the ability to configure DESC ordering. My suggestion is 
to add a new compaction subproperty for STCS, for example named 
bucket_iteration_order, which has ASC by default for backward compatibility, 
but it can be switched to DESC if an aggressive ordering is required.
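As a sketch of how the proposed subproperty could work (bucket_iteration_order is hypothetical; no such option exists in Cassandra today), the only code change needed in getBuckets() is the direction of the initial sort:

```java
import java.util.*;

// Hypothetical bucket_iteration_order subproperty: it would only select the
// comparator used for the initial sort of sstable sizes in getBuckets().
public class IterationOrder {
    static Comparator<Long> comparatorFor(String bucketIterationOrder) {
        return "DESC".equalsIgnoreCase(bucketIterationOrder)
                ? Comparator.<Long>reverseOrder()  // proposed aggressive mode
                : Comparator.<Long>naturalOrder(); // current behaviour, the default
    }

    public static void main(String[] args) {
        List<Long> sizes = new ArrayList<>(
                Arrays.asList(2304124647L, 4254518390L, 3729627496L));
        sizes.sort(comparatorFor("DESC")); // largest-first iteration order
        System.out.println(sizes); // prints: [4254518390, 3729627496, 2304124647]
    }
}
```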


[jira] [Created] (CASSANDRA-11997) Add a STCS compaction subproperty for DESC order bucketing

2016-06-13 Thread Vassil Lunchev (JIRA)
Vassil Lunchev created CASSANDRA-11997:
--

 Summary: Add a STCS compaction subproperty for DESC order bucketing
 Key: CASSANDRA-11997
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11997
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
Reporter: Vassil Lunchev







[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches

2016-05-06 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274377#comment-15274377
 ] 

Vassil Lunchev commented on CASSANDRA-10876:


How about backporting this change to 3.0.x?

I know that it is marked as an improvement, but the patch contains a relatively 
minor change on the borderline between a fix and an improvement.

We are currently using 3.0.x in production and we issue relatively large single 
partition batches. Our logs are full of warnings about that, but moving to 3.6 
just for this trivial fix is a little too much for us.

> Alter behavior of batch WARN and fail on single partition batches
> -
>
> Key: CASSANDRA-10876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10876
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Patrick McFadin
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
> Attachments: 10876.txt
>
>
> In an attempt to give operator insight into potentially harmful batch usage, 
> Jiras were created to log WARN or fail on certain batch sizes. This ignores 
> the single partition batch, which doesn't create the same issues as a 
> multi-partition batch. 
> The proposal is to ignore size on single partition batch statements. 
> Reference:
> [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487]
> [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011]





[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes

2016-05-06 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274345#comment-15274345
 ] 

Vassil Lunchev commented on CASSANDRA-11357:


Sorry Joel, I commented on the wrong issue. I meant backporting CASSANDRA-10876 
indeed.

It just sounds like a minor change on the borderline between a fix and an 
improvement, though I know the likelihood of that backport happening is low.

> ClientWarningsTest fails after single partition batch warning changes
> -
>
> Key: CASSANDRA-11357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>Priority: Trivial
> Fix For: 3.6
>
>
> We no longer warn on single partition batches above the batch size warn 
> threshold, but the test wasn't changed accordingly. We should check that we 
> warn for multi-partition batches above this size and that we don't warn for 
> single partition batches above this size.





[jira] [Issue Comment Deleted] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes

2016-05-06 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11357:
---
Comment: was deleted

(was: What do you think about getting this backported to 3.0.6?)






[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes

2016-05-06 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274330#comment-15274330
 ] 

Vassil Lunchev commented on CASSANDRA-11357:


What do you think about getting this backported to 3.0.6?






[jira] [Comment Edited] (CASSANDRA-10984) Cassandra should not depend on netty-all

2016-03-28 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214176#comment-15214176
 ] 

Vassil Lunchev edited comment on CASSANDRA-10984 at 3/28/16 1:17 PM:
-

I am having a very similar problem with cassandra-driver-core 3.0.0 and Google 
Dataflow. Deploying to Dataflow sometimes works, sometimes gives the netty 
exception:

{code:java}
java.lang.NoSuchMethodError: io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
    at io.netty.buffer.PoolArena.<init>(PoolArena.java:64)
    at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
    at com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(NettyOptions.java:141)
    at com.datastax.driver.core.Connection$Factory.newBootstrap(Connection.java:825)
    at com.datastax.driver.core.Connection$Factory.access$100(Connection.java:677)
    at com.datastax.driver.core.Connection.initAsync(Connection.java:129)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
    at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
    at com.datastax.driver.core.Cluster.init(Cluster.java:162)
    at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
    at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:308)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:250)
{code}

Is there a known workaround for now?
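One way to investigate (my own diagnostic suggestion, not an official workaround) is to print which jar a suspect class was actually loaded from; if PlatformDependent resolves to netty-all while the buffer classes resolve to netty-buffer, the versions are being mixed:

```java
// Prints the classpath location a class was loaded from. Run it inside the
// affected application with e.g. "io.netty.util.internal.PlatformDependent".
public class WhichJar {
    static String locationOf(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // JDK core classes come from the bootstrap loader and have no code source
        return src == null ? "bootstrap classloader" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        String name = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(name + " -> " + locationOf(name));
    }
}
```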



> Cassandra should not depend on netty-all
> 
>
> Key: CASSANDRA-10984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10984
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Roper
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: 
> 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, 
> 0001-with-binaries.patch
>
>
> netty-all is a jar that bundles all the individual netty dependencies for 
> convenience together for people trying out netty to get started quickly.  
> Serious projects like Cassandra should never ever ever use it, since it's a 
> recipe for classpath disasters.
> To illustrate, I'm running Cassandra embedded in an app, and I get this error:
> {noformat}
> [JVM-1] 

[jira] [Commented] (CASSANDRA-10984) Cassandra should not depend on netty-all

2016-03-28 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214176#comment-15214176
 ] 

Vassil Lunchev commented on CASSANDRA-10984:


I am having a very similar problem with cassandra-driver-core 3.0.0 and Google 
Dataflow. Deploying to Dataflow sometimes works, sometimes gives the netty 
exception:

java.lang.NoSuchMethodError: io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
    at io.netty.buffer.PoolArena.<init>(PoolArena.java:64)
    at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145)
    at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128)
    at com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(NettyOptions.java:141)
    at com.datastax.driver.core.Connection$Factory.newBootstrap(Connection.java:825)
    at com.datastax.driver.core.Connection$Factory.access$100(Connection.java:677)
    at com.datastax.driver.core.Connection.initAsync(Connection.java:129)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
    at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
    at com.datastax.driver.core.Cluster.init(Cluster.java:162)
    at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
    at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:308)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:250)

Is there a known workaround for now?

> Cassandra should not depend on netty-all
> 
>
> Key: CASSANDRA-10984
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10984
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: James Roper
>Assignee: Alex Petrov
>Priority: Minor
> Attachments: 
> 0001-Use-separate-netty-depencencies-instead-of-netty-all.patch, 
> 0001-with-binaries.patch
>
>
> netty-all is a jar that bundles all the individual netty dependencies for 
> convenience together for people trying out netty to get started quickly.  
> Serious projects like Cassandra should never ever ever use it, since it's a 
> recipe for classpath disasters.
> To illustrate, I'm running Cassandra embedded in an app, and I get this error:
> {noformat}
> [JVM-1] java.lang.NoSuchMethodError: io.netty.util.internal.PlatformDependent.newLongCounter()Lio/netty/util/internal/LongCounter;
> [JVM-1]   at io.netty.buffer.PoolArena.<init>(PoolArena.java:64) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at io.netty.buffer.PoolArena$HeapArena.<init>(PoolArena.java:593) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:179) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:153) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:145) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at io.netty.buffer.PooledByteBufAllocator.<init>(PooledByteBufAllocator.java:128) ~[netty-buffer-4.0.33.Final.jar:4.0.33.Final]
> [JVM-1]   at org.apache.cassandra.transport.CBUtil.<clinit>(CBUtil.java:56) ~[cassandra-all-3.0.0.jar:3.0.0]
> [JVM-1]   at org.apache.cassandra.transport.Server.start(Server.java:134) ~[cassandra-all-3.0.0.jar:3.0.0]
> {noformat}
> {{PlatformDependent}} comes from netty-common, of which version 4.0.33 is on 
> the classpath, but it's also provided by netty-all, which has version 4.0.23 
> brought in by cassandra.  By a fluke of classpath ordering, the classloader 
> has loaded the netty buffer classes from netty-buffer 4.0.33, but the 
> PlatformDependent class from netty-all 4.0.23, and these two versions are not 
> binary compatible, hence the linkage error.
> Essentially to avoid these problems in serious projects, anyone that ever 
> brings in cassandra is going to have to exclude the netty dependency from it, 
> which is error prone, and when you get it wrong, due to the nature of 
> classpath ordering bugs, it might not be till you deploy to production that 
> you actually find out there's a problem.




[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-03-19 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197725#comment-15197725
 ] 

Vassil Lunchev commented on CASSANDRA-9259:
---

Very good results!

From the benchmarks it seems like 100k rows/second is something like a limit. 
I have seen that limit in tests as well, and to me it looks more like 100k 
cells/second.
Do you think Cassandra would be able to push more than 100k rows/second with 
partition sizes smaller than 100 bytes (I know that is impractical)?
Also, do you think adding more columns to the rows will have any effect? That 
is, is the bound around 100k rows/second or around 100k cells/second?

If I had to bet, it would still be around 100k per second even with partitions 
smaller than 100 bytes, and the bottleneck would be the number of cells, not 
the number of rows.

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Assignee: Stefania
>Priority: Critical
> Fix For: 3.x
>
> Attachments: bulk-read-benchmark.1.html, 
> bulk-read-jfr-profiles.1.tar.gz, bulk-read-jfr-profiles.2.tar.gz
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria is here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.





[jira] [Commented] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-02-11 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143529#comment-15143529
 ] 

Vassil Lunchev commented on CASSANDRA-11065:


I saw that once, and it was minutes after the keyspace drop. However I haven't 
seen it after that and I am not sure if I can reproduce it, so I am ok with 
this fix.

> null pointer exception in CassandraDaemon.java:195
> --
>
> Key: CASSANDRA-11065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>Assignee: Paulo Motta
>Priority: Minor
>
> Running Cassandra 3.0.1 installed from apt-get on debian.
> I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
> one of them still had that keyspace 'tests'. On a node that still has the 
> dropped keyspace I ran:
> nodetool repair tests;
> In the system logs of another node that did not have keyspace 'tests' I am 
> seeing a null pointer exception:
> {code:java}
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
> Exception in thread Thread[AntiEntropyStage:2,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66-internal]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66-internal]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   ... 4 common frames omitted
> {code}
> The error appears every time I run:
> nodetool repair tests;
> I can see it in the logs of all nodes, including the node on which I run the 
> repair.





[jira] [Commented] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-01-26 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117397#comment-15117397
 ] 

Vassil Lunchev commented on CASSANDRA-9259:
---

"For full data queries it may be advantageous to have C* be able to compact all 
of the relevant sstables into a format friendlier to analytics workloads."

I would even go further and say: Cassandra needs a new compaction strategy. It 
has DateTieredCompactionStrategy for time series data. It needs a new one, for 
example a ColumnarCompactionStrategy, similar in concept to Parquet and 
designed for analytics workloads.

The results here: https://github.com/velvia/cassandra-gdelt
and the ideas here: https://github.com/tuplejump/FiloDB
are very compelling. FiloDB is effectively building a new columnar storage 
layer on top of C*, and the results are quite promising: "faster than Parquet 
scan speeds" with storage needs "within 35% of Parquet".

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Priority: Critical
> Fix For: 3.x
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria is here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.
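A bulk reader built on the query shape above typically splits the full Murmur3 token space (-2^63 .. 2^63-1) into contiguous (X, Y] sub-ranges and issues one token-range SELECT per split. The splitter below is a minimal, self-contained sketch with no driver dependency (in practice the driver's cluster metadata would supply the actual per-node ranges); only the partitioner bounds are taken from Cassandra.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class TokenRangeSplitter {
    // Murmur3Partitioner token bounds used by Cassandra.
    static final BigInteger MIN = BigInteger.valueOf(Long.MIN_VALUE);
    static final BigInteger MAX = BigInteger.valueOf(Long.MAX_VALUE);

    /** Splits (MIN, MAX] into `parts` contiguous (start, end] ranges. */
    static List<long[]> split(int parts) {
        BigInteger span = MAX.subtract(MIN);
        List<long[]> ranges = new ArrayList<>(parts);
        BigInteger prev = MIN;
        for (int i = 1; i <= parts; i++) {
            BigInteger end = (i == parts)
                ? MAX  // pin the last boundary to avoid integer-division loss
                : MIN.add(span.multiply(BigInteger.valueOf(i))
                              .divide(BigInteger.valueOf(parts)));
            ranges.add(new long[] { prev.longValueExact(), end.longValueExact() });
            prev = end;
        }
        return ranges;
    }

    public static void main(String[] args) {
        for (long[] r : split(4)) {
            // Each range maps directly onto the ticket's query shape:
            System.out.printf(
                "SELECT a, b, c FROM myKs.myTable WHERE token(partitionKey) > %d AND token(partitionKey) <= %d%n",
                r[0], r[1]);
        }
    }
}
```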



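The "Streaming Compaction" idea sketched in the ticket, merging sorted SSTable streams and emitting CQL rows to a socket instead of writing a new SSTable, reduces to a k-way merge. Below is a toy, self-contained sketch (plain Java; input lists stand in for SSTable scanners, and real compaction concerns such as tombstones, merging duplicate keys, and indexing are deliberately omitted):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.Consumer;

public class StreamingMerge {
    /** Merges pre-sorted inputs and streams each element to `sink`
     *  instead of materializing a merged table. */
    @SuppressWarnings("unchecked")
    static <T extends Comparable<T>> void merge(List<Iterator<T>> inputs,
                                                Consumer<T> sink) {
        // Heap entry: [current head element, the iterator it came from].
        PriorityQueue<Object[]> heap =
            new PriorityQueue<>((a, b) -> ((T) a[0]).compareTo((T) b[0]));
        for (Iterator<T> it : inputs)
            if (it.hasNext()) heap.add(new Object[] { it.next(), it });
        while (!heap.isEmpty()) {
            Object[] top = heap.poll();
            sink.accept((T) top[0]);               // emit to stream/socket/etc.
            Iterator<T> it = (Iterator<T>) top[1];
            if (it.hasNext()) heap.add(new Object[] { it.next(), it });
        }
    }

    public static void main(String[] args) {
        List<Iterator<Integer>> sstables = List.of(
            List.of(1, 4, 7).iterator(),
            List.of(2, 5, 8).iterator(),
            List.of(3, 6, 9).iterator());
        List<Integer> out = new ArrayList<>();
        merge(sstables, out::add);
        System.out.println(out); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```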


[jira] [Comment Edited] (CASSANDRA-9259) Bulk Reading from Cassandra

2016-01-26 Thread Vassil Lunchev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117397#comment-15117397
 ] 

Vassil Lunchev edited comment on CASSANDRA-9259 at 1/26/16 3:41 PM:


"For full data queries it may be advantageous to have C* be able to compact all 
of the relevant sstables into a format friendlier to analytics workloads."
I would even go further and say - "Cassandra needs a new compaction strategy. 
It has DateTieredCompactionStrategy for time series data. It needs a new one, 
for example ColumnarCompactionStrategy, that is similar in concept to Parquet 
and designed for analytics workloads."

The results here: https://github.com/velvia/cassandra-gdelt
and the ideas here: https://github.com/tuplejump/FiloDB
are very compelling. FiloDB is practically doing a new columnar compaction 
layer on top of C*. And the results are quite promising - "faster than Parquet 
scan speeds" with storage needs "within 35% of Parquet".


was (Author: vas...@leanplum.com):
"For full data queries it may be advantageous to have C* be able to compact all 
of the relevant sstables into a format friendlier to analytics workloads."
I would even go further and say - "Cassandra needs a new compaction strategy. 
It has DateTieredCompactionStrategy for time series data. It needs a new one, 
for example ColumnarCompactionStrategy, that is similar in concept to Parquet 
and designed for analytics workloads."

The results here: https://github.com/velvia/cassandra-gdelt
and the ideas here: https://github.com/tuplejump/FiloDB
are very compelling. FilloDB is practically doing a new columnar compaction 
layer on top of C*. And the results are quite promising - "faster than Parquet 
scan speeds" with storage needs "within 35% of Parquet".

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Priority: Critical
> Fix For: 3.x
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replications-factor numbers of copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing 

[jira] [Updated] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11065:
---
Description: 
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}


  was:
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted


> null pointer exception in CassandraDaemon.java:195
> --
>
> Key: CASSANDRA-11065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>Priority: Minor
> Fix For: 3.0.1
>
>
> I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
> one of them still had that keyspace 'tests'. On a node that still has the 
> dropped keyspace I ran:
> nodetools repair tests;
> In the system logs of another node that did not have keyspace 'tests' I am 
> seeing a null pointer exception:
> {code:java}
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
> Exception in thread Thread[AntiEntropyStage:2,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66-internal]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66-internal]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   ... 4 

[jira] [Created] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-01-25 Thread Vassil Lunchev (JIRA)
Vassil Lunchev created CASSANDRA-11065:
--

 Summary: null pointer exception in CassandraDaemon.java:195
 Key: CASSANDRA-11065
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
 Project: Cassandra
  Issue Type: Bug
Reporter: Vassil Lunchev
Priority: Minor
 Fix For: 3.0.1


I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted





[jira] [Updated] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11065:
---
Description: 
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}

The error appears every time I run:
nodetools repair tests;

  was:
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}



> null pointer exception in CassandraDaemon.java:195
> --
>
> Key: CASSANDRA-11065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>Priority: Minor
> Fix For: 3.0.1
>
>
> I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
> one of them still had that keyspace 'tests'. On a node that still has the 
> dropped keyspace I ran:
> nodetools repair tests;
> In the system logs of another node that did not have keyspace 'tests' I am 
> seeing a null pointer exception:
> {code:java}
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
> Exception in thread Thread[AntiEntropyStage:2,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66-internal]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66-internal]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.NullPointerException: null
>   at 
> 

[jira] [Updated] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11065:
---
Description: 
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}

The error appears every time I run:
nodetools repair tests;

I can see it in the logs of all nodes, including the node on which I run the 
repair.

  was:
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}

The error appears every time I run:
nodetools repair tests;


> null pointer exception in CassandraDaemon.java:195
> --
>
> Key: CASSANDRA-11065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>Priority: Minor
> Fix For: 3.0.1
>
>
> I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
> one of them still had that keyspace 'tests'. On a node that still has the 
> dropped keyspace I ran:
> nodetools repair tests;
> In the system logs of another node that did not have keyspace 'tests' I am 
> seeing a null pointer exception:
> {code:java}
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
> Exception in thread Thread[AntiEntropyStage:2,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_66-internal]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_66-internal]
>   at java.lang.Thread.run(Thread.java:745) 

[jira] [Updated] (CASSANDRA-11065) null pointer exception in CassandraDaemon.java:195

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11065:
---
Fix Version/s: (was: 3.0.1)
  Description: 
Running Cassandra 3.0.1 installed from apt-get on debian.

I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}

The error appears every time I run:
nodetools repair tests;

I can see it in the logs of all nodes, including the node on which I run the 
repair.

  was:
I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
one of them still had that keyspace 'tests'. On a node that still has the 
dropped keyspace I ran:
nodetools repair tests;

In the system logs of another node that did not have keyspace 'tests' I am 
seeing a null pointer exception:

{code:java}
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:2,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.1.jar:3.0.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_66-internal]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_66-internal]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
Caused by: java.lang.NullPointerException: null
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:69)
 ~[apache-cassandra-3.0.1.jar:3.0.1]
... 4 common frames omitted
{code}

The error appears every time I run:
nodetools repair tests;

I can see it in the logs of all nodes, including the node on which I run the 
repair.


> null pointer exception in CassandraDaemon.java:195
> --
>
> Key: CASSANDRA-11065
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11065
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>Priority: Minor
>
> Running Cassandra 3.0.1 installed from apt-get on debian.
> I had a keyspace called 'tests'. I dropped it. Then I checked some nodes and 
> one of them still had that keyspace 'tests'. On a node that still has the 
> dropped keyspace I ran:
> nodetools repair tests;
> In the system logs of another node that did not have keyspace 'tests' I am 
> seeing a null pointer exception:
> {code:java}
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,323 
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> ERROR [AntiEntropyStage:2] 2016-01-25 15:02:46,324 CassandraDaemon.java:195 - 
> Exception in thread Thread[AntiEntropyStage:2,5,main]
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> 

[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11069:
---
Description: 
Running Cassandra 3.0.2

Using the official example from: 
http://www.datastax.com/dev/blog/new-in-cassandra-3-0-materialized-views
The only difference is that I have added a map column to the base table.

{code:cql}
CREATE TABLE scores
(
  user TEXT,
  game TEXT,
  year INT,
  month INT,
  day INT,
  score INT,
  a_map map<int, text>,
  PRIMARY KEY (user, game, year, month, day)
);

CREATE MATERIALIZED VIEW alltimehigh AS
    SELECT user FROM scores
    WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL AND year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
    PRIMARY KEY (game, score, user, year, month, day)
    WITH CLUSTERING ORDER BY (score desc);

INSERT INTO scores (user, game, year, month, day, score) VALUES ('pcmanus', 
'Coup', 2015, 06, 02, 2000);
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

All of the above works perfectly fine. Until you insert a row where the 'a_map' 
column is not null.

{code:cql}
INSERT INTO scores (user, game, year, month, day, score, a_map) VALUES 
('pcmanus_2', 'Coup', 2015, 06, 02, 2000, {1: 'text'});
{code}

This results in:
{code}
Traceback (most recent call last):
  File "/Users/vassil/apache-cassandra-3.0.2/bin/cqlsh.py", line 1258, in 
perform_simple_statement
result = future.result()
  File 
"/Users/vassil/apache-cassandra-3.0.2/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
WriteFailure: code=1500 [Replica(s) failed to execute write] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Selecting the base table and the materialised view is also interesting:
{code}
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

The result is:
{code}
cqlsh:tests> SELECT * FROM scores;

 user    | game | year | month | day | a_map | score
---------+------+------+-------+-----+-------+-------
 pcmanus | Coup | 2015 |     6 |   2 |  null |  2000

(1 rows)
cqlsh:tests> SELECT * FROM alltimehigh;

 game | score | user      | year | month | day
------+-------+-----------+------+-------+-----
 Coup |  2000 |   pcmanus | 2015 |     6 |   2
 Coup |  2000 | pcmanus_2 | 2015 |     6 |   2

(2 rows)
{code}

In the logs you can see:
{code:java}
ERROR [SharedPool-Worker-2] 2016-01-26 03:25:27,456 Keyspace.java:484 - Unknown exception caught while attempting to update MaterializedView! tests.scores
java.lang.IllegalStateException: [ColumnDefinition{name=a_map, type=org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type), kind=REGULAR, position=-1}] is not a subset of []
	at org.apache.cassandra.db.Columns$Serializer.encodeBitmap(Columns.java:531) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Columns$Serializer.serializedSubsetSize(Columns.java:483) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:275) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:247) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:234) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:227) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serializedSize(UnfilteredRowIteratorSerializer.java:169) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:683) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:354) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:259) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:461) [apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:210) [apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:703) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149)
 

[jira] [Created] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-01-25 Thread Vassil Lunchev (JIRA)
Vassil Lunchev created CASSANDRA-11069:
--

 Summary: Materialised views require all collections to be selected.
 Key: CASSANDRA-11069
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11069
 Project: Cassandra
  Issue Type: Bug
Reporter: Vassil Lunchev


Running Cassandra 3.0.2

Using the official example from: 
http://www.datastax.com/dev/blog/new-in-cassandra-3-0-materialized-views
The only difference is that I have added a map column to the base table.

{code:cql}
CREATE TABLE scores
(
  user TEXT,
  game TEXT,
  year INT,
  month INT,
  day INT,
  score INT,
  a_map map<int, text>,
  PRIMARY KEY (user, game, year, month, day)
);

CREATE MATERIALIZED VIEW alltimehigh AS
    SELECT user FROM scores
    WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL AND year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
    PRIMARY KEY (game, score, user, year, month, day)
    WITH CLUSTERING ORDER BY (score desc);

INSERT INTO scores (user, game, year, month, day, score) VALUES ('pcmanus', 'Coup', 2015, 06, 02, 2000);
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

All of the above works perfectly fine until you insert a row where the 'a_map' column is not null.

{code:cql}
INSERT INTO scores (user, game, year, month, day, score, a_map) VALUES ('pcmanus_2', 'Coup', 2015, 06, 02, 2000, {1: 'text'});
{code}

This results in:
{code}
Traceback (most recent call last):
  File "/Users/vassil/apache-cassandra-3.0.2/bin/cqlsh.py", line 1258, in perform_simple_statement
    result = future.result()
  File "/Users/vassil/apache-cassandra-3.0.2/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py", line 3122, in result
    raise self._final_exception
WriteFailure: code=1500 [Replica(s) failed to execute write] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Selecting the base table and the materialised view is also interesting:
{code}
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

The result is:
{code}
cqlsh:tests> SELECT * FROM scores;

 user    | game | year | month | day | a_map | score
---------+------+------+-------+-----+-------+-------
 pcmanus | Coup | 2015 |     6 |   2 |  null |  2000

(1 rows)
cqlsh:tests> SELECT * FROM alltimehigh;

 game | score | user      | year | month | day
------+-------+-----------+------+-------+-----
 Coup |  2000 |   pcmanus | 2015 |     6 |   2
 Coup |  2000 | pcmanus_2 | 2015 |     6 |   2

(2 rows)
{code}

In the logs you can see:
{code:java}
ERROR [SharedPool-Worker-2] 2016-01-26 03:25:27,456 Keyspace.java:484 - Unknown exception caught while attempting to update MaterializedView! tests.scores
java.lang.IllegalStateException: [ColumnDefinition{name=a_map, type=org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type), kind=REGULAR, position=-1}] is not a subset of []
	at org.apache.cassandra.db.Columns$Serializer.encodeBitmap(Columns.java:531) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Columns$Serializer.serializedSubsetSize(Columns.java:483) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:275) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:247) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:234) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:227) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serializedSize(UnfilteredRowIteratorSerializer.java:169) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:683) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:354) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:259) ~[apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:461) [apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.2.jar:3.0.2]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:210) [apache-cassandra-3.0.2.jar:3.0.2]
	at 

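A possible workaround, suggested by the ticket title (an untested sketch against the schema above; it assumes the failure is triggered by the collection column not being part of the view's selected columns): select 'a_map' into the view as well.

{code:cql}
-- Untested workaround sketch: include the collection column in the view's
-- SELECT so the base row's map is part of the view's column set.
DROP MATERIALIZED VIEW IF EXISTS alltimehigh;

CREATE MATERIALIZED VIEW alltimehigh AS
    SELECT user, a_map FROM scores
    WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL
      AND year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
    PRIMARY KEY (game, score, user, year, month, day)
    WITH CLUSTERING ORDER BY (score desc);
{code}

With the collection selected, the insert into 'pcmanus_2' would presumably serialize the view update without hitting the "is not a subset of []" check, but I have not verified this.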
[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11069:
---

[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-01-25 Thread Vassil Lunchev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vassil Lunchev updated CASSANDRA-11069:
---