[jira] [Comment Edited] (CASSANDRA-12500) Counter cache hit counter not incrementing

2016-09-02 Thread Aditya Pandit (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459556#comment-15459556
 ] 

Aditya Pandit edited comment on CASSANDRA-12500 at 9/3/16 3:04 AM:
---

As per my understanding, the CounterCache registers a hit on an UPDATE of an 
existing row, because the update re-uses the cached Counter object. 
Example: add another update of an existing row to your statements:
UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2;

Then check {{nodetool info | grep Cache}}.
I added one more row and updated it again to get 4 entries, and saw 1 hit (for 
the row I updated twice).

{noformat}
Key Cache  : entries 23, size 1.77 KiB, capacity 50 MiB, 70 hits, 
92 requests, 0.761 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 10 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 4, size 440 bytes, capacity 20 MiB, 1 hits, 2 
requests, 0.500 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 26, size 1.62 MiB, capacity 219 MiB, 38 
misses, 162 requests, 0.765 recent hit rate, 570.397 microseconds miss latency
{noformat}

That said, I'd like a second opinion from a Cassandra dev. While debugging I 
was able to verify that the CounterCache is getting updated and used. BTW, very 
good detailed steps for recreating the issue :-)
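As a quick cross-check of the {{nodetool info}} output above, the reported 
recent hit rate is just hits divided by requests; a tiny standalone sketch 
(plain Python, independent of Cassandra) that parses the Counter Cache line:

```python
import re

# Counter Cache line copied from the `nodetool info` output above.
line = ("Counter Cache  : entries 4, size 440 bytes, capacity 20 MiB, "
        "1 hits, 2 requests, 0.500 recent hit rate, 7200 save period in seconds")

hits = int(re.search(r"(\d+) hits", line).group(1))
requests = int(re.search(r"(\d+) requests", line).group(1))

# The reported recent hit rate should match hits / requests.
print(f"{hits} hits / {requests} requests = {hits / requests:.3f}")  # 0.500
```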




was (Author: adityapandit):
The CounterCache registers a hit on an UPDATE of the same row, because it 
re-uses the same Counter object.
example: add another update to the same row:

UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2;

And then check nodetool info | grep Cache
I added one more row and updated it again to get 4 entries, and saw 1 hit (for 
the row I updated twice).

{noformat}
Key Cache  : entries 23, size 1.77 KiB, capacity 50 MiB, 70 hits, 
92 requests, 0.761 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 10 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 4, size 440 bytes, capacity 20 MiB, 1 hits, 2 
requests, 0.500 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 26, size 1.62 MiB, capacity 219 MiB, 38 
misses, 162 requests, 0.765 recent hit rate, 570.397 microseconds miss latency
{noformat}




> Counter cache hit counter not incrementing 
> ---
>
> Key: CASSANDRA-12500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12500
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Priority: Minor
>
> Trivial repro on 3.7 with scripts below. Haven't dug through 
> {{CounterCacheTest}} to find out if the cache is getting skipped or if it's 
> just not updating the hit counter properly: 
> {code}
> #!/bin/sh
> ccm remove test
> ccm create test -v 3.7 -n 1
> sed -i'' -e 's/row_cache_size_in_mb: 0/row_cache_size_in_mb: 100/g' 
> .ccm/test/node1/conf/cassandra.yaml
> ccm start
> sleep 5
> ccm node1 cqlsh < ~/keyspace.cql
> ccm node1 cqlsh < ~/table-counter.cql
> ccm node1 cqlsh < ~/table-counter-clustering.cql
> echo "Schema created, reads and writes starting"
> ccm node1 nodetool info | grep Cache
> echo "UPDATE test.test SET v=v+1 WHERE id=1; " | ccm node1 cqlsh
> echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=1; " | ccm node1 cqlsh
> echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2; " | ccm node1 cqlsh
> echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> {code}
> Keyspace / tables:
> {code}
> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'}  AND durable_writes = true;
> {code}
> {code}
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v counter
> ) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
> {code}
> {code}
> CREATE TABLE test.test2 (
> id int,
> c int,
> v counter,
> PRIMARY KEY(id, c)
> ) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
> {code}
> Output:
> {code}
> Schema created, reads and writes starting
> Key Cache  : ent

[jira] [Created] (CASSANDRA-12606) CQLSSTableWriter unable to use blob conversion functions

2016-09-02 Thread Mark Reddy (JIRA)
Mark Reddy created CASSANDRA-12606:
--

 Summary: CQLSSTableWriter unable to use blob conversion functions
 Key: CASSANDRA-12606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12606
 Project: Cassandra
  Issue Type: Bug
  Components: CQL, Tools
Reporter: Mark Reddy
Priority: Minor


Attempting to use blob conversion functions (e.g. {{textAsBlob}}) on versions 
3.0 - 3.7 results in:

{noformat}
Exception in thread "main" 
org.apache.cassandra.exceptions.InvalidRequestException: Unknown function 
textasblob called
at 
org.apache.cassandra.cql3.functions.FunctionCall$Raw.prepare(FunctionCall.java:136)
at 
org.apache.cassandra.cql3.Operation$SetValue.prepare(Operation.java:163)
at 
org.apache.cassandra.cql3.statements.UpdateStatement$ParsedInsert.prepareInternal(UpdateStatement.java:173)
at 
org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:785)
at 
org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:771)
at 
org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.prepareInsert(CQLSSTableWriter.java:567)
at 
org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.build(CQLSSTableWriter.java:510)
{noformat}

The following snippet will reproduce the issue:
{code}
import java.io.File;
import java.nio.file.Files;

import org.apache.cassandra.io.sstable.CQLSSTableWriter;

String table = String.format("%s.%s", "test_ks", "test_table");
String schema = String.format(
    "CREATE TABLE %s (test_text text, test_blob blob, PRIMARY KEY (test_text));",
    table);
String insertStatement = String.format(
    "INSERT INTO %s (test_text, test_blob) VALUES (?, textAsBlob(?))", table);

File tempDir = Files.createTempDirectory("tempDir").toFile();

CQLSSTableWriter sstableWriter = CQLSSTableWriter.builder()
        .forTable(schema)
        .using(insertStatement)
        .inDirectory(tempDir)
        .build();
{code}

This is caused in {{FunctionResolver.get(...)}} when 
{{candidates.addAll(Schema.instance.getFunctions(name.asNativeFunction()))}} is 
called, as no system keyspace has been initialised.
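Until this is fixed, one possible client-side workaround (an assumption on my 
part, not something verified inside CQLSSTableWriter) is to avoid the function 
entirely: {{textAsBlob}} is just the UTF-8 encoding of its argument, so the 
insert statement can use a plain {{?}} blob placeholder and the caller can 
pre-encode the text. A sketch of the equivalent conversion:

```python
def text_as_blob(text: str) -> bytes:
    """Client-side equivalent of CQL's textAsBlob(): the UTF-8 bytes."""
    return text.encode("utf-8")

value = text_as_blob("foo")
# Rendered as a CQL blob literal this is 0x666f6f.
print("0x" + value.hex())  # -> 0x666f6f
```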



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12560) Cassandra Restart issues while restoring to a new cluster

2016-09-02 Thread Prateek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prateek Agarwal resolved CASSANDRA-12560.
-
Resolution: Invalid

Turns out there were stale {{commit_log}} and {{saved_caches}} directories that 
I had missed deleting earlier. The instructions work correctly once those 
directories are deleted.
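For anyone hitting the same "node already exists" error: the state to empty 
before restarting includes more than {{data/system}}. A sketch of the cleanup 
(directory names follow the cassandra.yaml defaults; the demonstration runs 
against a throwaway scratch directory, not a real install):

```python
import shutil
import tempfile
from pathlib import Path

def clear_stale_state(cassandra_root: Path) -> None:
    """Empty the directories whose stale contents can make a restored node
    refuse to rejoin: data/system, the commit log, and the saved caches."""
    for sub in ("data/system", "commitlog", "saved_caches"):
        d = cassandra_root / sub
        if not d.exists():
            continue
        for child in d.iterdir():
            if child.is_dir():
                shutil.rmtree(child)
            else:
                child.unlink()

# Demonstration against a scratch directory standing in for /var/lib/cassandra.
root = Path(tempfile.mkdtemp())
for sub in ("data/system", "commitlog", "saved_caches"):
    (root / sub).mkdir(parents=True)
    (root / sub / "stale").write_text("leftover")
clear_stale_state(root)
print(all(not list((root / s).iterdir())
          for s in ("data/system", "commitlog", "saved_caches")))  # True
```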

> Cassandra Restart issues while restoring to a new cluster
> -
>
> Key: CASSANDRA-12560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12560
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: distro: Ubuntu 14.04 LTS
>Reporter: Prateek Agarwal
>
> I am restoring to a fresh new Cassandra 2.2.5 cluster consisting of 3 nodes.
> Initial cluster health of the NEW cluster:
> {code}
> --  Address   Load   Tokens   OwnsHost ID 
>   Rack
> UN  10.40.1.1   259.31 KB   256  ?   
> d2b29b08-9eac-4733-9798-019275d66cfc  uswest1adevc
> UN  10.40.1.2   230.12 KB   256  ?   
> 5484ab11-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
> UN  10.40.1.3   248.47 KB   256  ?   
> bad95fe2-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
> {code}
> As part of the [restore instructions in Datastax 2.2 
> docs|http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsSnapshotRestoreNewCluster.html],
 I do the following on the new cluster:
> 1) cassandra stop on each of the three nodes, one by one.
> 2) Edit cassandra.yaml on each of the three nodes with the backed-up token 
> ring information. [Step 2 from docs]
> 3) Remove the contents of /var/lib/cassandra/data/system/* [Step 4 from 
> docs]
> 4) cassandra start on nodes 10.40.1.1, 10.40.1.2, 10.40.1.3 respectively.
> Result: 10.40.1.1 restarts back successfully:
> {code}
> --  Address   Load   Tokens   OwnsHost ID 
>   Rack
> UN  10.40.1.1   259.31 KB   256  ?   
> 2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
> {code}
> But the second and the third nodes fail to restart stating:
> {code}
> java.lang.RuntimeException: A node with address 10.40.1.2 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:546)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:766)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:693)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:585)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) 
> [apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:516)
>  [apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) 
> [apache-cassandra-2.2.5.jar:2.2.5]
> INFO  [StorageServiceShutdownHook] 2016-08-09 18:13:21,980 Gossiper.java:1449 
> - Announcing shutdown
> {code}
> {code}
> java.lang.RuntimeException: A node with address 10.40.1.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
> ...
> {code}
> Eventual cluster health:
> {code}
> --  Address   Load   Tokens   OwnsHost ID 
>   Rack
> UN  10.40.1.1   259.31 KB   256  ?   
> 2d23add3-9eac-4733-9798-019275d125d3  uswest1adevc
> DN  10.40.1.2   230.12 KB   256  ?   
> 6w2321ad-32b1-4d01-a5fe-c996a63108f1  uswest1adevc
> DN  10.40.1.3   248.47 KB   256  ?   
> 9et4944d-70c5-4a2f-b517-d7fd7a32bc45  uswest1cdevc
> {code}
> I understand that the Host ID of a node might change after the system 
> directories are removed.
> I think the restore docs are incomplete and need to mention the 'replace IP' 
> part as well. Or am I missing something in my steps?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12591) Re-evaluate the default 160MB sstable_size_in_mb choice in LCS

2016-09-02 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456278#comment-15456278
 ] 

Wei Deng edited comment on CASSANDRA-12591 at 9/2/16 10:33 PM:
---

So I've done some quick initial tests using the latest trunk (i.e. C* 3.10) 
code, just to establish whether this is a worthwhile effort. The hardware I'm 
using is still not a typical or adequate configuration for a production 
Cassandra deployment (GCE n1-standard-4, with 4 vCPUs, 15GB RAM and a single 
1TB spindle-based persistent disk), but I'm already seeing a positive sign that 
a bigger max_sstable_size can help compaction throughput.

Test setup: at each max_sstable_size I did three runs from scratch. For all 
runs I set compaction threads to 4; since compaction-stress enforces no 
throttling, this is equivalent to setting compaction_throughput_mb_per_sec to 
0. The initial SSTable files generated by `compaction-stress write` use the 
default 128MB size, which is in line with the typical flush size I saw on this 
kind of hardware with default cassandra.yaml configuration parameters.

Using 10GB of stress data generated by the blogpost data model 
[here|https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml],
 the overall compaction times with 1280MB max_sstable_size are: 7m16.456s, 
7m7.225s, 7m9.102s; the overall compaction times with 160MB max_sstable_size 
are: 9m16.715s, 9m28.146s, 9m7.192s.

Given these numbers, the average time to finish compaction with 1280MB 
max_sstable_size is 430.93 seconds, and with 160MB max_sstable_size it is 
557.35 seconds, which is already a 23% improvement.

The above tests were conducted with the default parameters of 
compaction-stress, which generate unique partitions for all writes, so they 
reflect the worst kind of workload for LCS. Considering this, I also conducted 
another set of tests with {{"partition-count=1000"}} to force compaction-stress 
to generate a lot of overwrites of the same partitions. Keeping everything else 
the same and adding this {{"partition-count=1000"}} parameter, the overall 
compaction times with 1280MB max_sstable_size are: 4m59.307s, 4m52.002s, 
5m0.967s; the overall compaction times with 160MB max_sstable_size are: 
6m11.533s, 6m21.200s, 6m10.904s. These numbers are understandably faster than 
the "all unique partitions" scenario in the last paragraph, and if you 
calculate the averages, 1280MB max_sstable_size is 21% faster than 160MB 
max_sstable_size.

I realize 10GB of data is barely enough to test a 1280MB sstable size, as the 
data will only go from L0->L1, so for the next run I'm going to use 100GB of 
data on this hardware (keeping everything else the same) and see how the 
numbers compare.
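The percentage arithmetic above can be checked mechanically; a small standalone 
sketch (plain Python) that parses the run times and recomputes the averages and 
the relative improvement for the first set of runs:

```python
def to_seconds(t: str) -> float:
    """Parse a duration like '7m16.456s' into seconds."""
    minutes, rest = t.split("m")
    return int(minutes) * 60 + float(rest.rstrip("s"))

runs_1280mb = ["7m16.456s", "7m7.225s", "7m9.102s"]
runs_160mb = ["9m16.715s", "9m28.146s", "9m7.192s"]

avg_1280 = sum(map(to_seconds, runs_1280mb)) / len(runs_1280mb)
avg_160 = sum(map(to_seconds, runs_160mb)) / len(runs_160mb)
improvement = (avg_160 - avg_1280) / avg_160

# About 431s vs 557s, i.e. roughly a 23% improvement.
print(f"{avg_1280:.2f}s vs {avg_160:.2f}s -> {improvement:.1%} faster")
```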


was (Author: weideng):
So I've done some quick initial tests using latest trunk (i.e. C* 3.10) code 
just to prove the point whether this is a worthwhile effort. The hardware I'm 
using is still not a typical/adequate-enough configuration I'd use for a 
production Cassandra deployment (GCE n1-standard-4, with 4 vCPUs, 15GB RAM and 
a single 1TB persistent disk that's spindle-based), but I'm already seeing a 
positive sign that shows bigger max_sstable_size can be helpful for compaction 
throughput.

Based on the initial results (at each max_sstable_size, I did three runs from 
scratch; for all runs I set compaction threads to 4, and since there will be no 
throttling enforced by compaction-stress the setting would be equivalent to 
setting compaction_throughput_mb_per_sec to 0, the initial SSTable files 
generated by `compaction-stress write` are using the default 128MB size, which 
is inline with the typical flush size I saw on this kind of hardware using 
default cassandra.yaml configuration parameters), using 10GB of stress data 
generated by the blogpost data model 
[here|https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml],
 the overall compaction times with 1280MB max_sstable_size are: 7m16.456s, 
7m7.225s, 7m9.102s; the overall compaction times with 160MB max_sstable_size 
are: 9m16.715s, 9m28.146s, 9m7.192s.

Given these numbers, the average seconds to finish compaction with 1280MB 
max_sstable_size is 430.66, and the average seconds to finish compaction with 
160MB max_sstable_size is 557.33, which is already a 23% improvement.

The above tests were conducted using the default parameters from 
compaction-stress which generates unique partitions for all writes, so it 
reflects the worst kind of workload for LCS. Considering this, I also conducted 
another set of tests using {{"--partition-count=1000"}} to force 
compaction-stress to generate a lot of overwrites for 

[jira] [Created] (CASSANDRA-12605) Timestamp-order searching of sstables does not handle non-frozen UDTs, frozen collections correctly

2016-09-02 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-12605:
---

 Summary: Timestamp-order searching of sstables does not handle 
non-frozen UDTs, frozen collections correctly
 Key: CASSANDRA-12605
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12605
 Project: Cassandra
  Issue Type: Bug
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs


{{SinglePartitionReadCommand.queryNeitherCountersNorCollections()}} is used to 
determine whether we can search sstables in timestamp order.  We cannot use 
this optimization when there are multicell values (such as unfrozen collections 
or UDTs).  However, this method only checks {{column.type.isCollection() || 
column.type.isCounter()}}.  Instead, it should check 
{{column.type.isMulticell() || column.type.isCounter()}}.

This has two implications:
* We are using timestamp-order searching when querying non-frozen UDTs, which 
can lead to incorrect/stale results being returned.
* We are not taking advantage of this optimization when querying frozen 
collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12599) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_pep8_compliance

2016-09-02 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12599.
---
Resolution: Fixed

Resolved with [PR #1301 to 
dtests|https://github.com/riptano/cassandra-dtest/pull/1301].

> dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_pep8_compliance
> ---
>
> Key: CASSANDRA-12599
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12599
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/687/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_pep8_compliance
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 67, 
> in test_pep8_compliance
> p = subprocess.Popen(cmds, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
> raise child_exception
> "[Errno 2] No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12500) Counter cache hit counter not incrementing

2016-09-02 Thread Aditya Pandit (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459556#comment-15459556
 ] 

Aditya Pandit commented on CASSANDRA-12500:
---

The CounterCache registers a hit on an UPDATE of the same row, because it 
re-uses the same Counter object.
example: add another update to the same row:

UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2;

Then check {{nodetool info | grep Cache}}.
I added one more row and updated it again to get 4 entries, and saw 1 hit (for 
the row I updated twice).

{noformat}
Key Cache  : entries 23, size 1.77 KiB, capacity 50 MiB, 70 hits, 
92 requests, 0.761 recent hit rate, 14400 save period in seconds
Row Cache  : entries 0, size 0 bytes, capacity 10 MiB, 0 hits, 0 
requests, NaN recent hit rate, 0 save period in seconds
Counter Cache  : entries 4, size 440 bytes, capacity 20 MiB, 1 hits, 2 
requests, 0.500 recent hit rate, 7200 save period in seconds
Chunk Cache: entries 26, size 1.62 MiB, capacity 219 MiB, 38 
misses, 162 requests, 0.765 recent hit rate, 570.397 microseconds miss latency
{noformat}




> Counter cache hit counter not incrementing 
> ---
>
> Key: CASSANDRA-12500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12500
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Priority: Minor
>
> Trivial repro on 3.7 with scripts below. Haven't dug through 
> {{CounterCacheTest}} to find out if the cache is getting skipped or if it's 
> just not updating the hit counter properly: 
> {code}
> #!/bin/sh
> ccm remove test
> ccm create test -v 3.7 -n 1
> sed -i'' -e 's/row_cache_size_in_mb: 0/row_cache_size_in_mb: 100/g' 
> .ccm/test/node1/conf/cassandra.yaml
> ccm start
> sleep 5
> ccm node1 cqlsh < ~/keyspace.cql
> ccm node1 cqlsh < ~/table-counter.cql
> ccm node1 cqlsh < ~/table-counter-clustering.cql
> echo "Schema created, reads and writes starting"
> ccm node1 nodetool info | grep Cache
> echo "UPDATE test.test SET v=v+1 WHERE id=1; " | ccm node1 cqlsh
> echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=1; " | ccm node1 cqlsh
> echo "UPDATE test.test2 SET v=v+1 WHERE id=1 and c=2; " | ccm node1 cqlsh
> echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> echo "SELECT * FROM test.test2 WHERE id=1 and c=1; " | ccm node1 cqlsh
> ccm node1 nodetool info | grep Cache
> {code}
> Keyspace / tables:
> {code}
> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'}  AND durable_writes = true;
> {code}
> {code}
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v counter
> ) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
> {code}
> {code}
> CREATE TABLE test.test2 (
> id int,
> c int,
> v counter,
> PRIMARY KEY(id, c)
> ) WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
> {code}
> Output:
> {code}
> Schema created, reads and writes starting
> Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 61 hits, 
> 84 requests, 0.726 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 12 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 
> misses, 227 requests, 0.833 recent hit rate, 80.234 microseconds miss latency
>  id | v
> +---
>   1 | 1
> (1 rows)
> Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 70 hits, 
> 93 requests, 0.753 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 3, size 328 bytes, capacity 12 MiB, 0 hits, 
> 3 requests, 0.000 recent hit rate, 7200 save period in seconds
> Chunk Cache: entries 14, size 896 KiB, capacity 91 MiB, 38 
> misses, 288 requests, 0.868 recent hit rate, 80.234 microseconds miss latency
>  id | v
> +---
>   1 | 1
> (1 rows)
> Key Cache  : entries 17, size 1.29 KiB, capacity 24 MiB, 72 hits, 
> 95 requests, 0.758 recent hit rate, 14400 

[jira] [Created] (CASSANDRA-12604) ALTER TABLE is missing RENAME instruction

2016-09-02 Thread Tamer AbdulRadi (JIRA)
Tamer AbdulRadi created CASSANDRA-12604:
---

 Summary: ALTER TABLE is missing RENAME instruction
 Key: CASSANDRA-12604
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12604
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation and Website
Reporter: Tamer AbdulRadi


The ALTER TABLE documentation doesn't show the {{RENAME}} instruction, but it 
is available in cqlsh (cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native 
protocol v4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12603) ALTER TABLE's DROP has superfluous column_name

2016-09-02 Thread Tamer AbdulRadi (JIRA)
Tamer AbdulRadi created CASSANDRA-12603:
---

 Summary: ALTER TABLE's DROP has superfluous column_name
 Key: CASSANDRA-12603
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12603
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation and Website
Reporter: Tamer AbdulRadi


{{alter_table_instruction}} defines the DROP instruction as:

{{DROP column_name ( column_name )*}}

This implies DROP can take multiple space-delimited column names, but that is 
not the case, as tested on cqlsh (cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 
| Native protocol v4)





[jira] [Updated] (CASSANDRA-12237) Cassandra stress graphing is broken

2016-09-02 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12237:
--
Status: Ready to Commit  (was: Patch Available)

> Cassandra stress graphing is broken
> ---
>
> Key: CASSANDRA-12237
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12237
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
> Fix For: 3.x
>
>
> Cassandra stress relies on a tmp file with the stress output so it can parse 
> the file and put it into the graph HTML.
> However, the contents of this file are now broken:
> {code}
> Sleeping 2s...Sleeping 2s...
> Sleeping 2s...
> Warming up WRITE with 5 iterations...Warming up WRITE with 5 
> iterations...
> Warming up WRITE with 5 iterations...
> Running WRITE with 500 threads 10 secondsRunning WRITE with 500 threads 10 
> seconds
> Running WRITE with 500 threads 10 seconds
> ...
> {code}
> This is because we create a {{MultiPrintStream}} that inherits from 
> {{PrintWriter}} and then delegates each call to super as well as to a list 
> of other PrintWriters.
> The call to super for println comes back into our print method, so every line 
> gets logged multiple times as the for (PrintStream s: newStreams) loop runs 
> repeatedly.
> We can change this to use composition, with our own interface, if we want a 
> composite for logging the results.
> This results in the parsing of this file not quite working and the aggregate 
> stats not appearing in the produced graphs.
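The composition-based fix described in the ticket might look roughly like the
sketch below. The class and method names are illustrative, not Cassandra's
actual code: each delegate receives a line exactly once, and since nothing
calls super, the call can never re-enter the print method and duplicate lines.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Sketch of a composition-based multi-stream writer. Unlike an
// inheritance-based MultiPrintStream, println never calls super, so the
// delegation loop runs exactly once per line.
public class CompositeOut {
    private final PrintStream[] delegates;

    public CompositeOut(PrintStream... streams) {
        this.delegates = streams;
    }

    public void println(String line) {
        // Forward the line to each delegate exactly once.
        for (PrintStream s : delegates) {
            s.println(line);
        }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream fileSink = new ByteArrayOutputStream();
        ByteArrayOutputStream consoleSink = new ByteArrayOutputStream();
        CompositeOut out = new CompositeOut(
                new PrintStream(fileSink), new PrintStream(consoleSink));
        out.println("Running WRITE with 500 threads 10 seconds");
        // Each sink sees the line once, so the tmp file parses cleanly.
        System.out.print(fileSink.toString());
    }
}
```

Because the composite owns its delegates rather than being one of them, the
re-entrancy that duplicated every line in the stress tmp file cannot occur.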





[jira] [Commented] (CASSANDRA-12237) Cassandra stress graphing is broken

2016-09-02 Thread Christopher Batey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459409#comment-15459409
 ] 

Christopher Batey commented on CASSANDRA-12237:
---

Fixed

> Cassandra stress graphing is broken
> ---
>
> Key: CASSANDRA-12237
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12237
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
> Fix For: 3.x
>
>
> Cassandra stress relies on a tmp file with the stress output so it can parse 
> the file and put it into the graph HTML.
> However, the contents of this file are now broken:
> {code}
> Sleeping 2s...Sleeping 2s...
> Sleeping 2s...
> Warming up WRITE with 5 iterations...Warming up WRITE with 5 
> iterations...
> Warming up WRITE with 5 iterations...
> Running WRITE with 500 threads 10 secondsRunning WRITE with 500 threads 10 
> seconds
> Running WRITE with 500 threads 10 seconds
> ...
> {code}
> This is because we create a {{MultiPrintStream}} that inherits from 
> {{PrintWriter}} and then delegates each call to super as well as to a list 
> of other PrintWriters.
> The call to super for println comes back into our print method, so every line 
> gets logged multiple times as the for (PrintStream s: newStreams) loop runs 
> repeatedly.
> We can change this to use composition, with our own interface, if we want a 
> composite for logging the results.
> This results in the parsing of this file not quite working and the aggregate 
> stats not appearing in the produced graphs.





[jira] [Commented] (CASSANDRA-11248) (windows) dtest failure in commitlog_test.TestCommitLog.stop_failure_policy_test and stop_commit_failure_policy_test

2016-09-02 Thread Alessio (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459280#comment-15459280
 ] 

Alessio commented on CASSANDRA-11248:
-

Definitely a major bug, still occurring on Cassandra 3.7 (Mac OS X).

> (windows) dtest failure in 
> commitlog_test.TestCommitLog.stop_failure_policy_test and 
> stop_commit_failure_policy_test
> 
>
> Key: CASSANDRA-11248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11248
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/167/testReport/commitlog_test/TestCommitLog/stop_failure_policy_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #167
> failing intermittently, looks possibly related to CASSANDRA-11242 with:
> {noformat}
> Cannot find the commitlog failure message in logs
> {noformat}
> But there's another suspect message here not present on 11242, which is
> {noformat}
> [node1 ERROR] Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file 
> D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra\/logs/gc.log due to 
> No such file or directory
> {noformat}





[jira] [Updated] (CASSANDRA-12602) Update website Community page

2016-09-02 Thread Dave Lester (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Lester updated CASSANDRA-12602:

Attachment: website-12602.txt

> Update website Community page
> -
>
> Key: CASSANDRA-12602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12602
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation and Website
>Reporter: Dave Lester
>Priority: Trivial
> Attachments: website-12602.txt
>
>
> This patch makes the following changes:
> * Adds a link to the #cassandra-dev IRC archive
> * Adds two additional books and one academic paper to the publications list on 
> the community page
> * Fixes a typo in the website README instructions
> This patch can be applied to the website via Subversion: 
> https://svn.apache.org/repos/asf/cassandra/site/





[jira] [Updated] (CASSANDRA-12602) Update website Community page

2016-09-02 Thread Dave Lester (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Lester updated CASSANDRA-12602:

Status: Patch Available  (was: Open)

> Update website Community page
> -
>
> Key: CASSANDRA-12602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12602
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation and Website
>Reporter: Dave Lester
>Priority: Trivial
> Attachments: website-12602.txt
>
>
> This patch makes the following changes:
> * Adds a link to the #cassandra-dev IRC archive
> * Adds two additional books and one academic paper to the publications list on 
> the community page
> * Fixes a typo in the website README instructions
> This patch can be applied to the website via Subversion: 
> https://svn.apache.org/repos/asf/cassandra/site/





[jira] [Created] (CASSANDRA-12602) Update website Community page

2016-09-02 Thread Dave Lester (JIRA)
Dave Lester created CASSANDRA-12602:
---

 Summary: Update website Community page
 Key: CASSANDRA-12602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12602
 Project: Cassandra
  Issue Type: Task
  Components: Documentation and Website
Reporter: Dave Lester
Priority: Trivial


This patch makes the following changes:

* Adds a link to the #cassandra-dev IRC archive
* Adds two additional books and one academic paper to the publications list on 
the community page
* Fixes a typo in the website README instructions

This patch can be applied to the website via Subversion: 
https://svn.apache.org/repos/asf/cassandra/site/





[jira] [Commented] (CASSANDRA-12597) Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch

2016-09-02 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459232#comment-15459232
 ] 

Dikang Gu commented on CASSANDRA-12597:
---

[~jjordan] we have a custom tool to set the severity over JMX; we use severity 
pretty frequently when we need to do maintenance on our clusters.

I'm happy to add another nodetool command to set the severity number as well. I 
think that can be tracked in a separate JIRA. What do you think?

Thanks!

> Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch
> -
>
> Key: CASSANDRA-12597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12597
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
>
> CASSANDRA-11737 and CASSANDRA-11738 add the option to allow disabling the 
> severity in DynamicEndpointSnitch. I think it would be useful to also add 
> nodetool command to enable/disable the functionality, so that we can switch 
> it on and off without restarting the node.





[jira] [Updated] (CASSANDRA-12597) Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch

2016-09-02 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-12597:

Status: In Progress  (was: Patch Available)

> Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch
> -
>
> Key: CASSANDRA-12597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12597
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
>
> CASSANDRA-11737 and CASSANDRA-11738 add the option to allow disabling the 
> severity in DynamicEndpointSnitch. I think it would be useful to also add 
> nodetool command to enable/disable the functionality, so that we can switch 
> it on and off without restarting the node.





[jira] [Commented] (CASSANDRA-12597) Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch

2016-09-02 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459214#comment-15459214
 ] 

Jeremiah Jordan commented on CASSANDRA-12597:
-

After CASSANDRA-11738, the only severity that can exist is severity that you 
manually set on servers over JMX.  Not sure it makes sense to add nodetool 
commands to enable/disable using it when we don't have commands to set it.  If 
we want to add commands to set it as well, we should also add documentation on 
what it is and what it does.

> Add a tool to enable/disable the use of Severity in the DynamicEndpointSnitch
> -
>
> Key: CASSANDRA-12597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12597
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 3.x
>
>
> CASSANDRA-11737 and CASSANDRA-11738 add the option to allow disabling the 
> severity in DynamicEndpointSnitch. I think it would be useful to also add 
> nodetool command to enable/disable the functionality, so that we can switch 
> it on and off without restarting the node.





[jira] [Commented] (CASSANDRA-12588) Cannot find column durable_wrıtes

2016-09-02 Thread LLc. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459193#comment-15459193
 ] 

LLc. commented on CASSANDRA-12588:
--

Hi,

Cassandra 3.7, OS X El Capitan, java version "1.8.0_102".

Running {{cassandra -f}} produces this error:

Exception (java.lang.AssertionError) encountered during startup: Cannot find 
column durable_wrıtes
java.lang.AssertionError: Cannot find column durable_wrıtes
at 
org.apache.cassandra.db.RowUpdateBuilder.add(RowUpdateBuilder.java:271)
at 
org.apache.cassandra.schema.SchemaKeyspace.makeCreateKeyspaceMutation(SchemaKeyspace.java:395)
at 
org.apache.cassandra.schema.SchemaKeyspace.makeCreateKeyspaceMutation(SchemaKeyspace.java:402)
at 
org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:267)
at 
org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:470)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:343)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
ERROR 17:55:40 Exception encountered during startup
java.lang.AssertionError: Cannot find column durable_wrıtes
at 
org.apache.cassandra.db.RowUpdateBuilder.add(RowUpdateBuilder.java:271) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.schema.SchemaKeyspace.makeCreateKeyspaceMutation(SchemaKeyspace.java:395)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.schema.SchemaKeyspace.makeCreateKeyspaceMutation(SchemaKeyspace.java:402)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:267)
 ~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:470) 
~[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:343) 
[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585) 
[apache-cassandra-3.7.jar:3.7]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) 
[apache-cassandra-3.7.jar:3.7]



> Cannot find column durable_wrıtes
> -
>
> Key: CASSANDRA-12588
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12588
> Project: Cassandra
>  Issue Type: Bug
>Reporter: LLc.
>
> help please
> run :
> cassandra -f
> ERROR 17:00:16 Exception encountered during startup
> java.lang.AssertionError: Cannot find column durable_wrıtes





[jira] [Updated] (CASSANDRA-12598) BailErrorStrategy alike for ANTLR grammar parsing

2016-09-02 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-12598:

Fix Version/s: (was: 3.10)
   3.x

> BailErrorStrategy alike for ANTLR grammar parsing
> -
>
> Key: CASSANDRA-12598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12598
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Berenguer Blasi
> Fix For: 3.x
>
>
> CQL parsing is missing a mechanism similar to 
> http://www.antlr.org/api/Java/org/antlr/v4/runtime/BailErrorStrategy.html
> This solves:
> - Stopping parsing instead of continuing once we already have an error, which 
> is wasteful.
> - Any skipped Java code tied to 'recovered' missing tokens might later cause 
> Java exceptions (think uninitialized variables, non-incremented integers (div 
> by zero), etc.) which bubble up directly and hide properly formatted error 
> messages from the user, with no indication of what went wrong at all, e.g. 
> just a cryptic NPE.





[jira] [Resolved] (CASSANDRA-12600) dtest failure in internode_ssl_test.TestInternodeSSL.putget_with_internode_ssl_test

2016-09-02 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-12600.
---
Resolution: Duplicate

> dtest failure in 
> internode_ssl_test.TestInternodeSSL.putget_with_internode_ssl_test
> ---
>
> Key: CASSANDRA-12600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/388/testReport/internode_ssl_test/TestInternodeSSL/putget_with_internode_ssl_test/
> {code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [MigrationStage:1] 2016-09-01 02:30:51,453 FailureDetector.java:250 - 
> Unknown endpoint: /127.0.0.1
> java.lang.IllegalArgumentException: 
>   at 
> org.apache.cassandra.gms.FailureDetector.isAlive(FailureDetector.java:250) 
> [main/:na]
>   at 
> org.apache.cassandra.service.MigrationTask.runMayThrow(MigrationTask.java:74) 
> [main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}





[jira] [Commented] (CASSANDRA-12237) Cassandra stress graphing is broken

2016-09-02 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15459027#comment-15459027
 ] 

Joel Knighton commented on CASSANDRA-12237:
---

One more nit - the braces should be dropped in [this 
conditional|https://github.com/chbatey/cassandra-1/commit/24ded4c6ba0769c0047297df4f740a7641794b9b#diff-fd2f2d2364937fcb1c0d73c8314f1418R65]
 in accordance with the code style. The rest of the changes look good.

Sorry for not catching these in the first pass of review - that's my fault.

> Cassandra stress graphing is broken
> ---
>
> Key: CASSANDRA-12237
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12237
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
> Fix For: 3.x
>
>
> Cassandra stress relies on a tmp file with the stress output so it can parse 
> the file and put it into the graph HTML.
> However, the contents of this file are now broken:
> {code}
> Sleeping 2s...Sleeping 2s...
> Sleeping 2s...
> Warming up WRITE with 5 iterations...Warming up WRITE with 5 
> iterations...
> Warming up WRITE with 5 iterations...
> Running WRITE with 500 threads 10 secondsRunning WRITE with 500 threads 10 
> seconds
> Running WRITE with 500 threads 10 seconds
> ...
> {code}
> This is because we create a {{MultiPrintStream}} that inherits from 
> {{PrintWriter}} and then delegates each call to super as well as to a list 
> of other PrintWriters.
> The call to super for println comes back into our print method, so every line 
> gets logged multiple times as the for (PrintStream s: newStreams) loop runs 
> repeatedly.
> We can change this to use composition, with our own interface, if we want a 
> composite for logging the results.
> This results in the parsing of this file not quite working and the aggregate 
> stats not appearing in the produced graphs.





[jira] [Comment Edited] (CASSANDRA-12594) sstabledump fails on frozen collection cells

2016-09-02 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458966#comment-15458966
 ] 

Andy Tolbert edited comment on CASSANDRA-12594 at 9/2/16 4:24 PM:
--

That is understandable (y).  I was working around 
{{ColumnDefinition.cellValueType()}} returning the same thing it returns for a 
non-frozen collection.   Wouldn't there also be an issue in the tools' 
dependence on {{AbstractType#getString}} to get a string representation of 
data?  Currently collections return a hex string representation for 
{{getString}} (via {{CollectionType}}), which doesn't seem useful for the 
sstabledump output.  Should the various {{CollectionType}} implementations 
implement their own {{getString}} as well?


was (Author: andrew.tolbert):
That is understandable (y).  I was working around 
{{ColumnDefinition.cellValueType()}} returning the same thing it returns for a 
non-frozen collection.   Wouldn't there also be an issue in the tools' 
dependence on {{AbstractType#getString}} to get a string representation of 
data?  Currently collections return a hex string representation for 
{{getString}} (via {{CollectionType}}), which doesn't seem useful for the 
sstabledump output.  Should the various {{CollectionType}} implementations 
implement their own {{getString}} as well?

> sstabledump fails on frozen collection cells
> 
>
> Key: CASSANDRA-12594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Andy Tolbert
>Priority: Minor
> Fix For: 3.0.9, 3.9
>
> Attachments: CASSANDRA-12594-3.0.txt, CASSANDRA-12594-3.0_2.txt
>
>
> sstabledump throws an exception when attempting to parse a cell that is a 
> frozen collection, i.e.:
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "liveness_info" : { "tstamp" : "2016-09-01T22:06:45.670810Z" },
> "cells" : [
>   { "name" : "m", "value" }
> ] }
> ] }
> ]Exception in thread "main" java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:613)
>   at 
> org.apache.cassandra.db.marshal.TupleType.getString(TupleType.java:211)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:441)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:375)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:279)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:214)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>   at java.util.Iterator.forEachRemaining(Iterator.java:116)
>   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:102)
>   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:242)
> {noformat}
> This is because the code doesn't consider that the cell may be a frozen 
> collection, and attempts to get the string representation using the value 
> type which doesn't work.
> Example data:
> {noformat}
> CREATE KEYSPACE simple WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> CREATE TABLE simple.unfrozen_map (
> k int PRIMARY KEY,
> m map<text, frozen<tuple<text, text>>>
> );
> CREATE TABLE simple.frozen_map (
> k int PRIMARY KEY,
> m frozen<map<text, tuple<text, text>>>
> );
> insert into unfrozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> insert into frozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> {noformat}
> unfrozen_map will properly dump each cell individually, but frozen_map fails.





[jira] [Commented] (CASSANDRA-12594) sstabledump fails on frozen collection cells

2016-09-02 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458966#comment-15458966
 ] 

Andy Tolbert commented on CASSANDRA-12594:
--

That is understandable (y).  I was working around 
{{ColumnDefinition.cellValueType()}} returning the same thing it returns for a 
non-frozen collection.   Wouldn't there also be an issue in the tools' 
dependence on {{AbstractType#getString}} to get a string representation of 
data?  Currently collections return a hex string representation for 
{{getString}} (via {{CollectionType}}), which doesn't seem useful for the 
sstabledump output.  Should the various {{CollectionType}} implementations 
implement their own {{getString}} as well?

> sstabledump fails on frozen collection cells
> 
>
> Key: CASSANDRA-12594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Andy Tolbert
>Priority: Minor
> Fix For: 3.0.9, 3.9
>
> Attachments: CASSANDRA-12594-3.0.txt, CASSANDRA-12594-3.0_2.txt
>
>
> sstabledump throws an exception when attempting to parse a cell that is a 
> frozen collection, i.e.:
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "liveness_info" : { "tstamp" : "2016-09-01T22:06:45.670810Z" },
> "cells" : [
>   { "name" : "m", "value" }
> ] }
> ] }
> ]Exception in thread "main" java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:613)
>   at 
> org.apache.cassandra.db.marshal.TupleType.getString(TupleType.java:211)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:441)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:375)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:279)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:214)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>   at java.util.Iterator.forEachRemaining(Iterator.java:116)
>   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:102)
>   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:242)
> {noformat}
> This is because the code doesn't consider that the cell may be a frozen 
> collection, and attempts to get the string representation using the value 
> type which doesn't work.
> Example data:
> {noformat}
> CREATE KEYSPACE simple WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> CREATE TABLE simple.unfrozen_map (
> k int PRIMARY KEY,
> m map<text, frozen<tuple<text, text>>>
> );
> CREATE TABLE simple.frozen_map (
> k int PRIMARY KEY,
> m frozen<map<text, tuple<text, text>>>
> );
> insert into unfrozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> insert into frozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> {noformat}
> unfrozen_map will properly dump each cell individually, but frozen_map fails.





[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2016-09-02 Thread Brett Snyder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458931#comment-15458931
 ] 

Brett Snyder commented on CASSANDRA-10271:
--

[~blerer] Should this only be done in the 3.x branch now?  It appears 
CASSANDRA-10707 was only done in 3.x.

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Brett Snyder
>Priority: Minor
> Fix For: 2.2.x, 3.x
>
> Attachments: cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}
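The proposed relaxation amounts to a validation rule: drop equality-restricted clustering columns before checking that ORDER BY is a prefix of the clustering order. A minimal sketch (the function name is illustrative, not Cassandra's validator):

```python
# Sketch of the proposed rule (illustrative, not Cassandra's validator):
# drop equality-restricted clustering columns, then require ORDER BY to
# be a prefix of what remains of the clustering order.
def order_by_is_valid(clustering_cols, eq_restricted, order_by):
    remaining = [c for c in clustering_cols if c not in eq_restricted]
    return remaining[:len(order_by)] == list(order_by)

assert order_by_is_valid(["b", "c"], {"b"}, ["c"])       # newly allowed
assert not order_by_is_valid(["b", "c"], set(), ["c"])   # still invalid
assert order_by_is_valid(["b", "c"], set(), ["b", "c"])  # classic form
```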





[jira] [Commented] (CASSANDRA-12591) Re-evaluate the default 160MB sstable_size_in_mb choice in LCS

2016-09-02 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458913#comment-15458913
 ] 

Edward Capriolo commented on CASSANDRA-12591:
-

I would argue that a spindle-based system is no longer the common case and 
that most deployments are SSD-based. Compaction time also differs with write 
patterns; important factors include unique partitions, cells per row, number 
of overwrites, % tombstones, and % TTL'd data. I mention this because I have 
seen benchmark data that is impressive but not always applicable to 
real-world data.

> Re-evaluate the default 160MB sstable_size_in_mb choice in LCS
> --
>
> Key: CASSANDRA-12591
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12591
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>  Labels: lcs
>
> There has been some effort from CASSANDRA-5727 in benchmarking and evaluating 
> the best max_sstable_size used by LeveledCompactionStrategy, and the 
> conclusion derived from that effort was to use 160MB as the most optimal size 
> for both throughput (i.e. the time spent on compaction, the smaller the 
> better) and the amount of bytes compacted (to avoid write amplification, the 
> less the better).
> However, when I read more into that test report (the short 
> [comment|https://issues.apache.org/jira/browse/CASSANDRA-5727?focusedCommentId=13722571&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13722571]
>  describing the tests), I realized it was conducted on hardware with the 
> following configuration: "a single rackspace node with 2GB of ram." I'm not 
> sure whether this was an acceptable hardware configuration for production 
> Cassandra deployments at the time (mid-2013), but it is definitely far 
> below today's hardware standard.
> Given that we now have compaction-stress, which can generate SSTables from a 
> user-defined stress profile with a user-defined table schema and compaction 
> parameters (compatible with cassandra-stress), it would be useful to revisit 
> this number on a more realistic hardware configuration and see whether 160MB 
> is still the optimal choice. It might also affect our perceived "practical" 
> node density with LCS if a bigger max_sstable_size turns out to work better, 
> since it would allow fewer SSTables (and hence fewer levels and less write 
> amplification) per node at higher density.
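As a hedged back-of-envelope (assuming the textbook LCS layout in which level n holds roughly fanout^n sstables of max_sstable_size, and ignoring L0), the interaction between node density and sstable size can be estimated like this:

```python
# Hedged back-of-envelope: assume level n holds ~fanout**n sstables of
# sstable_size_mb (the textbook LCS layout, ignoring L0). Node density
# then fixes the level count, and each extra level costs roughly one
# more rewrite of the data (write amplification).
def lcs_levels(node_density_mb, sstable_size_mb, fanout=10):
    levels, capacity = 1, fanout * sstable_size_mb
    while capacity < node_density_mb:
        levels += 1
        capacity *= fanout
    return levels

# ~1 TB per node: 160 MB sstables need one more level than 1280 MB ones
print(lcs_levels(1_000_000, 160), lcs_levels(1_000_000, 1280))  # 4 3
```

This is only an estimate of the shape of the trade-off, not a benchmark result; real level counts depend on L0 behavior, overlap, and compaction backlog.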





[jira] [Updated] (CASSANDRA-12601) dtest failure in auth_test.TestAuth.auth_metrics_test

2016-09-02 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12601:
--
Description: 
This failure is happening on many different tests in [trunk_offheap_dtest 
#389|http://cassci.datastax.com/job/trunk_offheap_dtest/389/].

One example:

http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/

From the logs:
{code}
ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host name 
unknown: java.net.UnknownHostException: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
{code}

  was:
This failure is happening on many different tests in [trunk_offheap_dtest 
#389|http://cstar-dashboards.datastax.com/?job=trunk_offheap_dtest&build=389].

One example:

http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/

From the logs:
{code}
ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host name 
unknown: java.net.UnknownHostException: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
{code}


> dtest failure in auth_test.TestAuth.auth_metrics_test
> -
>
> Key: CASSANDRA-12601
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12601
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> This failure is happening on many different tests in [trunk_offheap_dtest 
> #389|http://cassci.datastax.com/job/trunk_offheap_dtest/389/].
> One example:
> http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/
> From the logs:
> {code}
> ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host 
> name unknown: java.net.UnknownHostException: 
> openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
> openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
> {code}





[jira] [Updated] (CASSANDRA-12601) dtest failure in auth_test.TestAuth.auth_metrics_test

2016-09-02 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12601:
--
Description: 
This failure is happening on many different tests in [trunk_offheap_dtest 
#389|http://cstar-dashboards.datastax.com/?job=trunk_offheap_dtest&build=389].

One example:

http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/

From the logs:
{code}
ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host name 
unknown: java.net.UnknownHostException: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
{code}

  was:
This failure is happening on many different tests in trunk_offheap_dtest.

One example:

http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/

From the logs:
{code}
ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host name 
unknown: java.net.UnknownHostException: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
{code}


> dtest failure in auth_test.TestAuth.auth_metrics_test
> -
>
> Key: CASSANDRA-12601
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12601
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> This failure is happening on many different tests in [trunk_offheap_dtest 
> #389|http://cstar-dashboards.datastax.com/?job=trunk_offheap_dtest&build=389].
> One example:
> http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/
> From the logs:
> {code}
> ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host 
> name unknown: java.net.UnknownHostException: 
> openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
> openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
> {code}





[jira] [Created] (CASSANDRA-12601) dtest failure in auth_test.TestAuth.auth_metrics_test

2016-09-02 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12601:
-

 Summary: dtest failure in auth_test.TestAuth.auth_metrics_test
 Key: CASSANDRA-12601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12601
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log

This failure is happening on many different tests in trunk_offheap_dtest.

One example:

http://cassci.datastax.com/job/trunk_offheap_dtest/389/testReport/auth_test/TestAuth/auth_metrics_test/

From the logs:
{code}
ERROR [main] 2016-09-02 01:13:43,688 CassandraDaemon.java:752 - Local host name 
unknown: java.net.UnknownHostException: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: 
openstack-cassci-external-df85c4d-jenkins-trunk-offheap-dtest-3: unknown error
{code}





[jira] [Created] (CASSANDRA-12600) dtest failure in internode_ssl_test.TestInternodeSSL.putget_with_internode_ssl_test

2016-09-02 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12600:
-

 Summary: dtest failure in 
internode_ssl_test.TestInternodeSSL.putget_with_internode_ssl_test
 Key: CASSANDRA-12600
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12600
 Project: Cassandra
  Issue Type: Bug
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/trunk_offheap_dtest/388/testReport/internode_ssl_test/TestInternodeSSL/putget_with_internode_ssl_test/

{code}
Standard Output

Unexpected error in node2 log, error: 
ERROR [MigrationStage:1] 2016-09-01 02:30:51,453 FailureDetector.java:250 - 
Unknown endpoint: /127.0.0.1
java.lang.IllegalArgumentException: 
at 
org.apache.cassandra.gms.FailureDetector.isAlive(FailureDetector.java:250) 
[main/:na]
at 
org.apache.cassandra.service.MigrationTask.runMayThrow(MigrationTask.java:74) 
[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}





[jira] [Created] (CASSANDRA-12599) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_pep8_compliance

2016-09-02 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12599:
-

 Summary: dtest failure in 
cqlsh_tests.cqlsh_tests.TestCqlsh.test_pep8_compliance
 Key: CASSANDRA-12599
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12599
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest/687/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_pep8_compliance

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
wrapped
f(obj)
  File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 67, 
in test_pep8_compliance
p = subprocess.Popen(cmds, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
"[Errno 2] No such file or directory
{code}





[jira] [Updated] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-09-02 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11195:
---
Reviewer: Sylvain Lebresne

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.





[jira] [Commented] (CASSANDRA-11195) paging may returns incomplete results on small page size

2016-09-02 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458674#comment-15458674
 ] 

Russ Hatch commented on CASSANDRA-11195:


Full upgrade suite runs look good. There was a single failure on each due to an 
unrelated dtest import problem, for which I've opened a dtest PR.

> paging may returns incomplete results on small page size
> 
>
> Key: CASSANDRA-11195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11195
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Benjamin Lerer
>  Labels: dtest
> Attachments: allfiles.tar.gz, node1.log, node1_debug.log, node2.log, 
> node2_debug.log
>
>
> This was found through a flapping test, and running that test is still the 
> easiest way to repro the issue. On CI we're seeing a 40-50% failure rate, but 
> locally this test fails much less frequently.
> If I attach a python debugger and re-query the "bad" query, it continues to 
> return incomplete data indefinitely. If I go directly to cqlsh I can see all 
> rows just fine.





[jira] [Commented] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0

2016-09-02 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458453#comment-15458453
 ] 

Marcus Eriksson commented on CASSANDRA-12202:
-

Seems the multiplexed run timed out; running a new try with 
[this|https://github.com/krummas/cassandra/commits/marcuse/12202-2.2-multiplex] 
to actively cancel the ongoing compactions instead of waiting for them to finish.

> LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
> ---
>
> Key: CASSANDRA-12202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12202
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix 
> to 2.1+





[jira] [Updated] (CASSANDRA-12413) CompactionsCQLTest.testTriggerMinorCompactionDTCS fails

2016-09-02 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12413:

   Resolution: Fixed
Fix Version/s: (was: 3.9)
   3.10
   Status: Resolved  (was: Ready to Commit)

committed, thanks

> CompactionsCQLTest.testTriggerMinorCompactionDTCS fails
> ---
>
> Key: CASSANDRA-12413
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12413
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Marcus Eriksson
>  Labels: unittest
> Fix For: 3.10
>
>
> [Link|http://cassci.datastax.com/job/cassandra-3.9_testall/lastCompletedBuild/testReport/org.apache.cassandra.db.compaction/CompactionsCQLTest/testTriggerMinorCompactionDTCS/]
> Error Message
> No minor compaction triggered in 5000ms
> Stacktrace
> {noformat}
> junit.framework.AssertionFailedError: No minor compaction triggered in 5000ms
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.waitForMinor(CompactionsCQLTest.java:247)
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS(CompactionsCQLTest.java:72)
> {noformat}





cassandra git commit: Use weak references in compaction logger to avoid strong ref loops

2016-09-02 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk cdf02f668 -> 6bb757715


Use weak references in compaction logger to avoid strong ref loops

Patch by marcuse; reviewed by Carl Yeksigian for CASSANDRA-12413


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bb75771
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bb75771
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bb75771

Branch: refs/heads/trunk
Commit: 6bb7577157e553dc81c280b3c83862e7a397219d
Parents: cdf02f6
Author: Marcus Eriksson 
Authored: Thu Aug 11 09:33:07 2016 +0200
Committer: Marcus Eriksson 
Committed: Fri Sep 2 14:35:52 2016 +0200

--
 .../db/compaction/CompactionLogger.java | 28 
 1 file changed, 23 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bb75771/src/java/org/apache/cassandra/db/compaction/CompactionLogger.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionLogger.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionLogger.java
index 16a7f2a..c8def3d 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionLogger.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionLogger.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 
 import java.io.IOException;
 import java.io.OutputStreamWriter;
+import java.lang.ref.WeakReference;
 import java.nio.file.*;
 import java.util.Collection;
 import java.util.HashSet;
@@ -32,6 +33,7 @@ import java.util.concurrent.atomic.AtomicInteger;
 import java.util.function.Consumer;
 import java.util.function.Function;
 
+import com.google.common.collect.MapMaker;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -102,20 +104,23 @@ public class CompactionLogger
 private static final JsonNodeFactory json = JsonNodeFactory.instance;
 private static final Logger logger = 
LoggerFactory.getLogger(CompactionLogger.class);
 private static final Writer serializer = new CompactionLogSerializer();
-private final ColumnFamilyStore cfs;
-private final CompactionStrategyManager csm;
+private final WeakReference<ColumnFamilyStore> cfsRef;
+private final WeakReference<CompactionStrategyManager> csmRef;
 private final AtomicInteger identifier = new AtomicInteger(0);
-private final Map 
compactionStrategyMapping = new ConcurrentHashMap<>();
+private final Map 
compactionStrategyMapping = new MapMaker().weakKeys().makeMap();
 private final AtomicBoolean enabled = new AtomicBoolean(false);
 
 public CompactionLogger(ColumnFamilyStore cfs, CompactionStrategyManager 
csm)
 {
-this.csm = csm;
-this.cfs = cfs;
+csmRef = new WeakReference<>(csm);
+cfsRef = new WeakReference<>(cfs);
 }
 
 private void forEach(Consumer<AbstractCompactionStrategy> consumer)
 {
+CompactionStrategyManager csm = csmRef.get();
+if (csm == null)
+return;
 csm.getStrategies()
.forEach(l -> l.forEach(consumer));
 }
@@ -129,7 +134,10 @@ public class CompactionLogger
 
 private ArrayNode sstableMap(Collection<SSTableReader> sstables, 
CompactionStrategyAndTableFunction csatf)
 {
+CompactionStrategyManager csm = csmRef.get();
 ArrayNode node = json.arrayNode();
+if (csm == null)
+return node;
 sstables.forEach(t -> 
node.add(csatf.apply(csm.getCompactionStrategyFor(t), t)));
 return node;
 }
@@ -142,6 +150,10 @@ public class CompactionLogger
 private JsonNode formatSSTables(AbstractCompactionStrategy strategy)
 {
 ArrayNode node = json.arrayNode();
+CompactionStrategyManager csm = csmRef.get();
+ColumnFamilyStore cfs = cfsRef.get();
+if (csm == null || cfs == null)
+return node;
 for (SSTableReader sstable : cfs.getLiveSSTables())
 {
 if (csm.getCompactionStrategyFor(sstable) == strategy)
@@ -165,6 +177,9 @@ public class CompactionLogger
 private JsonNode startStrategy(AbstractCompactionStrategy strategy)
 {
 ObjectNode node = json.objectNode();
+CompactionStrategyManager csm = csmRef.get();
+if (csm == null)
+return node;
 node.put("strategyId", getId(strategy));
 node.put("type", strategy.getName());
 node.put("tables", formatSSTables(strategy));
@@ -200,6 +215,9 @@ public class CompactionLogger
 
 private void describeStrategy(ObjectNode node)
 {
+ColumnFamilyStore cfs = cfsRef.get();
+if (cfs == null)
+return;
 node.put("keyspace", cfs.keyspace.getName());
 node.put("table", cfs.getTableName());
 node.put("time", System.currentTimeMillis());
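The pattern in the patch, in miniature: the logger keeps only weak references to its owners and silently no-ops once they are collected, so it can never pin the ColumnFamilyStore or CompactionStrategyManager alive through a reference loop. A Python sketch of the same idea:

```python
import gc
import weakref

# Miniature of the patch: hold the owner only via a weak reference and
# no-op when it has been collected, so the logger never keeps the owner
# (or a strong-reference loop through it) alive.
class Owner:
    pass

class Logger:
    def __init__(self, owner):
        self._ref = weakref.ref(owner)

    def describe(self):
        owner = self._ref()
        if owner is None:   # owner already collected: bail out, as in forEach()
            return None
        return "alive"

o = Owner()
log = Logger(o)
print(log.describe())  # alive
del o
gc.collect()
print(log.describe())  # None
```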



[jira] [Commented] (CASSANDRA-12587) Log when there is a timestamp tie that is being broken

2016-09-02 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458343#comment-15458343
 ] 

Jason Brown commented on CASSANDRA-12587:
-

I was thinking that it might be easier to trigger a monitoring system alert 
based on a metric going above zero, rather than simply monitoring logs. The log 
would be more informational, for sure, with the CQL partition.

With or without a metric is fine with me, but the log entry I'm definitely +1 on

> Log when there is a timestamp tie that is being broken
> --
>
> Key: CASSANDRA-12587
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12587
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>
> When there is a timestamp tie, it can be very difficult to discern what has 
> happened, since currently the columns will resolve individually by value.  
> CASSANDRA-6123 would make this a bit more deterministic, but that would also 
> make scenarios like this nearly impossible to troubleshoot.  Since timestamp 
> ties *should* be fairly rare, I propose we at least log the row key that had 
> a tie so operators are aware that something that should almost never happen, 
> is happening.
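One plausible shape for the proposal (names and the value-based tie-break rule here are illustrative, not Cassandra's actual reconcile code): log the partition key whenever two versions carry an equal timestamp, then fall back to the existing comparison:

```python
import logging

# Illustrative sketch, not Cassandra's reconcile code: surface timestamp
# ties in the log before applying the existing value-based tie-break.
log = logging.getLogger("reconcile")

def reconcile(partition_key, left, right):
    (ts_left, value_left), (ts_right, value_right) = left, right
    if ts_left == ts_right:
        log.warning("timestamp tie on partition %s; breaking by value",
                    partition_key)
        return left if value_left >= value_right else right
    return left if ts_left > ts_right else right

assert reconcile("pk1", (10, "b"), (10, "a")) == (10, "b")   # tie, logged
assert reconcile("pk1", (11, "a"), (10, "b")) == (11, "a")   # normal case
```

A metric, as discussed in the comments, would simply increment a counter in the tie branch.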





[jira] [Commented] (CASSANDRA-12237) Cassandra stress graphing is broken

2016-09-02 Thread Christopher Batey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15458257#comment-15458257
 ] 

Christopher Batey commented on CASSANDRA-12237:
---

[~iamaleksey] Updated the commit msg, CHANGES.txt and StressAction.

> Cassandra stress graphing is broken
> ---
>
> Key: CASSANDRA-12237
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12237
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
> Fix For: 3.x
>
>
> Cassandra stress relies on a tmp file with the stress output so it can parse 
> it and put it in the graph html.
> However, the contents of this file are now broken:
> {code}
> Sleeping 2s...Sleeping 2s...
> Sleeping 2s...
> Warming up WRITE with 5 iterations...Warming up WRITE with 5 
> iterations...
> Warming up WRITE with 5 iterations...
> Running WRITE with 500 threads 10 secondsRunning WRITE with 500 threads 10 
> seconds
> Running WRITE with 500 threads 10 seconds
> ...
> {code}
> This is because we create a {code}MultiPrintStream{code} that inherits from 
> {code}PrintWriter{code} and then delegates each call to super as well as to a 
> list of other PrintWriters.
> The call to super for println comes back into our print method, so every line 
> gets logged multiple times as the for (PrintStream s: newStreams) loop runs 
> again.
> We could change this to use composition, with our own interface, if we want a 
> composite for logging the results.
> This results in the parsing of the file not quite working and the aggregate 
> stats missing from the produced graphs.
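The re-entrancy bug described above is easy to reproduce in miniature: the superclass implements println() in terms of the virtual print(), so a subclass that both delegates in println() and fans print() out to extra streams writes each line twice. A toy Python model (not the actual MultiPrintStream code):

```python
import io

# Toy model of the bug: println() in the base class calls the *virtual*
# print(), so a subclass that both delegates in println() and fans out in
# print() writes every line to the extra streams twice.
class Printer:
    def __init__(self, out):
        self.out = out

    def print(self, s):
        self.out.write(s)

    def println(self, s):
        self.print(s + "\n")          # re-enters the subclass override

class MultiPrinter(Printer):
    def __init__(self, out, extras):
        super().__init__(out)
        self.extras = extras

    def print(self, s):
        super().print(s)
        for stream in self.extras:    # runs again for the super() call above
            stream.write(s)

    def println(self, s):
        for stream in self.extras:    # first copy written here...
            stream.write(s + "\n")
        super().println(s)            # ...second copy via print() above

main, extra = io.StringIO(), io.StringIO()
MultiPrinter(main, [extra]).println("hello")
assert extra.getvalue() == "hello\nhello\n"  # duplicated, as in the tmp file
assert main.getvalue() == "hello\n"
```

Composition (each delegate held as a field, no super re-entry) avoids the double write.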





[jira] [Updated] (CASSANDRA-9454) Log WARN on Multi Partition IN clause Queries

2016-09-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-9454:

Status: Awaiting Feedback  (was: In Progress)

> Log WARN on Multi Partition IN clause Queries
> -
>
> Key: CASSANDRA-9454
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9454
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Sebastian Estevez
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.2.x
>
>
> Similar to CASSANDRA-6487 but for multi-partition queries.
> Show a warning (ideally at the client, CASSANDRA-8930) when users issue IN 
> clauses whose clustering columns span multiple partitions. The right way to 
> go is async requests per partition.
> **Update**: unless the query is at CL.ONE and all the partition ranges are on 
> the node, in which case a multi-partition IN is okay.
> This can cause an OOM:
> {code}
> ERROR [Thread-388] 2015-05-18 12:11:10,147 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-388,5,main]
> java.lang.OutOfMemoryError: Java heap space
> ERROR [ReadStage:321] 2015-05-18 12:11:10,147 CassandraDaemon.java (line 199) 
> Exception in thread Thread[ReadStage:321,5,main]
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
> at 
> org.apache.cassandra.io.util.MappedFileDataInput.readBytes(MappedFileDataInput.java:146)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo.deserialize(IndexHelper.java:187)
> at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:122)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:970)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:871)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:41)
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> {code}
> By flooding heap with:
> {code}org.apache.cassandra.io.sstable.IndexHelper$IndexInfo{code}
> taken from:
> http://stackoverflow.com/questions/30366729/out-of-memory-error-in-cassandra-when-querying-big-rows-containing-a-collection
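The "async requests per partition" recommendation can be sketched as follows; `fetch_partition` is a hypothetical stand-in for a real driver call, the point being that each partition is fetched with its own small query and merged client-side rather than funneled through one coordinator-buffered IN query:

```python
import asyncio

# Sketch of "async requests per partition": fetch each partition key with
# its own small query and merge client-side, instead of one coordinator
# buffering a giant multi-partition IN. fetch_partition is a hypothetical
# stand-in for a real driver call.
async def fetch_partition(key):
    await asyncio.sleep(0)                     # placeholder for network I/O
    return [(key, "row")]

async def select_in(keys):
    per_partition = await asyncio.gather(*(fetch_partition(k) for k in keys))
    return [row for rows in per_partition for row in rows]

rows = asyncio.run(select_in([1, 2, 3]))
print(rows)  # [(1, 'row'), (2, 'row'), (3, 'row')]
```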





[jira] [Updated] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-09-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11537:
-
Status: Awaiting Feedback  (was: In Progress)

[~appodictic] can you take a look?

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I need a clearer 
> message when I issue a nodetool command that the server is not ready for, so 
> that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen.
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
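The requested behavior amounts to replacing the bare assert with an explicit readiness check that survives disabled assertions and produces a readable message. A hedged sketch (the class and method names mirror the stack trace above, but this is not the actual patch):

```python
# Hedged sketch, not the actual patch: replace the bare assert with an
# explicit readiness check that still fires with -disableassertions and
# gives nodetool users a readable message.
class ServerNotReadyError(RuntimeError):
    pass

class StorageService:
    def __init__(self):
        self.setup_completed = False   # flipped at the end of startup

    def upgrade_sstables(self, keyspace):
        if not self.setup_completed:
            raise ServerNotReadyError(
                "server is still starting up; retry when initialization completes")
        return "upgrading " + keyspace

svc = StorageService()
try:
    svc.upgrade_sstables("ks1")
except ServerNotReadyError as e:
    print(e)  # clear error instead of a bare AssertionError
```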





[jira] [Commented] (CASSANDRA-9625) GraphiteReporter not reporting

2016-09-02 Thread Loic Lambiel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457863#comment-15457863
 ] 

Loic Lambiel commented on CASSANDRA-9625:
-

How can we go ahead and fix this annoying bug?

Running Cassandra 2.1.13, it happens randomly when there is a certain number of 
compactions queued or running on nodes. There is nothing in the log at the time 
it stops reporting the metrics. It also happens when there is no repair in 
progress.

> GraphiteReporter not reporting
> --
>
> Key: CASSANDRA-9625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9625
> Project: Cassandra
>  Issue Type: Bug
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
>Assignee: T Jake Luciani
> Attachments: Screen Shot 2016-04-13 at 10.40.58 AM.png, metrics.yaml, 
> thread-dump.log
>
>
> When upgrading from 2.1.3 to 2.1.6, the Graphite metrics reporter stops 
> working.  The usual startup is logged, and one batch of samples is sent, but 
> the reporting interval comes and goes, and no other samples are ever sent.  
> The logs are free from errors.
> Frustratingly, metrics reporting works in our smaller (staging) environment 
> on 2.1.6; we are able to reproduce this on all 6 production nodes, but not 
> on a 3-node (otherwise identical) staging cluster (maybe it takes a certain 
> level of concurrency?).
> Attached is a thread dump, and our metrics.yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-09-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457835#comment-15457835
 ] 

Stefania commented on CASSANDRA-12423:
--

Both dtest jobs were aborted (they timed out when collecting the artifacts) and 
so I restarted them about an hour ago.  I've just restarted the utests for 3.0 
as well.

Thanks for the review and for improving the patch, I'll take care of committing 
once CI is green, or post another update in case of issues.

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}
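The raw column identifiers {{6331}} and {{6332}} shown by 3.7 are the hex 
encodings of the original column-name bytes ({{c1}} and {{c2}}). A small 
illustrative helper (not a Cassandra tool) to decode them:

```java
public class HexName {
    // Decode a hex-encoded ASCII byte string back to its text form.
    static String decode(String hex) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < hex.length(); i += 2)
            sb.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("6331")); // c1
        System.out.println(decode("6332")); // c2
    }
}
```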



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-09-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457807#comment-15457807
 ] 

Sylvain Lebresne commented on CASSANDRA-12423:
--

bq. I merely removed {{prefixValues}}, which was no longer required

Sorry for forgetting that one.

bq. Rebased, squashed, and cherry-picked for trunk.

Thanks. With that, I'm personally +1 on the patches as is once we have green 
CI. On that front, the dtests are still running but the 3.0 testall run has 3 
weird failures. They pretty clearly seem unrelated to this issue, but it would 
be nice to re-run the job to make sure it's not a persistent problem with that 
branch.

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12237) Cassandra stress graphing is broken

2016-09-02 Thread Christopher Batey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385861#comment-15385861
 ] 

Christopher Batey edited comment on CASSANDRA-12237 at 9/2/16 7:36 AM:
---

Branch here: https://github.com/chbatey/cassandra-1/tree/stress-graph-logging

* The temporary results log contained each line three times
* None of the aggregate metrics showed up due to name changes in the logs/stats 
json
* Auto run results were all the same color because the graph code was looking 
for a log line, saying the user hadn't specified a thread count, that was no 
longer there (all results ended up with the same name)
* Only the first aggregates were shown for a multi run

was (Author: chbatey):
Branch here: https://github.com/chbatey/cassandra-1/tree/stress-graph-logging 

Commit msg explains all the changes. I found a few more issues before I could 
get graphs working again.

> Cassandra stress graphing is broken
> ---
>
> Key: CASSANDRA-12237
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12237
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Christopher Batey
> Fix For: 3.x
>
>
> Cassandra stress relies on a tmp file with the stress output so it can parse 
> it and put it into the graph html.
> However the contents of this file are now broken:
> {code}
> Sleeping 2s...Sleeping 2s...
> Sleeping 2s...
> Warming up WRITE with 5 iterations...Warming up WRITE with 5 
> iterations...
> Warming up WRITE with 5 iterations...
> Running WRITE with 500 threads 10 secondsRunning WRITE with 500 threads 10 
> seconds
> Running WRITE with 500 threads 10 seconds
> ...
> {code}
> This is because we create a {{MultiPrintStream}} that inherits from 
> {{PrintStream}} and then delegates the call to super as well as to a list 
> of other PrintStreams.
> The call to super for println comes back into our print method, so every line 
> gets logged multiple times as we run the for (PrintStream s: newStreams) loop 
> several times.
> We can change this to use composition with our own interface if we want to 
> use a composite for logging the results.
> This results in the parsing of this file not quite working and the aggregate 
> stats not working in produced graphs.
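The composition approach described above can be sketched as follows (the names 
are hypothetical, not the actual stress-tool classes): a composite that writes 
to several PrintStreams without inheriting from PrintStream, so println is 
never re-entered through super and each line reaches each sink exactly once.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CompositeOutputDemo {
    // Write one line to every sink; no super call, so no re-entrant println.
    static void printlnAll(String line, PrintStream... sinks) {
        for (PrintStream s : sinks)
            s.println(line); // exactly one write per sink
    }

    public static void main(String[] args) {
        ByteArrayOutputStream a = new ByteArrayOutputStream();
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        printlnAll("Running WRITE with 500 threads 10 seconds",
                   new PrintStream(a), new PrintStream(b));
        // Each sink contains the line exactly once, not three times.
        System.out.println(a.toString().equals(b.toString()));
    }
}
```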



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12594) sstabledump fails on frozen collection cells

2016-09-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12594:
-
Status: Open  (was: Patch Available)

> sstabledump fails on frozen collection cells
> 
>
> Key: CASSANDRA-12594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Andy Tolbert
>Priority: Minor
> Fix For: 3.0.9, 3.9
>
> Attachments: CASSANDRA-12594-3.0.txt, CASSANDRA-12594-3.0_2.txt
>
>
> sstabledump throws an exception when attempting to parse a cell that is a 
> frozen collection, i.e.:
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "liveness_info" : { "tstamp" : "2016-09-01T22:06:45.670810Z" },
> "cells" : [
>   { "name" : "m", "value" }
> ] }
> ] }
> ]Exception in thread "main" java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:613)
>   at 
> org.apache.cassandra.db.marshal.TupleType.getString(TupleType.java:211)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:441)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:375)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:279)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:214)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>   at java.util.Iterator.forEachRemaining(Iterator.java:116)
>   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:102)
>   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:242)
> {noformat}
> This is because the code doesn't consider that the cell may be a frozen 
> collection, and attempts to get the string representation using the value 
> type which doesn't work.
> Example data:
> {noformat}
> CREATE KEYSPACE simple WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> CREATE TABLE simple.unfrozen_map (
> k int PRIMARY KEY,
> m map<text, frozen<tuple<text, text>>>
> );
> CREATE TABLE simple.frozen_map (
> k int PRIMARY KEY,
> m frozen<map<text, tuple<text, text>>>
> );
> insert into unfrozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> insert into frozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> {noformat}
> unfrozen_map will properly dump each cell individually, but frozen_map fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12594) sstabledump fails on frozen collection cells

2016-09-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457770#comment-15457770
 ] 

Sylvain Lebresne commented on CASSANDRA-12594:
--

To be honest, I think the underlying problem is in 
{{ColumnDefinition.cellValueType()}}. It pretty clearly doesn't behave 
correctly for frozen collections, where it doesn't return the "type of the 
cell value". Now, that method is used in a few other places, so we should take 
a closer look at why this hasn't been a problem in those other places (and if 
it is a problem there, why we haven't found it), but it really looks fishy as 
it is and I'd rather fix it than have sstabledump work around what is clearly 
not right.
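A hypothetical sketch of the distinction (these names are illustrative, not 
Cassandra's actual API): in a multi-cell collection each element occupies its 
own cell, so the cell value type is the element type, whereas a frozen 
collection is serialized whole into a single cell, so the cell value type 
should be the collection type itself.

```java
public class CellValueTypeSketch {
    enum Kind { MULTI_CELL, FROZEN }

    // For a multi-cell map, each cell stores one map value; for a frozen
    // map, one cell stores the entire serialized map.
    static String cellValueType(Kind kind, String elementType, String collectionType) {
        return kind == Kind.MULTI_CELL ? elementType : collectionType;
    }

    public static void main(String[] args) {
        System.out.println(cellValueType(Kind.MULTI_CELL, "int", "map<text, int>"));
        System.out.println(cellValueType(Kind.FROZEN, "int", "map<text, int>"));
    }
}
```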

> sstabledump fails on frozen collection cells
> 
>
> Key: CASSANDRA-12594
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12594
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andy Tolbert
>Assignee: Andy Tolbert
>Priority: Minor
> Fix For: 3.0.9, 3.9
>
> Attachments: CASSANDRA-12594-3.0.txt, CASSANDRA-12594-3.0_2.txt
>
>
> sstabledump throws an exception when attempting to parse a cell that is a 
> frozen collection, i.e.:
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18,
> "liveness_info" : { "tstamp" : "2016-09-01T22:06:45.670810Z" },
> "cells" : [
>   { "name" : "m", "value" }
> ] }
> ] }
> ]Exception in thread "main" java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:613)
>   at 
> org.apache.cassandra.db.marshal.TupleType.getString(TupleType.java:211)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeCell(JsonTransformer.java:441)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeColumnData(JsonTransformer.java:375)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializeRow(JsonTransformer.java:279)
>   at 
> org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:214)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>   at java.util.Iterator.forEachRemaining(Iterator.java:116)
>   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:102)
>   at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:242)
> {noformat}
> This is because the code doesn't consider that the cell may be a frozen 
> collection, and attempts to get the string representation using the value 
> type which doesn't work.
> Example data:
> {noformat}
> CREATE KEYSPACE simple WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': '1'};
> CREATE TABLE simple.unfrozen_map (
> k int PRIMARY KEY,
> m map<text, frozen<tuple<text, text>>>
> );
> CREATE TABLE simple.frozen_map (
> k int PRIMARY KEY,
> m frozen<map<text, tuple<text, text>>>
> );
> insert into unfrozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> insert into frozen_map (k, m) values (0, {'a': ('b', 'c'), 'd': ('e', 'f'), 
> 'g': ('h', 'i')});
> {noformat}
> unfrozen_map will properly dump each cell individually, but frozen_map fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12457) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test

2016-09-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457760#comment-15457760
 ] 

Stefania commented on CASSANDRA-12457:
--

The multiplex 
[run|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/64]
 for testing the default execute-delayed-task policy completed without 
failures, confirming that there is a problem with non-periodic tasks being 
skipped when this policy is set to false and shutdown is called.

I've also changed the order in SS drain and stopped the compactions and the 
batchlog manager before flushing system tables, for the reasons discussed 
above. Further, I've enhanced the compaction manager to handle rejected executions.

Here is the patch for 2.2 with ordinary CI:

|2.2|[patch|https://github.com/stef1927/cassandra/commits/12457-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12457-2.2-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12457-2.2-dtest/]|

The multiplex run for the latest patch is 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/65/].
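If the policy in question corresponds to the JDK's 
{{ScheduledThreadPoolExecutor#setExecuteExistingDelayedTasksAfterShutdownPolicy}} 
(an assumption; Cassandra's own executors may wrap this), the described 
behavior of delayed tasks being skipped can be reproduced in isolation:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DelayedTaskPolicyDemo {
    public static void main(String[] args) throws Exception {
        ScheduledThreadPoolExecutor ex = new ScheduledThreadPoolExecutor(1);
        // When false, tasks already scheduled with a delay are dropped
        // as soon as shutdown() is called.
        ex.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
        AtomicBoolean ran = new AtomicBoolean(false);
        ex.schedule(() -> ran.set(true), 100, TimeUnit.MILLISECONDS);
        ex.shutdown();
        ex.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("task ran: " + ran.get()); // task ran: false
    }
}
```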

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x.bug_5732_test
> 
>
> Key: CASSANDRA-12457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12457
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.x
>
> Attachments: 12457_2.1_logs_with_allocation_stacks.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_1.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_2.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_3.tar.gz, 
> 12457_2.2_logs_with_allocation_stacks_4.tar.gz, node1.log, node1_debug.log, 
> node1_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_upgrade/16/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_1_x_To_indev_2_2_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 216, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 666, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: Upgrade test beginning, 
> setting CASSANDRA_VERSION to 2.1.15, and jdk to 8. (Prior values will be 
> restored after test).\ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: [[Row(table_name=u'ks', index_name=u'test.testindex')], 
> [Row(table_name=u'ks', index_name=u'test.testindex')]]\ndtest: DEBUG: 
> upgrading node1 to git:91f7387e1f785b18321777311a5c3416af0663c2\nccm: INFO: 
> Fetching Cassandra updates...\ndtest: DEBUG: Querying upgraded node\ndtest: 
> DEBUG: Querying old node\ndtest: DEBUG: removing ccm cluster test at: 
> /mnt/tmp/dtest-D8UF3i\ndtest: DEBUG: clearing ssl stores from 
> [/mnt/tmp/dtest-D8UF3i] directory\n- >> end captured 
> logging << -"
> {code}
> {code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:91f7387e1f785b18321777311a5c3416af0663c2
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@73deb57f) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2098812276:/mnt/tmp/dtest-D8UF3i/test/node1/data1/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-4
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7926de0f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1009016655:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-08-13 01:34:34,581 Ref.java:1

[jira] [Commented] (CASSANDRA-12588) Cannot find column durable_wrıtes

2016-09-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457753#comment-15457753
 ] 

Sylvain Lebresne commented on CASSANDRA-12588:
--

That's a bit light on information to provide much help. At the very least, it 
would be useful to know the Cassandra version you're using and how you 
installed it, as well as to have the system.log file (as it should contain a 
bit more information).
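For what it's worth, the dotless ı in the reported column name hints at a 
locale issue: under a Turkish default locale, Java's locale-sensitive 
{{toLowerCase()}} maps 'I' to 'ı', which can corrupt identifiers. A minimal 
sketch of that effect (assuming this is the cause, which the reporter's logs 
would need to confirm):

```java
import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // In a Turkish locale, 'I' lowercases to dotless 'ı'.
        System.out.println("DURABLE_WRITES".toLowerCase(new Locale("tr"))); // durable_wrıtes
        // Locale.ROOT gives the locale-independent result.
        System.out.println("DURABLE_WRITES".toLowerCase(Locale.ROOT));      // durable_writes
    }
}
```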

> Cannot find column durable_wrıtes
> -
>
> Key: CASSANDRA-12588
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12588
> Project: Cassandra
>  Issue Type: Bug
>Reporter: LLc.
>
> help please
> run :
> cassandra -f
> ERROR 17:00:16 Exception encountered during startup
> java.lang.AssertionError: Cannot find column durable_wrıtes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)