[jira] [Commented] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster

2016-04-20 Thread Eduard Tudenhoefner (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251319#comment-15251319
 ] 

Eduard Tudenhoefner commented on CASSANDRA-11615:
-

[~tjake] just out of curiosity, is it possible that this performance penalty 
happens "only" during the warmup phase? Or could it also happen during both 
the warmup and the main performance run?

> cassandra-stress blocks when connecting to a big cluster
> 
>
> Key: CASSANDRA-11615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11615
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x
>
> Attachments: 11615-3.0.patch
>
>
> I had a *100* node cluster and was running 
> {code}
> cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' 
> 'limit=1000/s'
> {code}
> Based on the thread dump it looks like it is blocked at 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96
> {code}
> "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting 
> for monitor entry [0x7f36cc788000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting 
> for monitor entry [0x7f36cc889000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> {code}
> I tried the same with a smaller cluster (50 nodes) and it worked fine.
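The thread dump points at a synchronized {{prepare()}} guarding a shared map, so all consumer threads serialize on one monitor while the (network-bound) prepare runs. The usual cure is to prepare outside any lock and publish the result atomically. A minimal sketch of that pattern in Python, with hypothetical names — an illustration of the technique, not the actual JavaDriverClient fix:

```python
import threading

class StatementCache:
    """Cache expensive 'prepare' results per query string without holding a
    global lock for the duration of the slow, network-bound prepare call."""

    def __init__(self, prepare_fn):
        self._prepare_fn = prepare_fn   # slow call, e.g. a driver round-trip
        self._cache = {}
        self._lock = threading.Lock()   # held only for dict bookkeeping

    def get(self, query):
        # Fast path: no lock at all once the statement is cached.
        stmt = self._cache.get(query)
        if stmt is not None:
            return stmt
        # Slow path: prepare OUTSIDE the lock, publish atomically under it.
        # Two racing threads may both prepare, but neither blocks readers.
        prepared = self._prepare_fn(query)
        with self._lock:
            return self._cache.setdefault(query, prepared)

calls = []
cache = StatementCache(lambda q: calls.append(q) or f"prepared:{q}")
first = cache.get("SELECT 1")   # slow path, prepares once
second = cache.get("SELECT 1")  # fast path, cache hit
```

The trade-off: an occasional duplicate prepare under a race, in exchange for never blocking 20 threads behind one monitor.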



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251314#comment-15251314
 ] 

Stefania commented on CASSANDRA-11574:
--

I've installed Python 2.7.6 from a tarball, by following the instructions in 
the first answer of this 
[link|http://askubuntu.com/questions/101591/how-do-i-install-python-2-7-2-on-ubuntu],
 still no luck I'm afraid.

I'm really in the dark here. Let's try replacing the problematic line with 
these two lines; maybe we get more clues:

{code}
num_processes = ImportTask.get_num_processes(16)
copy_options['numprocesses'] = int(opts.pop('numprocesses', num_processes))
{code}

As a workaround, you can also set {{num_processes}} to a reasonable default. 
{{get_num_processes()}} returns the number of cores on the machine minus 1, 
capped at 16, so you could set {{num_processes = 3}} on a 4-core machine, for 
example.
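For reference, the default computation described above can be sketched in a few lines (function name hypothetical; the floor of 1 is my assumption, not stated in the ticket):

```python
import multiprocessing

def default_num_processes(cap=16):
    """Mirror the behaviour described for get_num_processes():
    number of cores minus 1, capped at `cap`, with an assumed floor of 1."""
    cores = multiprocessing.cpu_count()
    return max(1, min(cores - 1, cap))

n = default_num_processes()
```

On a 4-core machine this yields 3, matching the suggested workaround value.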



> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251237#comment-15251237
 ] 

Stefania commented on CASSANDRA-11574:
--

OK - let me see if I can reproduce using a Python tarball installation. My 
installation is based on the default Ubuntu packages: {{apt-get install -y 
python python-pip python-dev python-setuptools}}, the version is also 2.7.6.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251231#comment-15251231
 ] 

Stefania commented on CASSANDRA-11137:
--

LGTM, committed as ae063e806191f8285f1f3bcab068b2c4bfbc257b.

The failing json dtests pass locally provided the dtest patch is applied. For 
future reference, you can launch a CASSCI job with a different dtest repo and 
branch as follows:

{code}
!build my-job DTEST_REPO=my-gh-repo DTEST_BRANCH=my-branch
{code}

I've created the dtest pull request 
[here|https://github.com/riptano/cassandra-dtest/pull/944], I'll let the test 
eng team review it before merging it.

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?
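The ambiguity is easy to demonstrate outside Cassandra. A sketch in Python of the same two renderings (names illustrative; this mimics the formatting behaviour, it is not Cassandra code):

```python
from datetime import datetime, timezone

# A timestamp known to be UTC.
ts = datetime(2016, 1, 4, 15, 5, 47, 123000, tzinfo=timezone.utc)

# Without a zone designator the string is ambiguous — a reader cannot tell
# whether it is UTC or some local time:
ambiguous = ts.strftime("%Y-%m-%d %H:%M:%S.") + f"{ts.microsecond // 1000:03d}"

# Appending the UTC offset removes the ambiguity:
unambiguous = ambiguous + ts.strftime("%z")
```

With the offset attached, "2016-01-04 15:05:47.123+0000" is unambiguous regardless of the reader's timezone.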





[jira] [Updated] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11137:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.6
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?





cassandra git commit: JSON datetime formatting needs timezone

2016-04-20 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5062bb6eb -> ae063e806


JSON datetime formatting needs timezone

patch by Alex Petrov; reviewed by Stefania Alborghetti for CASSANDRA-11137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae063e80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae063e80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae063e80

Branch: refs/heads/trunk
Commit: ae063e806191f8285f1f3bcab068b2c4bfbc257b
Parents: 5062bb6
Author: Alex Petrov 
Authored: Mon Apr 18 18:59:04 2016 +0200
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 12:02:47 2016 +0800

--
 CHANGES.txt | 1 +
 .../apache/cassandra/serializers/TimestampSerializer.java   | 9 ++---
 .../apache/cassandra/cql3/validation/entities/JsonTest.java | 6 --
 3 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae063e80/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9f67555..1b94b2d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * JSON datetime formatting needs timezone (CASSANDRA-11137)
  * Add support to rebuild from specific range (CASSANDRA-10409)
  * Optimize the overlapping lookup by calculating all the
bounds in advance (CASSANDRA-11571)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae063e80/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index fbd98d1..9bd9a8d 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -49,11 +49,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
"yyyy-MM-dd HH:mm:ssX",
"yyyy-MM-dd HH:mm:ssXX",
"yyyy-MM-dd HH:mm:ssXXX",
-"yyyy-MM-dd HH:mm:ss.SSS",   // TO_JSON_FORMAT
+"yyyy-MM-dd HH:mm:ss.SSS",
"yyyy-MM-dd HH:mm:ss.SSS z",
"yyyy-MM-dd HH:mm:ss.SSS zz",
"yyyy-MM-dd HH:mm:ss.SSS zzz",
-"yyyy-MM-dd HH:mm:ss.SSSX",
+"yyyy-MM-dd HH:mm:ss.SSSX", // TO_JSON_FORMAT
"yyyy-MM-dd HH:mm:ss.SSSXX",
"yyyy-MM-dd HH:mm:ss.SSSXXX",
"yyyy-MM-dd'T'HH:mm",
@@ -108,11 +108,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
+private static final String TO_JSON_FORMAT = dateStringPatterns[19];
private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
 {
 protected SimpleDateFormat initialValue()
 {
-return new SimpleDateFormat(dateStringPatterns[15]);
+SimpleDateFormat sdf = new SimpleDateFormat(TO_JSON_FORMAT);
+sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
+return sdf;
 }
 };
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae063e80/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
index 43d5309..08bb8a7 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
@@ -618,8 +618,10 @@ public class JsonTest extends CQLTester
 assertRows(execute("SELECT k, toJson(timeval) FROM %s WHERE k = ?", 
0), row(0, "\"00:00:00.00123\""));
 
 //  timestamp 
-execute("INSERT INTO %s (k, timestampval) VALUES (?, ?)", 0, new 
SimpleDateFormat("y-M-d").parse("2014-01-01"));
-assertRows(execute("SELECT k, toJson(timestampval) FROM %s WHERE k = 
?", 0), row(0, "\"2014-01-01 00:00:00.000\""));
+SimpleDateFormat sdf = new SimpleDateFormat("y-M-d");
+sdf.setTimeZone(TimeZone.getTimeZone("UDT"));
+execute("INSERT INTO %s (k, timestampval) VALUES (?, ?)", 0, 
sdf.parse("2014-01-01"));
+assertRows(execute("SELECT k, toJson(timestampval) FROM %s WHERE k = 
?", 0), row(0, "\"2014-01-01 00:00:00.000Z\""));
 
 //  timeuuid 
 execute("INSERT INTO %s (k, timeuuidval) VALUES (?, ?)", 0, 
UUID.fromString("6bddc89a-5644-11e4-97fc-56847afe9799"));



[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Nandakishore Arvapaly (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251210#comment-15251210
 ] 

Nandakishore Arvapaly commented on CASSANDRA-11574:
---

I tried giving the column names too, but I still get the same error that I 
listed earlier with the --debug option. The order of columns in the csv file 
and in the copy command matches.

copy moviesdb.movie(movieid,moviename,releasedyear) from 
'/root/cassandra_data/moviebyyear.csv';

The Python version I'm using is 2.7.6; I installed Python from a tarball.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Comment Edited] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251194#comment-15251194
 ] 

Stefania edited comment on CASSANDRA-11574 at 4/21/16 3:46 AM:
---

Try using this command: {{copy ks.movie (movieid, moviename, releasedyear) from 
'movies.csv';}}. Otherwise the columns in the table and file do not match and 
it should fail, as expected, with {{Failed to import 1 rows: ParseError - 
invalid literal for int() with base 10: 'Sabrina',  given up without retries}} 
and so forth for each row.

To see the column order in the table you can issue {{DESCRIBE TABLE 
table_name}}. The release year is moved to the first column because it is the 
partition key, even though it is the third column in the CREATE TABLE command.

Assuming the column names are specified, it works fine for me. Could you 
confirm? If it still fails, what is your Python implementation and version?
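The reordering described above — partition key columns first, then clustering columns, then the remaining columns — can be sketched as follows. The helper and its inputs are stand-ins for illustration, not the driver's actual metadata API:

```python
def describe_column_order(partition_key, clustering, others):
    """Columns as DESCRIBE TABLE reports them: partition key columns first,
    then clustering columns, then the remaining (regular) columns sorted."""
    return list(partition_key) + list(clustering) + sorted(others)

# CREATE TABLE movie (movieid int, moviename text, releasedyear int,
#                     PRIMARY KEY (releasedyear, movieid, moviename))
# releasedyear is the partition key, so DESCRIBE lists it first even though
# it is the third column in the CREATE TABLE statement.
order = describe_column_order(
    partition_key=["releasedyear"],
    clustering=["movieid", "moviename"],
    others=[],
)
```

This is why a headerless CSV written in CREATE TABLE order does not line up with the table's column order, and why naming the columns in COPY FROM fixes the mismatch.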


was (Author: stefania):
Try using this command: {{copy ks.movie (movieid, moviename, releasedyear) from 
'movies.csv';}}. Otherwise the columns in the table and file do not match and 
it should fail, as expected, with {{Failed to import 1 rows: ParseError - 
invalid literal for int() with base 10: 'Sabrina',  given up without retries}} 
and so forth for each row.

To see the column order in the table you can issue {{DESCRIBE TABLE 
table_name}}. The release year is moved to the first column because it is the 
partition key, even though it is the third column in the CREATE TABLE command.

Assuming the column names are specified, it works fine for me. Could you 
confirm? If it still fails, what is your Python implementation and version?

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251194#comment-15251194
 ] 

Stefania commented on CASSANDRA-11574:
--

Try using this command: {{copy ks.movie (movieid, moviename, releasedyear) from 
'movies.csv';}}. Otherwise the columns in the table and file do not match and 
it should fail, as expected, with {{Failed to import 1 rows: ParseError - 
invalid literal for int() with base 10: 'Sabrina',  given up without retries}} 
and so forth for each row.

To see the column order in the table you can issue {{DESCRIBE TABLE 
table_name}}. The release year is moved to the first column because it is the 
partition key, even though it is the third column in the CREATE TABLE command.

Assuming the column names are specified, it works fine for me. Could you 
confirm? If it still fails, what is your Python implementation and version?

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Resolved] (CASSANDRA-11616) cassandra very high cpu rate

2016-04-20 Thread PengtaoGeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PengtaoGeng resolved CASSANDRA-11616.
-
Resolution: Not A Problem

Not a Cassandra problem; it was my fault for how I used the secondary index.
I found that my application issues this CQL: SELECT * FROM userlabel WHERE 
phone = ''.
This query selects far too many rows, because many rows in table "userlabel" 
have an empty phone and all of them are indexed.

This may also be the main cause of "Stack overflow when querying 2ndary 
index" (CASSANDRA-11304): too many rows need to be queried, and the error 
then occurs through recursive invocation.
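The low-cardinality pitfall is easy to quantify: a secondary index is conceptually a map from indexed value to matching rows, so every row sharing a common value (here, the empty phone) piles up under one index entry. A toy illustration (simplified model, not Cassandra's on-disk index format):

```python
from collections import defaultdict

# 100,000 rows; every 10th row has an empty phone (a common real-world gap).
rows = [{"id": str(i), "phone": f"555-{i:04d}" if i % 10 else ""}
        for i in range(100_000)]

# Conceptual secondary index: indexed value -> list of matching row ids.
index = defaultdict(list)
for row in rows:
    index[row["phone"]].append(row["id"])

# Querying a rare value touches one row; querying the common empty value
# drags 10% of the entire table through the index in a single query.
empty_matches = len(index[""])
```

A WHERE phone = '' query against this data would have to materialize 10,000 rows, which is exactly the kind of workload that pegs the read path at 100% CPU.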

> cassandra very high cpu rate
> 
>
> Key: CASSANDRA-11616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS release 6.4
> 4 nodes cluster
> cassandra 3.0.5
> nodetool cfstats mykeyspace shows the table data volume: Number of keys 
> (estimate): 77570676
>Reporter: PengtaoGeng
>Assignee: Sam Tunnicliffe
> Attachments: Image.png
>
>
> Even at a very low query rate, CPU utilization is 100%.
> Queries use only the partition key or the secondary index.
> Below is the table definition (desc table):
> CREATE TABLE mykeyspace.userlabel (
> id text PRIMARY KEY,
> cardno text,
> phone text,
> ccount text,
> username text
> ) ;
> CREATE INDEX userlabel_phone ON mykeyspace.userlabel (phone)
> top -H and jstack show that the high-CPU threads all come from 
> "SharedPool-Worker".
> Here is the jstack info for one thread:
> {quote}
> "SharedPool-Worker-28" #205 daemon prio=5 os_prio=0 tid=0x7f1610cc8780 
> nid=0xe7c0 runnable [0x7f0ed566f000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.cassandra.utils.MurmurHash.hash3_x64_128(MurmurHash.java:191)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.getHash(Murmur3Partitioner.java:181)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.decorateKey(Murmur3Partitioner.java:53)
> at 
> org.apache.cassandra.db.PartitionPosition$ForKey.get(PartitionPosition.java:49)
> at 
> org.apache.cassandra.db.marshal.PartitionerDefinedOrder.compareCustom(PartitionerDefinedOrder.java:93)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158)
> at 
> org.apache.cassandra.db.ClusteringComparator.compareComponent(ClusteringComparator.java:166)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:137)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:126)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:44)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:378)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.replaceAndSink(MergeIterator.java:266)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:130)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.hasNext(CompositesSearcher.java:83)
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65)
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289)
> at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 

[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Nandakishore Arvapaly (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251149#comment-15251149
 ] 

Nandakishore Arvapaly commented on CASSANDRA-11574:
---

Hi Stefania,

I tried what you suggested but still hit the same issue.


Here is my simple schema for table "movie"

CREATE TABLE movie (
movieid int,
moviename text,
releasedyear int,
PRIMARY KEY(releasedyear,movieid,moviename)
);

My sample data

1,Toy Story,1995
2,Jumanji,1995
3,Grumpier Old Men,1995
4,Waiting to Exhale,1995
5,Father of the Bride Part II,1995
6,Heat,1995
7,Sabrina,1995
8,Tom and Huck,1995
9,Sudden Death,1995
10,GoldenEye,1995

The command I used to get into cqlsh

cqlsh 172.17.3.77 9042 --debug


The COPY command I executed

copy movie from '/root/cassandra_data/moviebyyear.csv';


The error I got
cqlsh:moviesdb> copy movie12 from '/root/cassandra_data/moviebyyear12.csv';
Traceback (most recent call last):
  File "/usr/bin/cqlsh.py", line 1191, in onecmd
self.handle_statement(st, statementtext)
  File "/usr/bin/cqlsh.py", line 1228, in handle_statement
return custom_handler(parsed)
  File "/usr/bin/cqlsh.py", line 1937, in do_copy
task = ImportTask(self, ks, table, columns, fname, opts, 
DEFAULT_PROTOCOL_VERSION, CONFIG_FILE)
  File "cqlshlib/copyutil.py", line 1052, in 
cqlshlib.copyutil.ImportTask.__init__ (cqlshlib/copyutil.c:27710)
CopyTask.__init__(self, shell, ks, table, columns, fname, opts, 
protocol_version, config_file, 'from')
  File "cqlshlib/copyutil.py", line 219, in cqlshlib.copyutil.CopyTask.__init__ 
(cqlshlib/copyutil.c:9708)
self.options = self.parse_options(opts, direction)
  File "cqlshlib/copyutil.py", line 320, in 
cqlshlib.copyutil.CopyTask.parse_options (cqlshlib/copyutil.c:11850)
copy_options['numprocesses'] = int(opts.pop('numprocesses', 
self.get_num_processes(16)))
TypeError: get_num_processes() takes no keyword arguments
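A note on the error class itself: the compiled (Cython) build of {{copyutil}} rejects keyword arguments to {{get_num_processes}}, which is why calling it positionally, as in {{ImportTask.get_num_processes(16)}}, avoids the failure. Plain Python can exhibit the same failure mode with positional-only parameters (Python 3.8+); a contrived analogue, not the actual copyutil code:

```python
def get_num_processes_demo(cap, /):
    """The '/' makes `cap` positional-only: a keyword call raises the same
    class of TypeError as the cqlsh traceback above."""
    return cap

ok = get_num_processes_demo(16)      # positional call works
try:
    get_num_processes_demo(cap=16)   # keyword call is rejected
    rejected = False
except TypeError:
    rejected = True
```

The lesson for callers: when a function lives in compiled code, do not assume its parameters are addressable by name.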


> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4





[jira] [Updated] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11137:
-
Reviewer: Stefania

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?





[jira] [Updated] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11549:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   2.2.7
   3.0.6
   3.6
   Status: Resolved  (was: Ready to Commit)

Release tag 2.1.15 doesn't exist yet, so I'm leaving the fixed version as 
2.1.x, to be set to 2.1.15 later.

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.6, 2.1.x, 3.0.6, 2.2.7
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}
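The conversion being fixed can be sketched in a few lines of Python — a simplification of what cqlsh's copyutil does, not its actual code. An empty CSV field should map to null rather than being fed to {{int()}}:

```python
import csv
import io

def to_int_or_none(field):
    # An empty CSV field means "no value" — import it as null instead of
    # letting int('') raise "invalid literal for int() with base 10: ''".
    return int(field) if field != "" else None

# The same three rows as test.csv above.
data = "a,10,20\nb,30,\nc,50,60\n"
rows = [(t, to_int_or_none(i1), to_int_or_none(i2))
        for t, i1, i2 in csv.reader(io.StringIO(data))]
```

With this conversion, row "b" imports as {{(b, 30, null)}} instead of being dropped with a ParseError.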





[jira] [Commented] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251131#comment-15251131
 ] 

Stefania commented on CASSANDRA-11549:
--

Thanks, committed as c8914c0725f190df6e522179157850ebc855fdcb and dtest pull 
request merged.

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}





[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-20 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5062bb6e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5062bb6e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5062bb6e

Branch: refs/heads/trunk
Commit: 5062bb6eb05ae9fe376b7738c5170ff92e48636f
Parents: 2b08690 bb68078
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:18:26 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:18:26 2016 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5062bb6e/CHANGES.txt
--
diff --cc CHANGES.txt
index f6ec738,eb2405c..9f67555
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -70,14 -16,10 +70,15 @@@ Merged from 2.2
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
- 
+ Merged from 2.1:
+  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
  
 -3.0.5
 +3.5
 + * StaticTokenTreeBuilder should respect possibility of duplicate tokens 
(CASSANDRA-11525)
 + * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
 + * Avoid index segment stitching in RAM which lead to OOM on big SSTable 
files (CASSANDRA-11383)
 + * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
 +Merged from 3.0:
   * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
   * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
   * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5062bb6e/pylib/cqlshlib/copyutil.py
--



[02/10] cassandra git commit: cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread stefania
cqlsh: COPY FROM ignores NULL values in conversion

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8914c07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8914c07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8914c07

Branch: refs/heads/cassandra-2.2
Commit: c8914c0725f190df6e522179157850ebc855fdcb
Parents: 4389c9c
Author: Stefania Alborghetti 
Authored: Tue Apr 12 10:12:18 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:10:37 2016 +0800

--
 CHANGES.txt| 3 +++
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a91a58..73780de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
+
 2.1.14
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
processes (CASSANDRA-11505)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 12239d8..b6e0cff 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1877,7 +1877,7 @@ class ImportConversion(object):
 raise ParseError(self.get_null_primary_key_message(i))
 
 try:
-return [conv(val) for conv, val in zip(converters, row)]
+return [conv(val) if val != self.nullval else None for conv, val in zip(converters, row)]
 except Exception, e:
 raise ParseError(str(e))
 



[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb68078e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb68078e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb68078e

Branch: refs/heads/trunk
Commit: bb68078e7c6cfb51c931604f9fc137921db18347
Parents: c568efe b80ff54
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:17:50 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:17:50 2016 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb68078e/CHANGES.txt
--
diff --cc CHANGES.txt
index 6fffe2a,e51e6d2..eb2405c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -16,25 -13,6 +16,26 @@@ Merged from 2.2
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
- 
++Merged from 2.1:
++ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
  * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb68078e/pylib/cqlshlib/copyutil.py
--



[03/10] cassandra git commit: cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread stefania
cqlsh: COPY FROM ignores NULL values in conversion

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8914c07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8914c07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8914c07

Branch: refs/heads/cassandra-3.0
Commit: c8914c0725f190df6e522179157850ebc855fdcb
Parents: 4389c9c
Author: Stefania Alborghetti 
Authored: Tue Apr 12 10:12:18 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:10:37 2016 +0800

--
 CHANGES.txt| 3 +++
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a91a58..73780de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
+
 2.1.14
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
processes (CASSANDRA-11505)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 12239d8..b6e0cff 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1877,7 +1877,7 @@ class ImportConversion(object):
 raise ParseError(self.get_null_primary_key_message(i))
 
 try:
-return [conv(val) for conv, val in zip(converters, row)]
+return [conv(val) if val != self.nullval else None for conv, val in zip(converters, row)]
 except Exception, e:
 raise ParseError(str(e))
 



[01/10] cassandra git commit: cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 4389c9cfd -> c8914c072
  refs/heads/cassandra-2.2 eb072a0fa -> b80ff541e
  refs/heads/cassandra-3.0 c568efee5 -> bb68078e7
  refs/heads/trunk 2b08690de -> 5062bb6eb


cqlsh: COPY FROM ignores NULL values in conversion

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8914c07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8914c07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8914c07

Branch: refs/heads/cassandra-2.1
Commit: c8914c0725f190df6e522179157850ebc855fdcb
Parents: 4389c9c
Author: Stefania Alborghetti 
Authored: Tue Apr 12 10:12:18 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:10:37 2016 +0800

--
 CHANGES.txt| 3 +++
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a91a58..73780de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
+
 2.1.14
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
processes (CASSANDRA-11505)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 12239d8..b6e0cff 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1877,7 +1877,7 @@ class ImportConversion(object):
 raise ParseError(self.get_null_primary_key_message(i))
 
 try:
-return [conv(val) for conv, val in zip(converters, row)]
+return [conv(val) if val != self.nullval else None for conv, val in zip(converters, row)]
 except Exception, e:
 raise ParseError(str(e))
 



[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b80ff541
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b80ff541
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b80ff541

Branch: refs/heads/cassandra-2.2
Commit: b80ff541e25d4123f772dd82f00094e7c3837698
Parents: eb072a0 c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:12:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:12:24 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b80ff541/CHANGES.txt
--
diff --cc CHANGES.txt
index 6e6e17b,73780de..e51e6d2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,15 +1,64 @@@
 -2.1.15
 +2.2.7
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if worker processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 

[04/10] cassandra git commit: cqlsh: COPY FROM ignores NULL values in conversion

2016-04-20 Thread stefania
cqlsh: COPY FROM ignores NULL values in conversion

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8914c07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8914c07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8914c07

Branch: refs/heads/trunk
Commit: c8914c0725f190df6e522179157850ebc855fdcb
Parents: 4389c9c
Author: Stefania Alborghetti 
Authored: Tue Apr 12 10:12:18 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:10:37 2016 +0800

--
 CHANGES.txt| 3 +++
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a91a58..73780de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
+
 2.1.14
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
processes (CASSANDRA-11505)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8914c07/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 12239d8..b6e0cff 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -1877,7 +1877,7 @@ class ImportConversion(object):
 raise ParseError(self.get_null_primary_key_message(i))
 
 try:
-return [conv(val) for conv, val in zip(converters, row)]
+return [conv(val) if val != self.nullval else None for conv, val in zip(converters, row)]
 except Exception, e:
 raise ParseError(str(e))
 



[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b80ff541
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b80ff541
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b80ff541

Branch: refs/heads/cassandra-3.0
Commit: b80ff541e25d4123f772dd82f00094e7c3837698
Parents: eb072a0 c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:12:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:12:24 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b80ff541/CHANGES.txt
--
diff --cc CHANGES.txt
index 6e6e17b,73780de..e51e6d2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,15 +1,64 @@@
 -2.1.15
 +2.2.7
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if worker processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 

[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb68078e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb68078e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb68078e

Branch: refs/heads/cassandra-3.0
Commit: bb68078e7c6cfb51c931604f9fc137921db18347
Parents: c568efe b80ff54
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:17:50 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:17:50 2016 +0800

--
 CHANGES.txt| 3 ++-
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb68078e/CHANGES.txt
--
diff --cc CHANGES.txt
index 6fffe2a,e51e6d2..eb2405c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -16,25 -13,6 +16,26 @@@ Merged from 2.2
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
- 
++Merged from 2.1:
++ * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
  * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb68078e/pylib/cqlshlib/copyutil.py
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread stefania
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b80ff541
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b80ff541
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b80ff541

Branch: refs/heads/trunk
Commit: b80ff541e25d4123f772dd82f00094e7c3837698
Parents: eb072a0 c8914c0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 10:12:24 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 10:12:24 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b80ff541/CHANGES.txt
--
diff --cc CHANGES.txt
index 6e6e17b,73780de..e51e6d2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,15 +1,64 @@@
 -2.1.15
 +2.2.7
 + * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 +  report errors correctly if worker processes crash on 
initialization (CASSANDRA-11474)
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 +Merged from 2.1:
+  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
 -
 -2.1.14
   * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + 

[jira] [Updated] (CASSANDRA-11437) Make number of cores used by cqlsh COPY visible to testing code

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11437:
-
Fix Version/s: (was: 3.x)
   3.6

> Make number of cores used by cqlsh COPY visible to testing code
> ---
>
> Key: CASSANDRA-11437
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11437
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6
>
>
> As per this conversation with [~Stefania]:
> https://github.com/riptano/cassandra-dtest/pull/869#issuecomment-200597829
> we don't currently have a way to verify that the test environment variable 
> {{CQLSH_COPY_TEST_NUM_CORES}} actually affects the behavior of {{COPY}} in 
> the intended way. If this were added, we could make our tests of the one-core 
> edge case a little stricter.
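A test-visible override of the worker count could look like the following minimal sketch. The helper name and fallback logic here are illustrative assumptions, not cqlsh's actual implementation; only the environment variable name `CQLSH_COPY_TEST_NUM_CORES` comes from the discussion above:

```python
import multiprocessing
import os

def get_num_processes(cap=16):
    """Return the COPY worker count, honoring the test override if set.

    The env var name comes from the ticket; the cap and fallback are
    hypothetical, for illustration only.
    """
    override = os.environ.get('CQLSH_COPY_TEST_NUM_CORES')
    if override is not None:
        return int(override)
    return min(multiprocessing.cpu_count(), cap)

# A test exercising the one-core edge case could then pin the value:
os.environ['CQLSH_COPY_TEST_NUM_CORES'] = '1'
print(get_num_processes())  # 1
```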





[jira] [Updated] (CASSANDRA-10134) Always require replace_address to replace existing address

2016-04-20 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10134:
--
Labels: docs-impacting  (was: )

> Always require replace_address to replace existing address
> --
>
> Key: CASSANDRA-10134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> Normally, when a node is started from a clean state with the same address as 
> an existing down node, it will fail to start with an error like this:
> {noformat}
> ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
> encountered during startup
> java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
>   at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
> [main/:na]
> {noformat}
> However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
> list, it will not throw this error and will start normally.  The new node 
> then takes over the host ID of the old node (even if the tokens are 
> different), and the only message you will see is a warning in the other 
> nodes' logs:
> {noformat}
> logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, 
> hostId);
> {noformat}
> This could cause an operator to accidentally wipe out the token information 
> for a down node without replacing it.  To fix this, we should check for an 
> endpoint collision even if {{auto_bootstrap}} is false or the node is a seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11474) cqlsh: COPY FROM should use regular inserts for single statement batches

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11474:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   2.2.7
   3.0.6
   3.6
   Status: Resolved  (was: Patch Available)

> cqlsh: COPY FROM should use regular inserts for single statement batches
> 
>
> Key: CASSANDRA-11474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11474
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 3.6, 3.0.6, 2.2.7
>
>
> I haven't reproduced it with a test yet but, from code inspection, if CQL 
> rows are larger than {{batch_size_fail_threshold_in_kb}} and this parameter 
> cannot be changed, then data import will fail.
> Users can control the batch size by setting MAXBATCHSIZE.
> If a batch contains a single statement, there is no need to use a batch and 
> we should use normal inserts instead or, alternatively, we should skip the 
> batch size check for unlogged batches with only one statement.
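The fallback described above can be sketched in a few lines. This is an illustrative stand-in, not the actual cqlsh patch: {{SimpleStatement}} and {{BatchStatement}} here are simplified dummy classes, not the real python-driver types.

```python
# Sketch of the single-statement fallback proposed in this ticket.
# SimpleStatement/BatchStatement are simplified stand-ins for the
# python-driver classes, used only to show the control flow: a
# single-row "batch" is sent as a plain statement, so the server-side
# batch_size_fail_threshold_in_kb check never applies to it.
class SimpleStatement(object):
    def __init__(self, query_string):
        self.query_string = query_string

class BatchStatement(object):
    def __init__(self):
        self.statements = []

    def add(self, query_string):
        self.statements.append(query_string)

def make_statement(rows, make_query):
    # One row: no batch needed at all.
    if len(rows) == 1:
        return SimpleStatement(make_query(rows[0]))
    batch = BatchStatement()
    for row in rows:
        batch.add(make_query(row))
    return batch

make_query = lambda row: "INSERT INTO ks.t (k, v) VALUES (%s, %s)" % row
single = make_statement([(1, 'a')], make_query)
multi = make_statement([(1, 'a'), (2, 'b')], make_query)
print(type(single).__name__)  # SimpleStatement
print(type(multi).__name__)   # BatchStatement
```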



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11474) cqlsh: COPY FROM should use regular inserts for single statement batches

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251101#comment-15251101
 ] 

Stefania commented on CASSANDRA-11474:
--

Thank you, committed as eb072a0fa05292dd347e96d3bc45b445995227ec and pull 
request merged.

> cqlsh: COPY FROM should use regular inserts for single statement batches
> 
>
> Key: CASSANDRA-11474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11474
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> I haven't reproduced it with a test yet but, from code inspection, if CQL 
> rows are larger than {{batch_size_fail_threshold_in_kb}} and this parameter 
> cannot be changed, then data import will fail.
> Users can control the batch size by setting MAXBATCHSIZE.
> If a batch contains a single statement, there is no need to use a batch and 
> we should use normal inserts instead or, alternatively, we should skip the 
> batch size check for unlogged batches with only one statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/6] cassandra git commit: cqlsh: COPY FROM should use regular inserts for single statement batches and report errors correctly if workers processes crash on initialization

2016-04-20 Thread stefania
cqlsh: COPY FROM should use regular inserts for single statement batches
and report errors correctly if workers processes crash on initialization

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11474


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb072a0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb072a0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb072a0f

Branch: refs/heads/cassandra-3.0
Commit: eb072a0fa05292dd347e96d3bc45b445995227ec
Parents: e865e39
Author: Stefania Alborghetti 
Authored: Wed Apr 6 17:14:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:43:35 2016 +0800

--
 CHANGES.txt|  2 ++
 pylib/cqlshlib/copyutil.py | 42 ++---
 2 files changed, 29 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf07e80..6e6e17b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.7
+ * cqlsh: COPY FROM should use regular inserts for single statement batches and
+   report errors correctly if workers processes crash on initialization (CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 8140c93..dae819c 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -2066,8 +2066,7 @@ class ImportProcess(ChildProcess):
                 profile_off(pr, file_name='worker_profile_%d.txt' % (os.getpid(),))
 
         except Exception, exc:
-            if self.debug:
-                traceback.print_exc(exc)
+            self.report_error(exc)
 
         finally:
             self.close()
@@ -2156,20 +2155,25 @@ class ImportProcess(ChildProcess):
         return make_statement_with_failures if self.test_failures else make_statement
 
     def make_counter_batch_statement(self, query, conv, batch, replicas):
-        statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
-        statement.replicas = replicas
-        statement.keyspace = self.ks
-        for row in batch['rows']:
+        def make_full_query(r):
             where_clause = []
             set_clause = []
-            for i, value in enumerate(row):
+            for i, value in enumerate(r):
                 if i in conv.primary_key_indexes:
                     where_clause.append("%s=%s" % (self.valid_columns[i], value))
                 else:
                     set_clause.append("%s=%s+%s" % (self.valid_columns[i], self.valid_columns[i], value))
+            return query % (','.join(set_clause), ' AND '.join(where_clause))
 
-            full_query_text = query % (','.join(set_clause), ' AND '.join(where_clause))
-            statement.add(full_query_text)
+        if len(batch['rows']) == 1:
+            statement = SimpleStatement(make_full_query(batch['rows'][0]), consistency_level=self.consistency_level)
+        else:
+            statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
+            for row in batch['rows']:
+                statement.add(make_full_query(row))
+
+        statement.replicas = replicas
+        statement.keyspace = self.ks
         return statement
 
     def make_prepared_batch_statement(self, query, _, batch, replicas):
@@ -2183,17 +2187,25 @@ class ImportProcess(ChildProcess):
         We could optimize further by removing bound_statements altogether but we'd have to duplicate much
         more driver's code (BoundStatement.bind()).
         """
-        statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+        if len(batch['rows']) == 1:
+            statement = query.bind(batch['rows'][0])
+        else:
+            statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+            statement._statements_and_parameters = [(True, query.query_id, query.bind(r).values) for r in batch['rows']]
+
         statement.replicas = replicas
         statement.keyspace = self.ks
-        statement._statements_and_parameters = [(True, query.query_id, query.bind(r).values) for r in batch['rows']]
         return statement
 
     def 

[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c568efee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c568efee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c568efee

Branch: refs/heads/cassandra-3.0
Commit: c568efee54fd118ee9039394391e911fe690a1f3
Parents: dab1d57 eb072a0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 09:44:08 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:45:49 2016 +0800

--
 CHANGES.txt|  2 ++
 pylib/cqlshlib/copyutil.py | 40 ++--
 2 files changed, 28 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c568efee/CHANGES.txt
--
diff --cc CHANGES.txt
index ae73437,6e6e17b..6fffe2a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,13 +1,17 @@@
 -2.2.7
 +3.0.6
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
+   report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
   * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 -Merged from 2.1:
 - * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 -   processes (CASSANDRA-11505)
 -
 -
 -2.2.6
   * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c568efee/pylib/cqlshlib/copyutil.py
--



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-20 Thread stefania
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2b08690d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2b08690d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2b08690d

Branch: refs/heads/trunk
Commit: 2b08690deffce2b0b297e8638eb7961903ab4b46
Parents: 95d927d c568efe
Author: Stefania Alborghetti 
Authored: Thu Apr 21 09:46:36 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:48:22 2016 +0800

--
 CHANGES.txt| 1 +
 pylib/cqlshlib/copyutil.py | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2b08690d/CHANGES.txt
--
diff --cc CHANGES.txt
index 6e3efb6,6fffe2a..f6ec738
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -64,6 -9,8 +64,7 @@@ Merged from 3.0
 header is received (CASSANDRA-11464)
   * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
  Merged from 2.2:
 - * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
 -  report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
++ * cqlsh: COPY FROM should report errors correctly if workers processes crash 
on initialization (CASSANDRA-11474)
   * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
   * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2b08690d/pylib/cqlshlib/copyutil.py
--



[1/6] cassandra git commit: cqlsh: COPY FROM should use regular inserts for single statement batches and report errors correctly if workers processes crash on initialization

2016-04-20 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 e865e396c -> eb072a0fa
  refs/heads/cassandra-3.0 dab1d578d -> c568efee5
  refs/heads/trunk 95d927d38 -> 2b08690de


cqlsh: COPY FROM should use regular inserts for single statement batches
and report errors correctly if workers processes crash on initialization

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11474


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb072a0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb072a0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb072a0f

Branch: refs/heads/cassandra-2.2
Commit: eb072a0fa05292dd347e96d3bc45b445995227ec
Parents: e865e39
Author: Stefania Alborghetti 
Authored: Wed Apr 6 17:14:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:43:35 2016 +0800

--
 CHANGES.txt|  2 ++
 pylib/cqlshlib/copyutil.py | 42 ++---
 2 files changed, 29 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf07e80..6e6e17b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.7
+ * cqlsh: COPY FROM should use regular inserts for single statement batches and
+   report errors correctly if workers processes crash on initialization (CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 8140c93..dae819c 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -2066,8 +2066,7 @@ class ImportProcess(ChildProcess):
                 profile_off(pr, file_name='worker_profile_%d.txt' % (os.getpid(),))
 
         except Exception, exc:
-            if self.debug:
-                traceback.print_exc(exc)
+            self.report_error(exc)
 
         finally:
             self.close()
@@ -2156,20 +2155,25 @@ class ImportProcess(ChildProcess):
         return make_statement_with_failures if self.test_failures else make_statement
 
     def make_counter_batch_statement(self, query, conv, batch, replicas):
-        statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
-        statement.replicas = replicas
-        statement.keyspace = self.ks
-        for row in batch['rows']:
+        def make_full_query(r):
             where_clause = []
             set_clause = []
-            for i, value in enumerate(row):
+            for i, value in enumerate(r):
                 if i in conv.primary_key_indexes:
                     where_clause.append("%s=%s" % (self.valid_columns[i], value))
                 else:
                     set_clause.append("%s=%s+%s" % (self.valid_columns[i], self.valid_columns[i], value))
+            return query % (','.join(set_clause), ' AND '.join(where_clause))
 
-            full_query_text = query % (','.join(set_clause), ' AND '.join(where_clause))
-            statement.add(full_query_text)
+        if len(batch['rows']) == 1:
+            statement = SimpleStatement(make_full_query(batch['rows'][0]), consistency_level=self.consistency_level)
+        else:
+            statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
+            for row in batch['rows']:
+                statement.add(make_full_query(row))
+
+        statement.replicas = replicas
+        statement.keyspace = self.ks
         return statement
 
     def make_prepared_batch_statement(self, query, _, batch, replicas):
@@ -2183,17 +2187,25 @@ class ImportProcess(ChildProcess):
         We could optimize further by removing bound_statements altogether but we'd have to duplicate much
         more driver's code (BoundStatement.bind()).
         """
-        statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+        if len(batch['rows']) == 1:
+            statement = query.bind(batch['rows'][0])
+        else:
+            statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+            statement._statements_and_parameters = [(True, query.query_id, query.bind(r).values) for r in batch['rows']]
+
         statement.replicas = replicas
 

[3/6] cassandra git commit: cqlsh: COPY FROM should use regular inserts for single statement batches and report errors correctly if workers processes crash on initialization

2016-04-20 Thread stefania
cqlsh: COPY FROM should use regular inserts for single statement batches
and report errors correctly if workers processes crash on initialization

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11474


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb072a0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb072a0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb072a0f

Branch: refs/heads/trunk
Commit: eb072a0fa05292dd347e96d3bc45b445995227ec
Parents: e865e39
Author: Stefania Alborghetti 
Authored: Wed Apr 6 17:14:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:43:35 2016 +0800

--
 CHANGES.txt|  2 ++
 pylib/cqlshlib/copyutil.py | 42 ++---
 2 files changed, 29 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cf07e80..6e6e17b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.7
+ * cqlsh: COPY FROM should use regular inserts for single statement batches and
+   report errors correctly if workers processes crash on initialization (CASSANDRA-11474)
  * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 Merged from 2.1:
  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb072a0f/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 8140c93..dae819c 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -2066,8 +2066,7 @@ class ImportProcess(ChildProcess):
                 profile_off(pr, file_name='worker_profile_%d.txt' % (os.getpid(),))
 
         except Exception, exc:
-            if self.debug:
-                traceback.print_exc(exc)
+            self.report_error(exc)
 
         finally:
             self.close()
@@ -2156,20 +2155,25 @@ class ImportProcess(ChildProcess):
         return make_statement_with_failures if self.test_failures else make_statement
 
     def make_counter_batch_statement(self, query, conv, batch, replicas):
-        statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
-        statement.replicas = replicas
-        statement.keyspace = self.ks
-        for row in batch['rows']:
+        def make_full_query(r):
             where_clause = []
             set_clause = []
-            for i, value in enumerate(row):
+            for i, value in enumerate(r):
                 if i in conv.primary_key_indexes:
                     where_clause.append("%s=%s" % (self.valid_columns[i], value))
                 else:
                     set_clause.append("%s=%s+%s" % (self.valid_columns[i], self.valid_columns[i], value))
+            return query % (','.join(set_clause), ' AND '.join(where_clause))
 
-            full_query_text = query % (','.join(set_clause), ' AND '.join(where_clause))
-            statement.add(full_query_text)
+        if len(batch['rows']) == 1:
+            statement = SimpleStatement(make_full_query(batch['rows'][0]), consistency_level=self.consistency_level)
+        else:
+            statement = BatchStatement(batch_type=BatchType.COUNTER, consistency_level=self.consistency_level)
+            for row in batch['rows']:
+                statement.add(make_full_query(row))
+
+        statement.replicas = replicas
+        statement.keyspace = self.ks
         return statement
 
     def make_prepared_batch_statement(self, query, _, batch, replicas):
@@ -2183,17 +2187,25 @@ class ImportProcess(ChildProcess):
         We could optimize further by removing bound_statements altogether but we'd have to duplicate much
         more driver's code (BoundStatement.bind()).
         """
-        statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+        if len(batch['rows']) == 1:
+            statement = query.bind(batch['rows'][0])
+        else:
+            statement = BatchStatement(batch_type=BatchType.UNLOGGED, consistency_level=self.consistency_level)
+            statement._statements_and_parameters = [(True, query.query_id, query.bind(r).values) for r in batch['rows']]
+
         statement.replicas = replicas
         statement.keyspace = self.ks
-        statement._statements_and_parameters = [(True, query.query_id, query.bind(r).values) for r in batch['rows']]
         return statement
 
     def 

[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread stefania
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c568efee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c568efee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c568efee

Branch: refs/heads/trunk
Commit: c568efee54fd118ee9039394391e911fe690a1f3
Parents: dab1d57 eb072a0
Author: Stefania Alborghetti 
Authored: Thu Apr 21 09:44:08 2016 +0800
Committer: Stefania Alborghetti 
Committed: Thu Apr 21 09:45:49 2016 +0800

--
 CHANGES.txt|  2 ++
 pylib/cqlshlib/copyutil.py | 40 ++--
 2 files changed, 28 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c568efee/CHANGES.txt
--
diff --cc CHANGES.txt
index ae73437,6e6e17b..6fffe2a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,13 +1,17 @@@
 -2.2.7
 +3.0.6
 + * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
 + * Only open one sstable scanner per sstable (CASSANDRA-11412)
 + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
 + * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * cqlsh: COPY FROM should use regular inserts for single statement batches 
and
+   report errors correctly if workers processes crash on 
initialization (CASSANDRA-11474)
   * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
 -Merged from 2.1:
 - * (cqlsh) Fix potential COPY deadlock when parent process is terminating 
child
 -   processes (CASSANDRA-11505)
 -
 -
 -2.2.6
   * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
   * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c568efee/pylib/cqlshlib/copyutil.py
--



[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251067#comment-15251067
 ] 

Stefania commented on CASSANDRA-11574:
--

I'm really sorry but I cannot reproduce this: not locally on my laptop 
running Ubuntu Trusty with Python 2.7.6, nor on any of our CI servers, which 
run tests on Debian Jessie and Windows using Python 2.7.

The line calling {{get_num_processes}} and its signature haven't changed since 
3.0.4, but the implementation has. It's worth testing, although after reading 
[the 
documentation|https://docs.python.org/2.7/reference/expressions.html#calls] I 
don't see why {{cap=16}} would cause this issue now and not previously.

It is safe to edit copyutil.py; no other changes or compilation are required:

{code}
-copy_options['numprocesses'] = int(opts.pop('numprocesses', self.get_num_processes(cap=16)))
+copy_options['numprocesses'] = int(opts.pop('numprocesses', self.get_num_processes(16)))
{code}

Would you mind testing this change to see if it fixes it? 

If this doesn't work, could you share your schema and a sample csv file, along 
with the full output obtained by running cqlsh with the {{--debug}} option?
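For context, C-implemented callables in CPython reject keyword arguments with exactly the style of error reported here, which is one plausible way a keyword call like {{get_num_processes(cap=16)}} could fail in some environments even though the pure-Python definition accepts it. A quick illustration (using the built-in {{abs}} as a stand-in, since it is positional-only):

```python
# Illustration of the error class reported in this ticket: C-implemented
# callables in CPython often accept only positional arguments, so a
# keyword call raises "f() takes no keyword arguments".
def call_with_keyword():
    try:
        abs(x=-3)  # abs() is a C built-in: the keyword call is rejected
        return None
    except TypeError as exc:
        return str(exc)

print(call_with_keyword())  # a TypeError message mentioning keyword arguments
print(abs(-3))              # the positional call is fine: 3
```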

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-20 Thread Ruoran Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251053#comment-15251053
 ] 

Ruoran Wang commented on CASSANDRA-11548:
-

Thank you [~pauloricardomg]. May I ask about the release schedule for 2.1.14?

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
>Assignee: Ruoran Wang
> Fix For: 2.1.14
>
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> moved {{markCompactedSSTablesReplaced}} out of the loop {{for (SSTableReader 
> sstable : repairedSSTables)}}.
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> added {{unmarkCompacting}} into that same loop.
> I think the combined effect of these changes is that 
> {{markCompactedSSTablesReplaced}} can fail on this assertion in 
> DataTracker.java:
> {noformat}
> assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
>     String.format("Expecting new size of %d, got %d while replacing %s by %s in %s",
>                   newSSTablesSize, newSSTables.size() + newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved the call out of the loop, this AssertionError 
> won't be caught, leaving the old sstables unremoved. (This can then cause a 
> row-out-of-order error during incremental repair if there are un-repaired L1 
> sstables.)
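The failure mode described above can be sketched in a few lines. This is illustrative only, not Cassandra code: a bulk replacement guarded by a size assertion is abandoned wholesale when the tracked set and the replacement request disagree, so the old entries survive.

```python
# Illustrative sketch of the DataTracker size assertion above: if the
# old sstables passed in are not all present in the live set (e.g. they
# were already swapped out elsewhere), the size check fails and the
# replacement never happens, leaving the old entries in place.
def replace_sstables(live, old, new):
    expected_size = len(live) - len(old) + len(new)
    updated = (live - old) | new
    assert len(updated) == expected_size, \
        "Expecting new size of %d, got %d" % (expected_size, len(updated))
    return updated

live = {"sstable-1", "sstable-2", "sstable-3"}
live = replace_sstables(live, {"sstable-2"}, {"sstable-2-repaired"})
print(sorted(live))  # sstable-2 replaced as expected

try:
    # "sstable-9" is a stale reference not present in the live set
    replace_sstables(live, {"sstable-9"}, {"sstable-9-repaired"})
except AssertionError as exc:
    print("replacement abandoned: %s" % exc)
```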



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-20 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251011#comment-15251011
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


I'm not sure I understand if you're saying there's an error or not:
bq. However, there's still potential for error due to segment id disorder.
bq. And even without this construction positions will jump a segment forward 
and back continuously which is prone to bugs.

But you then state:
bq. In normal use... I can only see this causing inefficient replay, which 
isn't much of a problem and I'll happily leave it for another ticket.
Not trying to nit-pick, just honestly trying to wrap my head around whether 
there may be a correctness timing issue due to relying on a single 
globalPosition for both CLSM streams. Assuming we tackle the split-CF issue, I 
think things should be correct with the current implementation on replay. It 
admittedly adds a new "pathological" scenario as you've laid out, and one that 
I believe will end up being common (all CDC except system), so I think it would 
certainly warrant a follow-up ticket.

bq. But I don't understand how you deal with the issue while turning CDC 
on/off, for example:
In short: right now the code does not handle that correctly. I would 
initially think of something along the lines of an OpOrder for CDC writes, 
combined with a "block writes to CDC and flush MT/logs" step on any CDC toggle 
or CDC-enabled CF schema change. However, I don't think OpOrder has any 
provisions for a non-chained / block-the-producers model, so CDC may require a 
new synchronization mechanism for this to be done correctly, and I'm not sure 
whether it would be more appropriate to hold up writes during that flush or 
just WTE them.

I believe that guaranteeing mutations for any given CF will only exist in a 
single CommitLogSegmentManager's un-flushed logs on disk at any given time will 
resolve the sstable flush time vs. data in log time scenario and better 
preserve the previous assumptions of a single CommitLog.

For both the above and for general schema changes screwing up replay ordering, 
I see two possible solutions. First, and more complex, would be creating a 
point-in-time "checkpoint" of before and after that schema change that's 
in-line (i.e. same CommitLogSegment stream) with the data we're parsing. In our 
case, that would mean flushing that CDC data and the schema changes, blocking 
CDC writes until that flush is complete (using the general mechanism mentioned 
above). A second, simpler method would be to disallow schema changes on tables 
while CDC is enabled and restricting setting CDC status on a keyspace to 
creation time in v1 of this ticket.

While that would greatly restrict the flexibility of using the initial cut of 
the feature, given how close we are to 3.6 freeze and the fact that CDC being 
per-keyspace instead of per-CF means taking this into consideration during 
modeling time anyway, I'm in favor of restricting schema change on CDC-enabled 
CF and disallowing CDC toggling via ALTER on v1.

[~brianmhess]: could you chime in on both this potential restriction and the 
above question concerning mixing durableWrites with CDC? I would expect to 
create a few follow-up tickets from this immediately and hope to have those in 
by 3.8.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of 

[Cassandra Wiki] Trivial Update of "Committers" by StefaniaAlborghetti

2016-04-20 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by StefaniaAlborghetti:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=57&rev2=58

  ||Robert Stupp ||Jan 2015 ||Datastax || ||
  ||Sam Tunnicliffe ||May 2015 ||Datastax || ||
  ||Benjamin Lerer ||Jul 2015 ||Datastax || ||
- ||Carl Yeksigian || Jan 2016 || Datastax || Also a 
[[http://thrift.apache.org|Thrift]] committer ||
+ ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
+ ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||
  
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}


[jira] [Assigned] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11574:


Assignee: Stefania

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
>Assignee: Stefania
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API

2016-04-20 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250952#comment-15250952
 ] 

Paulo Motta commented on CASSANDRA-11258:
-

Sorry for the delay. See some improvement comments below.

From the code it seems that when an LWT insert times out, {{CasLockFactory}} 
assumes the lock was not acquired, but the operation may in fact have succeeded 
despite the timeout, in which case we will not be able to re-acquire the lock 
before it expires. We should therefore perform a read at {{SERIAL}} level in 
this situation, to make sure any previous in-progress operations are committed 
and we see the most recent value.

Is the {{sufficientNodesForLocking}} check necessary? I noticed that we are 
doing non-LWT reads at {{ONE}}; we should use {{QUORUM}} instead, and then that 
check is done automatically when reading or writing.

I think we should adjust our nomenclature and mindset from distributed locks to 
expiring leases, since that is what we are actually implementing. If you agree, 
can you rename the classes to reflect this?

When renewing the lease we should also insert the current lease holder's 
priority into the {{resource_lock_priority}} table; otherwise other nodes might 
try to acquire the lease while it is being held (the operation will fail, but 
the load on the system will be higher due to LWT).

We should also probably let lease holders renew leases explicitly rather than 
auto-renewing leases at the lease service, so for example the job scheduler can 
abort the job if it cannot renew the lease. For that matter, we should probably 
extend the {{DistributedLease}} interface with methods to renew the lease 
and/or check if it's still valid (perhaps we should have a look at the [JINI 
lease spec|https://river.apache.org/doc/specs/html/lease-spec.html] for 
inspiration, although it looks a bit verbose).

We should also use {{DateTieredCompactionStrategy}} on the lock tables to 
reduce compaction load on them, but we can probably do that later since we will 
need to tune it according to the TTL.
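
For illustration, the expiring-lease semantics discussed above (compare-and-set 
acquisition, explicit renewal by the holder, takeover after expiry) can be 
sketched in-memory. This is a minimal toy model, not {{CasLockFactory}} or the 
actual LWT-backed implementation; times are passed in explicitly to keep it 
deterministic.

```java
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of expiring-lease semantics: acquire is an atomic
// "insert if absent or expired" (mirroring INSERT ... IF NOT EXISTS USING TTL),
// and the holder must renew explicitly or lose the lease at expiry.
public class ExpiringLeases {
    static final class Lease {
        final String holder;
        volatile long expiresAtMillis;
        Lease(String holder, long expiresAtMillis) {
            this.holder = holder;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final ConcurrentHashMap<String, Lease> leases = new ConcurrentHashMap<>();

    /** Try to acquire the named resource for ttlMillis; succeeds only if the
        resource is unleased or its current lease has expired. */
    public boolean tryAcquire(String resource, String holder, long ttlMillis, long nowMillis) {
        Lease fresh = new Lease(holder, nowMillis + ttlMillis);
        // compute() performs the "insert if absent or expired" step atomically
        Lease current = leases.compute(resource, (r, existing) ->
            (existing == null || existing.expiresAtMillis <= nowMillis) ? fresh : existing);
        return current == fresh;
    }

    /** Explicit renewal by the current holder; fails if the lease has expired
        or has been taken over by another holder. */
    public boolean renew(String resource, String holder, long ttlMillis, long nowMillis) {
        Lease current = leases.get(resource);
        if (current == null || !current.holder.equals(holder)
            || current.expiresAtMillis <= nowMillis)
            return false;
        current.expiresAtMillis = nowMillis + ttlMillis;
        return true;
    }
}
```

The point of the explicit {{renew}} call is exactly the one made above: the job 
scheduler can observe a failed renewal and abort the job, instead of a lease 
service silently auto-renewing.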

> Repair scheduling - Resource locking API
> 
>
> Key: CASSANDRA-11258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a 
> resource in a specified data center. It should handle priorities to avoid 
> node starvation.





[jira] [Assigned] (CASSANDRA-11613) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-11613:
---

Assignee: Tyler Hobbs

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test
> --
>
> Key: CASSANDRA-11613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11613
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/8/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk/more_user_types_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #8





[jira] [Updated] (CASSANDRA-11609) Nested UDTs cause error when migrating 2.x schema to trunk

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11609:

Summary: Nested UDTs cause error when migrating 2.x schema to trunk  (was: 
cassandra won't start with schema complaint that does not appear to be valid)

> Nested UDTs cause error when migrating 2.x schema to trunk
> --
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Commented] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250937#comment-15250937
 ] 

Tyler Hobbs commented on CASSANDRA-11609:
-

The root of the problem was that in the 2.x schema, implicitly frozen types 
(like nested UDTs) were not wrapped in {{FrozenType()}}.  Before 
CASSANDRA-7423, these were still implicitly frozen, so the 2.x -> 3.x migration 
correctly converted them.  After 7423 this no longer happens, so they need to 
be explicitly converted as part of the schema migration.
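
The conversion described above can be illustrated with a toy string rewrite. 
This is NOT the actual {{LegacySchemaMigrator}} logic; types are modeled as 
plain CQL type strings, and the function name and approach are invented for the 
example.

```java
import java.util.Set;

// Toy sketch of the migration fix: bare occurrences of known UDT names inside
// a 2.x type string get wrapped in an explicit frozen<...> marker, mirroring
// the explicit conversion the 2.x -> trunk schema migration must now perform
// for implicitly frozen nested UDTs.
public class FreezeNestedUdts {
    /** Wrap each not-already-frozen occurrence of a known UDT name in frozen<...>. */
    static String freezeNested(String type, Set<String> udtNames) {
        String result = type;
        for (String udt : udtNames)
            // negative lookbehind skips occurrences already wrapped in frozen<...>
            result = result.replaceAll("(?<!frozen<)\\b" + udt + "\\b",
                                       "frozen<" + udt + ">");
        return result;
    }
}
```

For example, {{freezeNested("map<text, address>", Set.of("address"))}} yields 
{{"map<text, frozen<address>>"}}, while an already-frozen type string is left 
unchanged.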

Migrations from 2.x -> 3.0.x -> 3.x should be fine without this patch.

I attempted to extend {{LegacySchemaMigratorTest}} to cover this, but 
unfortunately it doesn't seem possible to create a table that uses a UDT within 
the test framework.  So, I think we'll need to continue to rely on the existing 
upgrade tests for coverage here.

Patch and pending CI runs:

||branch||testall||dtest||
|[CASSANDRA-11609|https://github.com/thobbs/cassandra/tree/CASSANDRA-11609]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11609-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11609-dtest]|

[~blerer] would you mind reviewing?

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Updated] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11609:

Status: Patch Available  (was: In Progress)

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Updated] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-20 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11608:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
> 'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 
> 32\ndtest: DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: 
> DEBUG: Testing node stoppage (query should fail).\ndtest: DEBUG: Retrying 
> read after timeout. Attempt #0\ndtest: DEBUG: Retrying read after timeout. 
> Attempt #1\ndtest: DEBUG: Retrying request after UE. Attempt #2\ndtest: 
> DEBUG: Retrying request after UE. Attempt #3\ndtest: DEBUG: Retrying request 
> after UE. Attempt #4\ndtest: DEBUG: Starting node 4 to replace node 3\ndtest: 
> DEBUG: Verifying querying works again.\ndtest: DEBUG: Verifying tokens 
> migrated sucessfully\ndtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 
> TokenMetadata.java:196 - Token 

[jira] [Updated] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-20 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11608:

Status: Patch Available  (was: Open)

https://github.com/riptano/cassandra-dtest/pull/942

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
> 'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 
> 32\ndtest: DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: 
> DEBUG: Testing node stoppage (query should fail).\ndtest: DEBUG: Retrying 
> read after timeout. Attempt #0\ndtest: DEBUG: Retrying read after timeout. 
> Attempt #1\ndtest: DEBUG: Retrying request after UE. Attempt #2\ndtest: 
> DEBUG: Retrying request after UE. Attempt #3\ndtest: DEBUG: Retrying request 
> after UE. Attempt #4\ndtest: DEBUG: Starting node 4 to replace node 3\ndtest: 
> DEBUG: Verifying querying works again.\ndtest: DEBUG: Verifying tokens 
> migrated sucessfully\ndtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 
> 

[jira] [Issue Comment Deleted] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-20 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-11600:

Comment: was deleted

(was: patches:
[3.0|https://github.com/bdeggleston/cassandra/tree/11600-3.0]
[trunk|https://github.com/bdeggleston/cassandra/tree/11600-trunk])

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.x
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.
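
The env-script check being relaxed could look roughly like the sketch below. 
This is a hedged illustration, not the exact cassandra-env.sh code: the 
function name and argument passing are invented, and only the 
HEAP_NEW_SIZE/MAX_HEAP_SIZE variable names come from the ticket.

```shell
#!/bin/sh
# Sketch of the fix: only require MAX_HEAP_SIZE and HEAP_NEW_SIZE to be set
# together when G1 is NOT in use, since -Xmn (new-gen size) is ignored under G1.
check_heap_settings() {
    jvm_opts="$1"; max_heap="$2"; new_heap="$3"
    case "$jvm_opts" in
        *UseG1GC*) return 0 ;;  # G1: HEAP_NEW_SIZE is ignored, so don't require it
    esac
    if [ -n "$max_heap" ] && [ -z "$new_heap" ]; then
        echo "please set or unset MAX_HEAP_SIZE and HEAP_NEW_SIZE in pairs" >&2
        return 1
    fi
    return 0
}
```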





[jira] [Commented] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-20 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250757#comment-15250757
 ] 

Blake Eggleston commented on CASSANDRA-11600:
-

| *3.0* | *trunk* |
| [branch|https://github.com/bdeggleston/cassandra/tree/11600-3.0] | 
[branch|https://github.com/bdeggleston/cassandra/tree/11600-trunk] |
| 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-testall/1/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-testall/1/]
 |
| 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-dtest/1/]
 | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-dtest/1/]
 |

commit info: should merge cleanly

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.x
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.





[jira] [Updated] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-20 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-11600:

Status: Patch Available  (was: In Progress)

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.x
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.





[jira] [Updated] (CASSANDRA-11497) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test

2016-04-20 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11497:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
> 
>
> Key: CASSANDRA-11497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11497
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/637/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test
> Failed on CassCI build cassandra-3.0_dtest #637
> Next run passed, so this could be a flaky test.
> {noformat}
> Error Message
> 0 not greater than 0
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-gbo1Uc
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: About to invoke sstableutil...
> dtest: DEBUG: Listing files...
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Summary.db
> 

[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-20 Thread Nandakishore Arvapaly (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250628#comment-15250628
 ] 

Nandakishore Arvapaly commented on CASSANDRA-11574:
---

I faced the same issue when trying to execute the command below.

copy movie(movieid,moviename,releasedyear) from 
'/root/cassandra_data/moviebyyear.csv';

These are the versions I use:
cqlsh 5.0.1, Cassandra 3.0.5, CQL spec 3.4.0.

I didn't face this issue when using the dsc20 version.

Can someone please resolve this quickly?

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4
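The error string points at a common Python failure mode: a callable that only accepts positional arguments (builtins behave this way) being invoked with a keyword argument. A minimal illustration of the error class, using `len()` rather than the actual cqlsh `get_num_processes` code, is:

```python
# Builtins such as len() accept positional arguments only, so passing a
# keyword argument raises the same kind of TypeError reported above.
try:
    len(obj=[1, 2, 3])
except TypeError as e:
    print(e)  # CPython wording: "len() takes no keyword arguments"
```

In this report the mismatch is presumably between the cqlsh caller and the `get_num_processes` signature in copyutil; aligning the call with the signature resolves this class of error.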





[jira] [Updated] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11609:

Fix Version/s: 3.x

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Commented] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-20 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250594#comment-15250594
 ] 

Philip Thompson commented on CASSANDRA-11608:
-

I see now that the bad start is on line 216, not 182, which is why my fix had no effect.

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
> 'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 
> 32\ndtest: DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: 
> DEBUG: Testing node stoppage (query should fail).\ndtest: DEBUG: Retrying 
> read after timeout. Attempt #0\ndtest: DEBUG: Retrying read after timeout. 
> Attempt #1\ndtest: DEBUG: Retrying request after UE. Attempt #2\ndtest: 
> DEBUG: Retrying request after UE. Attempt #3\ndtest: DEBUG: Retrying request 
> after UE. Attempt #4\ndtest: DEBUG: Starting node 4 to replace node 3\ndtest: 
> DEBUG: Verifying querying works again.\ndtest: DEBUG: Verifying tokens 
> migrated sucessfully\ndtest: DEBUG: ('WARN  [main] 

[jira] [Updated] (CASSANDRA-10406) Nodetool supports to rebuild from specific ranges.

2016-04-20 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-10406:
---
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Committed, thanks!

> Nodetool supports to rebuild from specific ranges.
> --
>
> Key: CASSANDRA-10406
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10406
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.6
>
> Attachments: 0001-nodetool-rebuild-support-range-tokens.patch
>
>
> Add the 'nodetool rebuildrange' command, so that if `nodetool rebuild` 
> fails, we do not need to rebuild all the ranges and can instead rebuild only 
> the failed ones.
> Should be easily ported to all versions.





cassandra git commit: Add support to rebuild from specific range

2016-04-20 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk a0e8de99d -> 95d927d38


Add support to rebuild from specific range

patch by Dikang Gu; reviewed by yukim for CASSANDRA-10409


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/95d927d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/95d927d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/95d927d3

Branch: refs/heads/trunk
Commit: 95d927d38fbb13bfdcfc8e5a7475eb3e44082aaa
Parents: a0e8de9
Author: Dikang Gu 
Authored: Thu Apr 14 15:29:30 2016 -0500
Committer: Yuki Morishita 
Committed: Wed Apr 20 14:09:51 2016 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/service/StorageService.java   | 65 ++--
 .../cassandra/service/StorageServiceMBean.java  | 10 +++
 .../org/apache/cassandra/tools/NodeProbe.java   |  4 +-
 .../cassandra/tools/nodetool/Rebuild.java   | 22 ++-
 5 files changed, 79 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/95d927d3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ca679b2..6e3efb6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Add support to rebuild from specific range (CASSANDRA-10409)
  * Optimize the overlapping lookup by calculating all the
bounds in advance (CASSANDRA-11571)
  * Support json/yaml output in noetool tablestats (CASSANDRA-5977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/95d927d3/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 8390482..6051567 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -26,23 +26,8 @@ import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.EnumMap;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.LinkedHashMap;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.Map.Entry;
-import java.util.Set;
-import java.util.SortedMap;
-import java.util.TreeMap;
-import java.util.UUID;
 import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
@@ -52,6 +37,8 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.regex.MatchResult;
+import java.util.regex.Pattern;
 import javax.annotation.Nullable;
 import javax.management.JMX;
 import javax.management.MBeanServer;
@@ -1142,13 +1129,26 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public void rebuild(String sourceDc)
 {
+rebuild(sourceDc, null, null);
+}
+
+public void rebuild(String sourceDc, String keyspace, String tokens)
+{
 // check on going rebuild
 if (!isRebuilding.compareAndSet(false, true))
 {
 throw new IllegalStateException("Node is still rebuilding. Check 
nodetool netstats.");
 }
 
-logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : 
sourceDc);
+// check the arguments
+if (keyspace == null && tokens != null)
+{
+throw new IllegalArgumentException("Cannot specify tokens without 
keyspace.");
+}
+
+logger.info("rebuild from dc: {}, {}, {}", sourceDc == null ? "(any 
dc)" : sourceDc,
+keyspace == null ? "(All keyspaces)" : keyspace,
+tokens == null ? "(All tokens)" : tokens);
 
 try
 {
@@ -1164,8 +1164,35 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 if (sourceDc != null)
 streamer.addSourceFilter(new 
RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), 
sourceDc));
 
-for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
-streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+if (keyspace == null)
+{
+for (String keyspaceName : 

[jira] [Comment Edited] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-04-20 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250483#comment-15250483
 ] 

Tom Petracca edited comment on CASSANDRA-11623 at 4/20/16 7:07 PM:
---

A solution to this is to just estimate the size of the written file.
- 
[2.2|https://github.com/tpetracca/cassandra/commit/d9028ce6be8956279807b428ff55d38ae759b1de]
- 
[3.0|https://github.com/tpetracca/cassandra/commit/a1c8d32e9443536e58b1fac164981e8c01f30d9f]
- 
[3.5|https://github.com/tpetracca/cassandra/commit/08e1f26569339f74f146b073351e2ca7cf1ba5a7]
- 
[trunk|https://github.com/tpetracca/cassandra/commit/6abe3d9401b1fba00885389c7870884bda8b7d0f]
 - this was the only clean cherry-pick (from 3.5)


was (Author: tpetracca):
A solution to this is to just estimate the size of the written file.
https://github.com/tpetracca/cassandra/commit/d9028ce6be8956279807b428ff55d38ae759b1de

> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom Petracca
>Priority: Minor
> Attachments: compactiontask_profile.png
>
>
> Been doing some performance tuning and profiling of my cassandra cluster and 
> noticed that compaction speeds for my tables that I know to have very short 
> rows were going particularly slowly.  Profiling shows a ton of time being 
> spent in BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
> CompactionTask shows that the majority of time is being spent in lseek (called 
> by getOnDiskFilePointer), not in read or write.
> Going deeper, it looks like we call getOnDiskFilePointer for each row (sometimes 
> multiple times per row) in order to see if we've reached our expected sstable 
> size and should start a new writer.  This is pretty unnecessary.
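The proposed fix replaces per-row lseek-backed position queries with a cheap in-memory estimate. A sketch of the idea in Python, with hypothetical names (the real patch estimates the on-disk size inside BigTableWriter, accounting for compression), is:

```python
import tempfile

class EstimatingWriter:
    """Track bytes written ourselves instead of asking the OS per row."""

    def __init__(self, f):
        self.f = f
        self.estimated_pos = 0

    def write(self, data):
        self.f.write(data)
        self.estimated_pos += len(data)  # bump a counter; no syscall needed

    def on_disk_pointer(self):
        # Cheap: return the counter, instead of f.tell() -> lseek(2) per row.
        return self.estimated_pos

with tempfile.TemporaryFile() as f:
    w = EstimatingWriter(f)
    for row in (b"row-%d\n" % i for i in range(1000)):
        w.write(row)
    # The counter matches the real file position without per-row seeks.
    assert w.on_disk_pointer() == f.tell()
```

With compression the estimate and the true on-disk position diverge, which is why an estimate (rather than exact accounting) is acceptable here: it only decides when to roll over to a new sstable writer.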





[jira] [Updated] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11505:

   Resolution: Fixed
Fix Version/s: (was: 2.2.x)
   (was: 2.1.x)
   2.2.7
   2.1.14
   Status: Resolved  (was: Patch Available)

[~Stefania] thanks for the reminder.  The patch and tests both look great to me!

+1, committed as {{4389c9cfd86fb3f31a9419c44f0521604be3637b}} to 2.1 and merged 
up to 2.2.  Merges above that were no-ops, as expected.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.14, 2.2.7
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}





[3/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dab1d578
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dab1d578
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dab1d578

Branch: refs/heads/trunk
Commit: dab1d578d75151c597dc1e611b430ae99134f72c
Parents: 14f08e6 e865e39
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:57:29 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:57:29 2016 -0500

--

--




[1/4] cassandra git commit: cqlsh: Fix potential COPY deadlock

2016-04-20 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk a5e501f09 -> a0e8de99d


cqlsh: Fix potential COPY deadlock

This deadlock could occur when the parent process is terminating child
processes (partial backport of CASSANDRA-11320).

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-11505


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4389c9cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4389c9cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4389c9cf

Branch: refs/heads/trunk
Commit: 4389c9cfd86fb3f31a9419c44f0521604be3637b
Parents: 209ebd3
Author: Stefania Alborghetti 
Authored: Mon Apr 11 10:31:34 2016 +0800
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:52:08 2016 -0500

--
 CHANGES.txt|   2 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 91 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d3673..4a91a58 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.14
+ * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+   processes (CASSANDRA-11505)
  * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 28e08b1..12239d8 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -28,8 +28,8 @@ import random
 import re
 import struct
 import sys
-import time
 import threading
+import time
 import traceback
 
 from bisect import bisect_right
@@ -57,6 +57,7 @@ from sslhandling import ssl_settings
 
 PROFILE_ON = False
 STRACE_ON = False
+DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
@@ -70,6 +71,16 @@ def safe_normpath(fname):
 return os.path.normpath(os.path.expanduser(fname)) if fname else fname
 
 
+def printdebugmsg(msg):
+if DEBUG:
+printmsg(msg)
+
+
+def printmsg(msg, eol='\n'):
+sys.stdout.write(msg + eol)
+sys.stdout.flush()
+
+
 class OneWayChannel(object):
 """
 A one way pipe protected by two process level locks, one for reading and 
one for writing.
@@ -78,11 +89,49 @@ class OneWayChannel(object):
 self.reader, self.writer = mp.Pipe(duplex=False)
 self.rlock = mp.Lock()
 self.wlock = mp.Lock()
+self.feeding_thread = None
+self.pending_messages = None
+
+def init_feeding_thread(self):
+"""
+Initialize a thread that fetches messages from a queue and sends them 
to the channel.
+We initialize the feeding thread lazily to avoid the fork(), since the 
channels are passed to child processes.
+"""
+if self.feeding_thread is not None or self.pending_messages is not 
None:
+raise RuntimeError("Feeding thread already initialized")
+
+self.pending_messages = Queue()
+
+def feed():
+send = self._send
+pending_messages = self.pending_messages
+
+while True:
+try:
+msg = pending_messages.get()
+send(msg)
+except Exception, e:
+printmsg('%s: %s' % (e.__class__.__name__, e.message))
+
+feeding_thread = threading.Thread(target=feed)
+feeding_thread.setDaemon(True)
+feeding_thread.start()
+
+self.feeding_thread = feeding_thread
 
 def send(self, obj):
+if self.feeding_thread is None:
+self.init_feeding_thread()
+
+self.pending_messages.put(obj)
+
+def _send(self, obj):
 with self.wlock:
 self.writer.send(obj)
 
+def num_pending(self):
+return self.pending_messages.qsize() if self.pending_messages else 0
+
 def recv(self):
 with self.rlock:
 return self.reader.recv()
@@ -157,8 +206,15 @@ class CopyTask(object):
 self.fname = safe_normpath(fname)
 self.protocol_version = protocol_version
 self.config_file = config_file
-# do not display messages when exporting to STDOUT
-self.printmsg = 
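For readers skimming the diff above, the shape of the change to OneWayChannel can be sketched in isolation: sends are queued and drained by a single daemon thread, started lazily so that fork()ed worker processes never inherit it. This is a simplified sketch with abbreviated names, not the cqlsh code itself:

```python
import multiprocessing as mp
import threading
from queue import Queue  # Queue.Queue in the Python 2 original

class OneWayChannel:
    """A one-way pipe; sends go through a lazily started feeder thread."""

    def __init__(self):
        self.reader, self.writer = mp.Pipe(duplex=False)
        self.wlock = mp.Lock()
        self.pending = None
        self.feeder = None

    def _start_feeder(self):
        # Lazy start: channels are handed to child processes at fork time,
        # and a thread created before fork() would not survive into children.
        self.pending = Queue()

        def feed():
            while True:
                msg = self.pending.get()
                with self.wlock:           # serialize concurrent writers
                    self.writer.send(msg)

        self.feeder = threading.Thread(target=feed, daemon=True)
        self.feeder.start()

    def send(self, obj):
        if self.feeder is None:
            self._start_feeder()
        self.pending.put(obj)              # caller never blocks on the pipe

    def recv(self):
        return self.reader.recv()
```

The original also guards recv() with a read-side lock and exposes the queue depth via num_pending(); both are omitted here for brevity.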

[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dab1d578
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dab1d578
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dab1d578

Branch: refs/heads/cassandra-3.0
Commit: dab1d578d75151c597dc1e611b430ae99134f72c
Parents: 14f08e6 e865e39
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:57:29 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:57:29 2016 -0500

--

--




[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0e8de99
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0e8de99
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0e8de99

Branch: refs/heads/trunk
Commit: a0e8de99db87c76e60bc9a5241d8bbd6877acee2
Parents: a5e501f dab1d57
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:57:42 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:57:42 2016 -0500

--

--




[1/3] cassandra git commit: cqlsh: Fix potential COPY deadlock

2016-04-20 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 14f08e6f6 -> dab1d578d


cqlsh: Fix potential COPY deadlock

This deadlock could occur when the parent process is terminating child
processes (partial backport of CASSANDRA-11320).

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-11505


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4389c9cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4389c9cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4389c9cf

Branch: refs/heads/cassandra-3.0
Commit: 4389c9cfd86fb3f31a9419c44f0521604be3637b
Parents: 209ebd3
Author: Stefania Alborghetti 
Authored: Mon Apr 11 10:31:34 2016 +0800
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:52:08 2016 -0500

--
 CHANGES.txt|   2 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 91 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d3673..4a91a58 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.14
+ * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+   processes (CASSANDRA-11505)
  * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 28e08b1..12239d8 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -28,8 +28,8 @@ import random
 import re
 import struct
 import sys
-import time
 import threading
+import time
 import traceback
 
 from bisect import bisect_right
@@ -57,6 +57,7 @@ from sslhandling import ssl_settings
 
 PROFILE_ON = False
 STRACE_ON = False
+DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
@@ -70,6 +71,16 @@ def safe_normpath(fname):
 return os.path.normpath(os.path.expanduser(fname)) if fname else fname
 
 
+def printdebugmsg(msg):
+if DEBUG:
+printmsg(msg)
+
+
+def printmsg(msg, eol='\n'):
+sys.stdout.write(msg + eol)
+sys.stdout.flush()
+
+
 class OneWayChannel(object):
 """
 A one way pipe protected by two process level locks, one for reading and one for writing.
@@ -78,11 +89,49 @@ class OneWayChannel(object):
 self.reader, self.writer = mp.Pipe(duplex=False)
 self.rlock = mp.Lock()
 self.wlock = mp.Lock()
+self.feeding_thread = None
+self.pending_messages = None
+
+def init_feeding_thread(self):
+"""
+Initialize a thread that fetches messages from a queue and sends them to the channel.
+We initialize the feeding thread lazily to avoid the fork(), since the channels are passed to child processes.
+"""
+if self.feeding_thread is not None or self.pending_messages is not None:
+raise RuntimeError("Feeding thread already initialized")
+
+self.pending_messages = Queue()
+
+def feed():
+send = self._send
+pending_messages = self.pending_messages
+
+while True:
+try:
+msg = pending_messages.get()
+send(msg)
+except Exception, e:
+printmsg('%s: %s' % (e.__class__.__name__, e.message))
+
+feeding_thread = threading.Thread(target=feed)
+feeding_thread.setDaemon(True)
+feeding_thread.start()
+
+self.feeding_thread = feeding_thread
 
 def send(self, obj):
+if self.feeding_thread is None:
+self.init_feeding_thread()
+
+self.pending_messages.put(obj)
+
+def _send(self, obj):
 with self.wlock:
 self.writer.send(obj)
 
+def num_pending(self):
+return self.pending_messages.qsize() if self.pending_messages else 0
+
 def recv(self):
 with self.rlock:
 return self.reader.recv()
@@ -157,8 +206,15 @@ class CopyTask(object):
 self.fname = safe_normpath(fname)
 self.protocol_version = protocol_version
 self.config_file = config_file
-# do not display messages when exporting to STDOUT
-
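
The patch above removes the deadlock by making `OneWayChannel.send()` non-blocking: callers only enqueue to a local queue, and a lazily started daemon thread drains that queue into the lock-protected pipe. The thread is created lazily because the channels are handed to child processes via fork(), and a thread must not exist before the fork happens. A minimal Python 3 sketch of that pattern (illustrative only, not the actual cqlshlib code, which is Python 2):

```python
# Sketch of the feeding-thread pattern from the COPY deadlock fix:
# send() never takes the write lock itself; a daemon thread does.
import multiprocessing as mp
import queue
import threading


class OneWayChannel:
    def __init__(self):
        self.reader, self.writer = mp.Pipe(duplex=False)
        self.rlock = mp.Lock()
        self.wlock = mp.Lock()
        self.pending_messages = None  # created lazily, after any fork()
        self.feeding_thread = None

    def _init_feeding_thread(self):
        # Lazy init: the channel objects are passed to child processes,
        # so the thread must only exist in the process that sends.
        self.pending_messages = queue.Queue()

        def feed():
            while True:
                msg = self.pending_messages.get()
                with self.wlock:          # only this thread blocks on wlock
                    self.writer.send(msg)

        self.feeding_thread = threading.Thread(target=feed, daemon=True)
        self.feeding_thread.start()

    def send(self, obj):
        # Non-blocking for the caller: just enqueue.
        if self.feeding_thread is None:
            self._init_feeding_thread()
        self.pending_messages.put(obj)

    def recv(self):
        with self.rlock:
            return self.reader.recv()
```

In the parent process this means terminating children can never wedge a sender that is waiting on the pipe's write lock; the worst case is messages sitting in the in-process queue.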

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e865e396
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e865e396
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e865e396

Branch: refs/heads/cassandra-2.2
Commit: e865e396cdd6281f7eaac9cde8b5528547cb8e98
Parents: 60997c2 4389c9c
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:55:28 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:55:28 2016 -0500

--
 CHANGES.txt|   3 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 92 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e865e396/CHANGES.txt
--
diff --cc CHANGES.txt
index baaf227,4a91a58..cf07e80
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,12 +1,61 @@@
 -2.1.14
 +2.2.7
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
++Merged from 2.1:
+  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
   * Checking if an unlogged batch is local is 

[2/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e865e396
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e865e396
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e865e396

Branch: refs/heads/trunk
Commit: e865e396cdd6281f7eaac9cde8b5528547cb8e98
Parents: 60997c2 4389c9c
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:55:28 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:55:28 2016 -0500

--
 CHANGES.txt|   3 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 92 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e865e396/CHANGES.txt
--
diff --cc CHANGES.txt
index baaf227,4a91a58..cf07e80
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,12 +1,61 @@@
 -2.1.14
 +2.2.7
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
++Merged from 2.1:
+  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
   * Checking if an unlogged batch is local is inefficient 

[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-20 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e865e396
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e865e396
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e865e396

Branch: refs/heads/cassandra-3.0
Commit: e865e396cdd6281f7eaac9cde8b5528547cb8e98
Parents: 60997c2 4389c9c
Author: Tyler Hobbs 
Authored: Wed Apr 20 13:55:28 2016 -0500
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:55:28 2016 -0500

--
 CHANGES.txt|   3 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 92 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e865e396/CHANGES.txt
--
diff --cc CHANGES.txt
index baaf227,4a91a58..cf07e80
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,12 +1,61 @@@
 -2.1.14
 +2.2.7
 + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
++Merged from 2.1:
+  * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+processes (CASSANDRA-11505)
 - * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
 +
 +
 +2.2.6
 + * Allow only DISTINCT queries with partition keys restrictions 
(CASSANDRA-11339)
 + * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Exception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
   * Checking if an unlogged batch is local is 

[1/2] cassandra git commit: cqlsh: Fix potential COPY deadlock

2016-04-20 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 60997c2de -> e865e396c


cqlsh: Fix potential COPY deadlock

This deadlock could occur when the parent process is terminating child
processes (partial backport of CASSANDRA-11320).

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-11505


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4389c9cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4389c9cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4389c9cf

Branch: refs/heads/cassandra-2.2
Commit: 4389c9cfd86fb3f31a9419c44f0521604be3637b
Parents: 209ebd3
Author: Stefania Alborghetti 
Authored: Mon Apr 11 10:31:34 2016 +0800
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:52:08 2016 -0500

--
 CHANGES.txt|   2 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 91 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d3673..4a91a58 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.14
+ * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+   processes (CASSANDRA-11505)
  * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 28e08b1..12239d8 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -28,8 +28,8 @@ import random
 import re
 import struct
 import sys
-import time
 import threading
+import time
 import traceback
 
 from bisect import bisect_right
@@ -57,6 +57,7 @@ from sslhandling import ssl_settings
 
 PROFILE_ON = False
 STRACE_ON = False
+DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
@@ -70,6 +71,16 @@ def safe_normpath(fname):
 return os.path.normpath(os.path.expanduser(fname)) if fname else fname
 
 
+def printdebugmsg(msg):
+if DEBUG:
+printmsg(msg)
+
+
+def printmsg(msg, eol='\n'):
+sys.stdout.write(msg + eol)
+sys.stdout.flush()
+
+
 class OneWayChannel(object):
 """
 A one way pipe protected by two process level locks, one for reading and one for writing.
@@ -78,11 +89,49 @@ class OneWayChannel(object):
 self.reader, self.writer = mp.Pipe(duplex=False)
 self.rlock = mp.Lock()
 self.wlock = mp.Lock()
+self.feeding_thread = None
+self.pending_messages = None
+
+def init_feeding_thread(self):
+"""
+Initialize a thread that fetches messages from a queue and sends them to the channel.
+We initialize the feeding thread lazily to avoid the fork(), since the channels are passed to child processes.
+"""
+if self.feeding_thread is not None or self.pending_messages is not None:
+raise RuntimeError("Feeding thread already initialized")
+
+self.pending_messages = Queue()
+
+def feed():
+send = self._send
+pending_messages = self.pending_messages
+
+while True:
+try:
+msg = pending_messages.get()
+send(msg)
+except Exception, e:
+printmsg('%s: %s' % (e.__class__.__name__, e.message))
+
+feeding_thread = threading.Thread(target=feed)
+feeding_thread.setDaemon(True)
+feeding_thread.start()
+
+self.feeding_thread = feeding_thread
 
 def send(self, obj):
+if self.feeding_thread is None:
+self.init_feeding_thread()
+
+self.pending_messages.put(obj)
+
+def _send(self, obj):
 with self.wlock:
 self.writer.send(obj)
 
+def num_pending(self):
+return self.pending_messages.qsize() if self.pending_messages else 0
+
 def recv(self):
 with self.rlock:
 return self.reader.recv()
@@ -157,8 +206,15 @@ class CopyTask(object):
 self.fname = safe_normpath(fname)
 self.protocol_version = protocol_version
 self.config_file = config_file
-# do not display messages when exporting to STDOUT
-

cassandra git commit: cqlsh: Fix potential COPY deadlock

2016-04-20 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 209ebd380 -> 4389c9cfd


cqlsh: Fix potential COPY deadlock

This deadlock could occur when the parent process is terminating child
processes (partial backport of CASSANDRA-11320).

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-11505


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4389c9cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4389c9cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4389c9cf

Branch: refs/heads/cassandra-2.1
Commit: 4389c9cfd86fb3f31a9419c44f0521604be3637b
Parents: 209ebd3
Author: Stefania Alborghetti 
Authored: Mon Apr 11 10:31:34 2016 +0800
Committer: Tyler Hobbs 
Committed: Wed Apr 20 13:52:08 2016 -0500

--
 CHANGES.txt|   2 +
 pylib/cqlshlib/copyutil.py | 171 +---
 2 files changed, 91 insertions(+), 82 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76d3673..4a91a58 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.14
+ * (cqlsh) Fix potential COPY deadlock when parent process is terminating child
+   processes (CASSANDRA-11505)
  * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4389c9cf/pylib/cqlshlib/copyutil.py
--
diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index 28e08b1..12239d8 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -28,8 +28,8 @@ import random
 import re
 import struct
 import sys
-import time
 import threading
+import time
 import traceback
 
 from bisect import bisect_right
@@ -57,6 +57,7 @@ from sslhandling import ssl_settings
 
 PROFILE_ON = False
 STRACE_ON = False
+DEBUG = False  # This may be set to True when initializing the task
 IS_LINUX = platform.system() == 'Linux'
 
 CopyOptions = namedtuple('CopyOptions', 'copy dialect unrecognized')
@@ -70,6 +71,16 @@ def safe_normpath(fname):
 return os.path.normpath(os.path.expanduser(fname)) if fname else fname
 
 
+def printdebugmsg(msg):
+if DEBUG:
+printmsg(msg)
+
+
+def printmsg(msg, eol='\n'):
+sys.stdout.write(msg + eol)
+sys.stdout.flush()
+
+
 class OneWayChannel(object):
 """
 A one way pipe protected by two process level locks, one for reading and one for writing.
@@ -78,11 +89,49 @@ class OneWayChannel(object):
 self.reader, self.writer = mp.Pipe(duplex=False)
 self.rlock = mp.Lock()
 self.wlock = mp.Lock()
+self.feeding_thread = None
+self.pending_messages = None
+
+def init_feeding_thread(self):
+"""
+Initialize a thread that fetches messages from a queue and sends them to the channel.
+We initialize the feeding thread lazily to avoid the fork(), since the channels are passed to child processes.
+"""
+if self.feeding_thread is not None or self.pending_messages is not None:
+raise RuntimeError("Feeding thread already initialized")
+
+self.pending_messages = Queue()
+
+def feed():
+send = self._send
+pending_messages = self.pending_messages
+
+while True:
+try:
+msg = pending_messages.get()
+send(msg)
+except Exception, e:
+printmsg('%s: %s' % (e.__class__.__name__, e.message))
+
+feeding_thread = threading.Thread(target=feed)
+feeding_thread.setDaemon(True)
+feeding_thread.start()
+
+self.feeding_thread = feeding_thread
 
 def send(self, obj):
+if self.feeding_thread is None:
+self.init_feeding_thread()
+
+self.pending_messages.put(obj)
+
+def _send(self, obj):
 with self.wlock:
 self.writer.send(obj)
 
+def num_pending(self):
+return self.pending_messages.qsize() if self.pending_messages else 0
+
 def recv(self):
 with self.rlock:
 return self.reader.recv()
@@ -157,8 +206,15 @@ class CopyTask(object):
 self.fname = safe_normpath(fname)
 self.protocol_version = protocol_version
 self.config_file = config_file
-# do not display messages when exporting to STDOUT
-

[jira] [Comment Edited] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-20 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250495#comment-15250495
 ] 

Philip Thompson edited comment on CASSANDRA-11608 at 4/20/16 6:50 PM:
--

Looking at the logs, it seems node4 was added before node3 was fully recognized 
as down. I tried setting {{wait-other-notice=True}} on the {{node3.stop()}} 
call, but that didn't fix the issue at all. I haven't checked the logs for 
those runs yet.

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/74/testReport/
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/75/testReport/


was (Author: philipthompson):
Looking at the logs, it seems node4 was added before node3 was fully recognized 
as down. I tried setting {{wait-other-notice=True}} on the {{node3.stop()}} 
call, but that didn't fix the issue at all. I haven't checked the logs for 
those runs yet.

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n 

[jira] [Commented] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250492#comment-15250492
 ] 

Tyler Hobbs commented on CASSANDRA-11609:
-

This was probably triggered by CASSANDRA-7423, I'll look into it.

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-20 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250495#comment-15250495
 ] 

Philip Thompson commented on CASSANDRA-11608:
-

Looking at the logs, it seems node4 was added before node3 was fully recognized 
as down. I tried setting {{wait_other_notice=True}} on the {{node3.stop()}} 
call, but that didn't fix the issue at all. I haven't checked the logs for 
those runs yet.

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
> 'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 
> 32\ndtest: DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: 
> DEBUG: Testing node stoppage (query should fail).\ndtest: DEBUG: Retrying 
> read after timeout. Attempt #0\ndtest: DEBUG: Retrying read after timeout. 
> Attempt #1\ndtest: DEBUG: Retrying request after UE. Attempt #2\ndtest: 
> DEBUG: Retrying request after UE. Attempt #3\ndtest: DEBUG: Retrying request 
> after UE. Attempt #4\ndtest: DEBUG: Starting 

[jira] [Assigned] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-11609:
---

Assignee: Tyler Hobbs

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Tyler Hobbs
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Commented] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-04-20 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250483#comment-15250483
 ] 

Tom Petracca commented on CASSANDRA-11623:
--

A solution to this is to just estimate the size of the written file.
https://github.com/tpetracca/cassandra/commit/d9028ce6be8956279807b428ff55d38ae759b1de

> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom Petracca
>Priority: Minor
> Attachments: compactiontask_profile.png
>
>
> Been doing some performance tuning and profiling of my cassandra cluster and 
> noticed that compaction speeds for my tables that I know to have very short 
> rows were going particularly slowly.  Profiling shows a ton of time being 
> spent in BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
> CompactionTask shows that a majority of time is being spent in lseek (called by 
> getOnDiskFilePointer), and not read or write.
> Going deeper it looks like we call getOnDiskFilePointer for each row (sometimes 
> multiple times per row) in order to see if we've reached our expected sstable 
> size and should start a new writer.  This is pretty unnecessary.





[jira] [Created] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-04-20 Thread Tom Petracca (JIRA)
Tom Petracca created CASSANDRA-11623:


 Summary: Compactions w/ Short Rows Spending Time in 
getOnDiskFilePointer
 Key: CASSANDRA-11623
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tom Petracca
Priority: Minor
 Attachments: compactiontask_profile.png

Been doing some performance tuning and profiling of my cassandra cluster and 
noticed that compaction speeds for my tables that I know to have very short 
rows were going particularly slowly.  Profiling shows a ton of time being spent 
in BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
CompactionTask shows that a majority of time is being spent in lseek (called by 
getOnDiskFilePointer), and not read or write.

Going deeper it looks like we call getOnDiskFilePointer for each row (sometimes 
multiple times per row) in order to see if we've reached our expected sstable 
size and should start a new writer.  This is pretty unnecessary.
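One way to remove the per-row lseek is to estimate the on-disk size instead of querying it: track the raw bytes handed to the writer plus a measured compression ratio, and re-sync with the real file position only occasionally. A minimal sketch of that idea (all class, field, and method names here are hypothetical, not Cassandra's actual BigTableWriter):

```java
// Sketch of the size-estimation idea (names are illustrative): the
// per-row sstable-size check uses a cheap estimate derived from bytes
// written and an observed on-disk/raw compression ratio; an exact
// file-position lookup happens only every SYNC_EVERY rows.
class EstimatingWriter {
    private static final long SYNC_EVERY = 10_000; // rows between exact checks
    private long rawBytesWritten = 0;      // uncompressed bytes accepted so far
    private double compressionRatio = 1.0; // observed on-disk / raw ratio
    private long rowsSinceSync = 0;

    void onRowWritten(long rawBytes) {
        rawBytesWritten += rawBytes;
        rowsSinceSync++;
    }

    // Cheap per-row estimate used to decide whether to roll a new writer.
    long estimatedOnDiskSize() {
        return (long) (rawBytesWritten * compressionRatio);
    }

    boolean dueForExactCheck() {
        return rowsSinceSync >= SYNC_EVERY;
    }

    // Called with the result of an occasional real position lookup
    // (i.e. what getOnDiskFilePointer would return).
    void syncWithExactPosition(long actualOnDiskBytes) {
        if (rawBytesWritten > 0)
            compressionRatio = (double) actualOnDiskBytes / rawBytesWritten;
        rowsSinceSync = 0;
    }
}
```

With short rows this trades a syscall per row for simple arithmetic, at the cost of slightly imprecise sstable-size cutoffs between syncs.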





[jira] [Commented] (CASSANDRA-10134) Always require replace_address to replace existing address

2016-04-20 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250341#comment-15250341
 ] 

Joel Knighton commented on CASSANDRA-10134:
---

This looks good to me for the most part. CI is clean as expected, and I like 
the implementation choices.

A few very small nits/suggestions:
* In {{GossipDigestSynVerbHandler}}, the log "Ignoring GossipDigestSynMessage 
because currently in gossip shadow round" would be more descriptive as 
"Ignoring non-empty GossipDigestSynMessage..."
* In {{GossipDigestSynVerbHandler}}, the log "Received a shadow round syn from 
{}" never includes the source as a second argument.
* In {{StorageService.prepareToJoin}}, we call 
{{MessagingService.instance().listen()}} down either branch of {{if 
(replacing)}} (in either {{prepareReplacementInfo}} or 
{{checkForEndpointCollision}}, since both use a shadow round). This makes the 
call at the end of {{prepareToJoin}} always redundant - it seems clearer to me 
to move this further up in {{prepareToJoin}} and remove the call to 
{{MessagingService.instance().listen()}} from {{Gossiper.doShadowRound}}.
* Minor whitespace fix in {{StorageService.prepareReplacementInfo}} at 
{{(replaceAddress)== null}}.

I'm not sure yet about the MV change. In {{CassandraDaemon}}, skipping 
{{view.build()}} and {{view.updateDefinition}} seems safe to me, since the 
definitions will be up-to-date on construction and we submit a build of all 
views after {{StorageService}} initialization. It seems to me, though, that MV 
builds that would have been submitted while gossip is stopped (via JMX or 
nodetool) will never be submitted, which feels like it might surprise someone 
down the road. Am I missing something here? It might be worthwhile to just 
reload all 
ViewManagers in {{StorageService.startGossiping}}. At the same time, I have a 
hard time thinking of a scenario in which this will actually affect someone.

> Always require replace_address to replace existing address
> --
>
> Key: CASSANDRA-10134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> Normally, when a node is started from a clean state with the same address as 
> an existing down node, it will fail to start with an error like this:
> {noformat}
> ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
> encountered during startup
> java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
>   at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
> [main/:na]
> {noformat}
> However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
> list, it will not throw this error and will start normally.  The new node 
> then takes over the host ID of the old node (even if the tokens are 
> different), and the only message you will see is a warning in the other 
> nodes' logs:
> {noformat}
> logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, 
> hostId);
> {noformat}
> This could cause an operator to accidentally wipe out the token information 
> for a down node without replacing it.  To fix this, we should check for an 
> endpoint collision even if {{auto_bootstrap}} is false or the node is a seed.
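The proposed behavior amounts to running the collision check unconditionally, with only an explicit replace_address bypassing it. A rough sketch of that guard (hypothetical method and parameter names, not StorageService's real signature):

```java
import java.util.Set;

// Illustrative only: the endpoint-collision check runs regardless of
// auto_bootstrap or seed status; only replace_address bypasses it.
class JoinGuard {
    static void checkJoin(String myAddress, String replaceAddress,
                          Set<String> existingEndpoints) {
        boolean replacing = replaceAddress != null;
        // Before the fix, this guard was skipped when auto_bootstrap was
        // false or the node appeared in its own seed list.
        if (!replacing && existingEndpoints.contains(myAddress))
            throw new RuntimeException("A node with address " + myAddress
                + " already exists, cancelling join. Use"
                + " cassandra.replace_address if you want to replace this node.");
    }
}
```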





[jira] [Commented] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250296#comment-15250296
 ] 

Russ Hatch commented on CASSANDRA-11609:


CASSANDRA-11613 started happening recently as well (and involves upgrading UDT 
schema) so I wonder if they could have the same cause.

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>
> This was found in the upgrades user_types_test.
> Can also be repro'd with ccm.
> To repro using ccm:
> Create a 1 node cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Commented] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster

2016-04-20 Thread Eduard Tudenhoefner (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250280#comment-15250280
 ] 

Eduard Tudenhoefner commented on CASSANDRA-11615:
-

It was deadlocked and eventually failed with 
{code}
java.lang.RuntimeException: Timed out waiting for a timer thread - seems one 
got stuck. Check GC/Heap size
at org.apache.cassandra.stress.util.Timing.snap(Timing.java:98)
at 
org.apache.cassandra.stress.StressMetrics.update(StressMetrics.java:156)
at 
org.apache.cassandra.stress.StressMetrics.access$300(StressMetrics.java:37)
at 
org.apache.cassandra.stress.StressMetrics$2.run(StressMetrics.java:104)
at java.lang.Thread.run(Thread.java:745)
{code}

Yeah ok no problem, I can make it an option.

> cassandra-stress blocks when connecting to a big cluster
> 
>
> Key: CASSANDRA-11615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11615
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x
>
> Attachments: 11615-3.0.patch
>
>
> I had a *100* node cluster and was running 
> {code}
> cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' 
> 'limit=1000/s'
> {code}
> Based on the thread dump it looks like it's been blocked at 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96
> {code}
> "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting 
> for monitor entry [0x7f36cc788000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting 
> for monitor entry [0x7f36cc889000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> {code}
> I was trying the same with a smaller cluster (50 nodes) and it was 
> working fine.
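The contention in the thread dump comes from a synchronized prepare over one shared map, which serializes every consumer thread behind a single monitor. A hedged sketch of a lock-free alternative (generic placeholder types, not the actual driver API): {{ConcurrentHashMap.computeIfAbsent}} blocks only threads racing to prepare the same query string.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch only: a prepared-statement cache without a global monitor.
// S stands in for the driver's PreparedStatement; prepareFn for the
// session's prepare call.
class StatementCache<S> {
    private final ConcurrentHashMap<String, S> cache = new ConcurrentHashMap<>();
    private final Function<String, S> prepareFn;

    StatementCache(Function<String, S> prepareFn) {
        this.prepareFn = prepareFn;
    }

    // computeIfAbsent prepares each distinct query at most once; threads
    // contend only on the bin for that key, not on the whole map.
    S prepare(String query) {
        return cache.computeIfAbsent(query, prepareFn);
    }
}
```

With 20 stress threads issuing the same predefined query, only the first caller pays the prepare round-trip; the rest read the cached entry without blocking each other.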



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250263#comment-15250263
 ] 

Russ Hatch commented on CASSANDRA-11609:


[~slebresne] Yes, it should repro easily with the steps above.

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>
> This was found in the upgrades user_types_test.
> Can be repro'd with ccm.
> Create a 1 node ccm cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Updated] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11609:
---
Description: 
This was found in the upgrades user_types_test.

Can also be repro'd with ccm.

Create a 1 node ccm cluster on 2.2.x

Create this schema:
{noformat}
create keyspace test2 with replication = {'class':'SimpleStrategy', 
'replication_factor':1};
use test2;
CREATE TYPE address (
 street text,
 city text,
 zip_code int,
 phones set<text>
 );
CREATE TYPE fullname (
 firstname text,
 lastname text
 );
CREATE TABLE users (
 id uuid PRIMARY KEY,
 name frozen<fullname>,
 addresses map<text, frozen<address>>
 );
{noformat}

Upgrade the single node to trunk, attempt to start the node up. Start will fail 
with this exception:
{noformat}
ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
encountered during startup
org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
not allowed inside collections: map
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
 ~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
 ~[main/:na]
at 
org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
 ~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
[main/:na]
{noformat}

  was:
This was found in the upgrades user_types_test.

Can be repro'd with ccm.

Create a 1 node ccm cluster on 2.2.x

Create this schema:
{noformat}
create keyspace test2 with replication = {'class':'SimpleStrategy', 
'replication_factor':1};
use test2;
CREATE TYPE address (
 street text,
 city text,
 zip_code int,
 phones set<text>
 );
CREATE TYPE fullname (
 firstname text,
 lastname text
 );
CREATE TABLE users (
 id uuid PRIMARY KEY,
 name frozen<fullname>,
 addresses map<text, frozen<address>>
 );
{noformat}

Upgrade the single node to trunk, attempt to start the node up. Start will fail 
with this exception:
{noformat}
ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
encountered during startup
org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
not allowed inside collections: map
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
 ~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
 ~[main/:na]
at 
org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960) 
~[main/:na]
at 

[jira] [Updated] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11609:
---
Description: 
This was found in the upgrades user_types_test.

Can also be repro'd with ccm.

To repro using ccm:

Create a 1 node cluster on 2.2.x

Create this schema:
{noformat}
create keyspace test2 with replication = {'class':'SimpleStrategy', 
'replication_factor':1};
use test2;
CREATE TYPE address (
 street text,
 city text,
 zip_code int,
 phones set<text>
 );
CREATE TYPE fullname (
 firstname text,
 lastname text
 );
CREATE TABLE users (
 id uuid PRIMARY KEY,
 name frozen<fullname>,
 addresses map<text, frozen<address>>
 );
{noformat}

Upgrade the single node to trunk, attempt to start the node up. Start will fail 
with this exception:
{noformat}
ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
encountered during startup
org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
not allowed inside collections: map<text, address>
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
 ~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
 ~[main/:na]
at 
org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
 ~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
[main/:na]
{noformat}
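The startup failure is the CQL rule that user-defined types nested inside collections must be frozen, applied to a column whose frozen marker was lost during the schema upgrade. A minimal illustrative model of that rule (hypothetical Python types, not Cassandra's actual `CQL3Type` code):

```python
# Illustrative model of the CQL rule enforced by
# CQL3Type$Raw$RawCollection.throwNestedNonFrozenError: a non-frozen UDT
# may not appear inside a collection type.

class UDT:
    def __init__(self, name, frozen=False):
        self.name = name
        self.frozen = frozen

class MapType:
    def __init__(self, key_type, value_type):
        self.key_type = key_type
        self.value_type = value_type

def validate(cql_type):
    """Raise ValueError when a non-frozen UDT is nested in a collection."""
    if isinstance(cql_type, MapType):
        for inner in (cql_type.key_type, cql_type.value_type):
            if isinstance(inner, UDT) and not inner.frozen:
                raise ValueError(
                    "Non-frozen UDTs are not allowed inside collections: map")
    return cql_type

# As declared on 2.2.x: map<text, frozen<address>> -- passes.
validate(MapType("text", UDT("address", frozen=True)))

# As reloaded after the upgrade, with frozenness lost -- fails at startup.
try:
    validate(MapType("text", UDT("address", frozen=False)))
except ValueError as e:
    print(e)
```

If the upgrade path drops the frozen flag while rewriting schema tables, this check fires on the first schema load, which matches the reported stack trace.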

  was:
This was found in the upgrades user_types_test.

Can also be repro'd with ccm.

Create a 1 node ccm cluster on 2.2.x

Create this schema:
{noformat}
create keyspace test2 with replication = {'class':'SimpleStrategy', 
'replication_factor':1};
use test2;
CREATE TYPE address (
 street text,
 city text,
 zip_code int,
 phones set<text>
 );
CREATE TYPE fullname (
 firstname text,
 lastname text
 );
CREATE TABLE users (
 id uuid PRIMARY KEY,
 name frozen<fullname>,
 addresses map<text, frozen<address>>
 );
{noformat}

Upgrade the single node to trunk, attempt to start the node up. Start will fail 
with this exception:
{noformat}
ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
encountered during startup
org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
not allowed inside collections: map<text, address>
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
 ~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
 ~[main/:na]
at 
org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960) 
~[main/:na]
at 

[jira] [Updated] (CASSANDRA-11170) Uneven load can be created by cross DC mutation propagations, as remote forwarding node is not randomly picked

2016-04-20 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11170:
-
Summary: Uneven load can be created by cross DC mutation propagations, as 
remote forwarding node is not randomly picked  (was: Uneven load can be created 
by cross DC mutation propagations, as remote coordinator is not randomly picked)

> Uneven load can be created by cross DC mutation propagations, as remote 
> forwarding node is not randomly picked
> --
>
> Key: CASSANDRA-11170
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11170
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Wei Deng
>
> I was looking at the o.a.c.service.StorageProxy code and realized that it 
> seems to be always picking the first IP in the remote DC target list as the 
> destination, whenever it needs to send the mutation to a remote DC. See these 
> lines in the code:
> https://github.com/apache/cassandra/blob/1944bf507d66b5c103c136319caeb4a9e3767a69/src/java/org/apache/cassandra/service/StorageProxy.java#L1280-L1301
> This could cause one node in the remote DC receiving more mutation messages 
> than the other nodes, and hence uneven workload distribution.
> A trivial test (with TRACE logging level enabled) on a 3+3 node cluster 
> proved the problem, see the system.log entries below:
> {code}
> INFO  [RMI TCP Connection(18)-54.173.227.52] 2016-02-13 09:54:55,948  
> StorageService.java:3353 - set log level to TRACE for classes under 
> 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 
> 'TRACE' then the logger couldn't parse 'TRACE')
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,148  StorageProxy.java:1284 - 
> Adding FWD message to 8996@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1284 - 
> Adding FWD message to 8997@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:15,149  StorageProxy.java:1289 - 
> Sending message to 8998@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,939  StorageProxy.java:1284 - 
> Adding FWD message to 9032@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,940  StorageProxy.java:1284 - 
> Adding FWD message to 9033@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:22,941  StorageProxy.java:1289 - 
> Sending message to 9034@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,975  StorageProxy.java:1284 - 
> Adding FWD message to 9064@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,976  StorageProxy.java:1284 - 
> Adding FWD message to 9065@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:28,977  StorageProxy.java:1289 - 
> Sending message to 9066@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,464  StorageProxy.java:1284 - 
> Adding FWD message to 9094@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,465  StorageProxy.java:1284 - 
> Adding FWD message to 9095@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:33,478  StorageProxy.java:1289 - 
> Sending message to 9096@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,243  StorageProxy.java:1284 - 
> Adding FWD message to 9121@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1284 - 
> Adding FWD message to 9122@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:39,244  StorageProxy.java:1289 - 
> Sending message to 9123@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,248  StorageProxy.java:1284 - 
> Adding FWD message to 9145@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1284 - 
> Adding FWD message to 9146@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:44,249  StorageProxy.java:1289 - 
> Sending message to 9147@/54.183.209.219
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,731  StorageProxy.java:1284 - 
> Adding FWD message to 9170@/52.53.215.74
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,734  StorageProxy.java:1284 - 
> Adding FWD message to 9171@/54.183.23.201
> TRACE [SharedPool-Worker-1] 2016-02-13 09:55:49,735  StorageProxy.java:1289 - 
> Sending message to 9172@/54.183.209.219
> INFO  [RMI TCP Connection(22)-54.173.227.52] 2016-02-13 09:56:19,545  
> StorageService.java:3353 - set log level to INFO for classes under 
> 'org.apache.cassandra.service.StorageProxy' (if the level doesn't look like 
> 'INFO' then the logger couldn't parse 'INFO')
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11492) Crash during CREATE KEYSPACE immediately after drop

2016-04-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250223#comment-15250223
 ] 

Sylvain Lebresne commented on CASSANDRA-11492:
--

bq. This is actually on a single-node cluster, so I don't think it's a race 
condition.

Interesting. Then I admit having no particular idea on what the problem could 
be. Please do keep us posted if you can reproduce as I'm not entirely sure 
where we could start looking at without reproduction steps tbh.

> Crash during CREATE KEYSPACE immediately after drop
> ---
>
> Key: CASSANDRA-11492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11492
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Seth Rosenblum
>
> In a test environment, I run in immediate sequence:
> {code:sql}
> DROP KEYSPACE IF EXISTS "keyspacename";
> CREATE KEYSPACE "keyspacename"
> WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 
> 1};
> {code}
> And I end up with a {{java.lang.AssertionError: null}} error triggered during 
> the CREATE statement.
> https://gist.github.com/sethrosenblum/d6b450b4455eeb28550f4038a0928bcf





[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250217#comment-15250217
 ] 

Sylvain Lebresne commented on CASSANDRA-11521:
--

bq. the protocol is very wasteful for the cases where you stream all the data

While I agree that it's probably time to think about optimizing this further, I 
don't think it's specific to streaming so I'm in favor of just optimizing the 
format itself in general, and I've created CASSANDRA-11622 for that. I 
acknowledge that there may be some possible optimizations that would only 
provide gains when you're guaranteed to send large amount of data, but 
optimizing the format in general feels like a better first step in any case 
since it's more generally useful.

bq. there is so much more we can do, in general, to make streaming faster, if 
we go for something purpose-built instead

Making something purpose-built almost always allows for more optimization. But 
it also means more complexity, a completely new mechanism for driver authors 
and more code to maintain in general. I'm also not entirely convinced there is 
_that_ much it would allow over the "hint" idea (of course, how you value 
trade-offs between performance versus complexity is always somewhat 
subjective). In particular, I want to note that the "hint" would clearly mean 
that you intend to read it all and so we can still do a bunch of optimizations 
on that assumption. Like having those queries not pollute our future user-space 
page cache, and maybe have the server start serializing at least one page in 
advance optimistically.

I also want to note that reusing the paging mechanism gives us fail-over for 
pretty much free (as in, almost no additional work from drivers) which is nice. 
And adding cancellation (which I agree would be nice) is also pretty simple.

Anyway, all this to say that I feel this "hint" idea would give a lot of the 
benefits for a lot less complexity (especially factoring the work required for 
all drivers). So while I'm curious to see some of the numbers Stefania is still 
working on, I (for what it's worth) really like the idea of starting with that 
simple idea and then focusing on other (non strictly protocol related) ideas 
like CASSANDRA-11622 and CASSANDRA-11520. And only then re-evaluate if more 
complexity is justified/desirable.
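The "hint" approach keeps the existing page-by-page protocol but tells the server the client intends to read everything, so the server can prepare pages ahead of time. A toy model of the driver-side loop (the `read_ahead` flag and `execute_page` callable are hypothetical, not the native protocol's actual API) also shows why fail-over is nearly free, since each request carries the paging state and can be retried elsewhere:

```python
def fetch_all(execute_page, statement, page_size=5000):
    """Drive ordinary paging with a (hypothetical) read_ahead hint.
    Each request carries the paging state, so any page can be retried
    against another coordinator without restarting the whole read."""
    paging_state = None
    while True:
        rows, paging_state = execute_page(
            statement, page_size, paging_state, read_ahead=True)
        for row in rows:
            yield row
        if paging_state is None:
            break

def fake_execute_page(statement, page_size, paging_state, read_ahead):
    """Stand-in server: 12 rows served in page_size chunks."""
    data = list(range(12))
    start = paging_state or 0
    end = min(start + page_size, len(data))
    next_state = end if end < len(data) else None
    return data[start:end], next_state

rows = list(fetch_all(fake_execute_page, "SELECT ...", page_size=5))
```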

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Updated] (CASSANDRA-11622) Optimize native protocol result set serialization format

2016-04-20 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11622:

Labels: client-impacting protocolv5  (was: protocolv5)

> Optimize native protocol result set serialization format
> 
>
> Key: CASSANDRA-11622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11622
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>  Labels: client-impacting, protocolv5
>
> The format in which result sets are serialized in the native protocol has the 
> advantage of being very simple (which was, initially, a feature), but it 
> isn't very optimal. It's probably now time to think about optimizing it 
> further.
> At the very least, there are two simple optimizations we can do:
> # we can avoid the repetition of partition key columns (as well as duplicate 
> clustering column value when we have more than one clustering column, though 
> we'd have to recompute this at the CQL level since we don't have this 
> "optimization" internally (yet at least), contrarily to the partition key 
> case).
> # we can optimize the serialization of values of fixed-width types (like we now 
> do internally) by skipping the current 4 byte length. We could also maybe use 
> vints for the remaining case where there is a length.
> But of course, it's worth considering other potential optimization while 
> we're at it.





[jira] [Created] (CASSANDRA-11622) Optimize native protocol result set serialization format

2016-04-20 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-11622:


 Summary: Optimize native protocol result set serialization format
 Key: CASSANDRA-11622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11622
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne


The format in which result sets are serialized in the native protocol has the 
advantage of being very simple (which was, initially, a feature), but it isn't 
very optimal. It's probably now time to think about optimizing it further.

At the very least, there are two simple optimizations we can do:
# we can avoid the repetition of partition key columns (as well as duplicate 
clustering column value when we have more than one clustering column, though 
we'd have to recompute this at the CQL level since we don't have this 
"optimization" internally (yet at least), contrarily to the partition key case).
# we can optimize the serialization of values of fixed-width types (like we now 
do internally) by skipping the current 4 byte length. We could also maybe use 
vints for the remaining case where there is a length.

But of course, it's worth considering other potential optimization while we're 
at it.
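The savings from optimization #2 are easy to quantify with a toy encoder. The sketch below compares the current format (a 4-byte length before every value) with one that drops the length for fixed-width types and uses a varint elsewhere; the varint here is plain unsigned LEB128, not Cassandra's actual vint encoding:

```python
import struct

def encode_current(values):
    """Current result-set format: every value is [int32 length][bytes]."""
    out = bytearray()
    for v in values:
        out += struct.pack(">i", len(v)) + v
    return bytes(out)

def encode_vint(n):
    """Toy unsigned LEB128 varint (Cassandra's real vint differs)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_optimized(values, fixed_width=None):
    """Skip the length entirely for fixed-width types; vint otherwise."""
    out = bytearray()
    for v in values:
        if fixed_width is None:
            out += encode_vint(len(v))
        else:
            assert len(v) == fixed_width  # no length bytes on the wire
        out += v
    return bytes(out)

# A column of 1000 bigints: 12 bytes per value today, 8 bytes optimized.
bigints = [struct.pack(">q", n) for n in range(1000)]
current = encode_current(bigints)
optimized = encode_optimized(bigints, fixed_width=8)
```

For this column the length prefixes alone are a third of the encoded size, which is the kind of overhead the ticket is after.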







[jira] [Commented] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-20 Thread Andrew Jefferson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250132#comment-15250132
 ] 

Andrew Jefferson commented on CASSANDRA-11621:
--

I am sure that it is incomplete. But unfortunately the output to the log seems 
to truncate the stack trace before it gets to anything helpful. I just get a 
*lot* of lines of 

at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
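Those repeated `Iterators$5.hasNext` frames are the signature of many lazily-chained iterator wrappers: each `hasNext()` delegates to the iterator beneath it, so the call depth grows with the number of wrapped columns. The same shape is easy to reproduce in Python with nested generators (a sketch of the failure mode only, not Cassandra's code path):

```python
def reproduce(depth):
    """Wrap a one-element iterator `depth` times; pulling the first
    element then resumes every wrapper frame in a single call chain,
    so deep chains exhaust the call stack."""
    it = iter([42])
    for _ in range(depth):
        it = (x for x in it)   # each wrapper delegates to the inner one
    return next(it)

print(reproduce(50))        # shallow chains are fine
try:
    reproduce(100_000)      # deep chains blow the call stack
except RecursionError:
    print("RecursionError, analogous to the StackOverflowError above")
```

The usual fix is to flatten the wrappers iteratively (or concatenate once) so the delegation depth stays constant regardless of column count.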

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
>
> I am using CQL to insert into a table that has ~4000 columns
> TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at 

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-20 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250127#comment-15250127
 ] 

Jim Witschey commented on CASSANDRA-8844:
-

bq. Where are we with that Jim Witschey?

Currently auditing the commitlog dtests and extending them to work with CDC 
keyspaces.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> processing
> -- Be able to continuously "tail" the most recent logfile and get 
> low-latency(ms?) access to the data as it is written.
> h2. Alternate approach
> In order to make consuming a change log easy and efficient to do with low 
> latency, the following could supplement the approach outlined above
> 
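The "Format and Consumption" rules above (process logfiles in predictable name order, never delete on the server side, checkpoint progress as a file artifact in the CDC directory) can be sketched as a minimal consumer loop. Everything here is hypothetical (file naming, checkpoint format), a model of the proposal rather than any shipped interface:

```python
import os

def consume_cdc(cdc_dir, handle_record, checkpoint_name="consumer.checkpoint"):
    """Process CDC logfiles in sorted name order, resuming after the last
    checkpointed file. The checkpoint is a file artifact left in the CDC
    directory, as the description suggests; cleanup of consumed logfiles
    is the daemon's responsibility (Cassandra never deletes them)."""
    ckpt_path = os.path.join(cdc_dir, checkpoint_name)
    last_done = ""
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            last_done = f.read().strip()
    logs = sorted(n for n in os.listdir(cdc_dir)
                  if n.endswith(".log") and n > last_done)
    for name in logs:
        with open(os.path.join(cdc_dir, name)) as f:
            for line in f:
                handle_record(line.rstrip("\n"))
        # Checkpoint only after the whole file is processed, which gives
        # the deliver-at-least-once semantics the goals call for.
        with open(ckpt_path, "w") as f:
            f.write(name)
    return logs
```

After a crash, re-running replays at most the one file that was in flight, matching the at-least-once delivery goal; exactly-once would need record-level checkpoints or idempotent consumers.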

[jira] [Commented] (CASSANDRA-11492) Crash during CREATE KEYSPACE immediately after drop

2016-04-20 Thread Seth Rosenblum (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250121#comment-15250121
 ] 

Seth Rosenblum commented on CASSANDRA-11492:


This is actually on a single-node cluster, so I don't think it's a race 
condition.  This was being executed with the python datastax driver (I think it 
happened on both version 2.1.4 and 3.1.1, we were in the process of upgrading) 
but we were seeing this error in Cassandra's logs.  I'll keep an eye out for it 
happening again.

> Crash during CREATE KEYSPACE immediately after drop
> ---
>
> Key: CASSANDRA-11492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11492
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Seth Rosenblum
>
> In a test environment, I run in immediate sequence:
> {code:sql}
> DROP KEYSPACE IF EXISTS "keyspacename";
> CREATE KEYSPACE "keyspacename"
> WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 
> 1};
> {code}
> And I end up with a {{java.lang.AssertionError: null}} error triggered during 
> the CREATE statement.
> https://gist.github.com/sethrosenblum/d6b450b4455eeb28550f4038a0928bcf





[jira] [Commented] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250114#comment-15250114
 ] 

Sylvain Lebresne commented on CASSANDRA-11609:
--

It sounds like we're losing the frozenness information during the upgrade 
of the schema somehow.
Slightly confused by the "Can be repro'd with ccm" though: are you able to 
reproduce with the step you provide here or not?

> cassandra won't start with schema complaint that does not appear to be valid
> 
>
> Key: CASSANDRA-11609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>
> This was found in the upgrades user_types_test.
> Can be repro'd with ccm.
> Create a 1 node ccm cluster on 2.2.x
> Create this schema:
> {noformat}
> create keyspace test2 with replication = {'class':'SimpleStrategy', 
> 'replication_factor':1};
> use test2;
> CREATE TYPE address (
>  street text,
>  city text,
>  zip_code int,
>  phones set<text>
>  );
> CREATE TYPE fullname (
>  firstname text,
>  lastname text
>  );
> CREATE TABLE users (
>  id uuid PRIMARY KEY,
>  name frozen<fullname>,
>  addresses map<text, frozen<address>>
>  );
> {noformat}
> Upgrade the single node to trunk, attempt to start the node up. Start will 
> fail with this exception:
> {noformat}
> ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
> encountered during startup
> org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
> not allowed inside collections: map<text, address>
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
> [main/:na]
> {noformat}





[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-20 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11621:
-
Assignee: Alex Petrov

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>Assignee: Alex Petrov
>
> I am using CQL to insert into a table that has ~4000 columns
> TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]





[jira] [Commented] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250103#comment-15250103
 ] 

Sylvain Lebresne commented on CASSANDRA-11621:
--

Are you sure that this stack trace is complete? Namely, there should be some 
remaining lines at the end, and those would be the most useful ones, as they 
would tell us where the problem originates in Cassandra. Currently, every line 
we have is from Guava, which is not too useful. Anyway, we can try to reproduce 
it, as it probably reproduces easily with a high number of columns, but getting 
the rest of the stack (assuming you do have more) would be appreciated.
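The repeated {{Iterators$5.hasNext}} frames in the trace hint at the mechanism: each time one iterator is wrapped around another, every {{hasNext()}} call has to pass through one more stack frame, so call depth grows linearly with the number of wraps. Below is an illustrative sketch in plain Java (not Cassandra or Guava source; the class and method names are invented for the demo) of how deep enough nesting overflows the stack:

```java
import java.util.Collections;
import java.util.Iterator;

// Illustrative sketch only: each pass-through wrapper adds one stack frame to
// every hasNext() call, mirroring the repeated Iterators$5.hasNext frames in
// the reported trace.
public class NestedIteratorDemo {

    // Trivial delegating wrapper, analogous to repeatedly chaining iterators.
    static <T> Iterator<T> wrap(final Iterator<T> inner) {
        return new Iterator<T>() {
            @Override public boolean hasNext() { return inner.hasNext(); } // +1 frame per wrap
            @Override public T next() { return inner.next(); }
        };
    }

    // Returns true if calling hasNext() through 'wraps' nested wrappers
    // overflows the thread stack.
    static boolean overflows(int wraps) {
        Iterator<Integer> it = Collections.singletonList(1).iterator();
        for (int i = 0; i < wraps; i++) {
            it = wrap(it); // call depth grows linearly with the number of wraps
        }
        try {
            it.hasNext();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("10 wraps overflow: " + overflows(10));
        System.out.println("100000 wraps overflow: " + overflows(100_000));
    }
}
```

With a default thread stack, a handful of wraps is harmless while tens of thousands of nested {{hasNext()}} calls reliably overflow; whether ~4000 columns is enough in practice would depend on how deep the stack already is when the iteration starts.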

> Stack Overflow inserting value with many columns
> 
>
> Key: CASSANDRA-11621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11621
> Project: Cassandra
>  Issue Type: Bug
> Environment: CQL 3
> C* 2.2.5
>Reporter: Andrew Jefferson
>
> I am using CQL to insert into a table that has ~4000 columns
> TABLE_DEFINITION = "
>   id uuid,
>   "dimension_n" for n in _.range(N_DIMENSIONS)
>   ETAG timeuuid,
>   PRIMARY KEY (id)
> "
> I am using the node.js library from Datastax to execute CQL. This creates a 
> prepared statement and then uses it to perform an insert. It works fine on C* 
> 2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.
> I know enough Java to think that recursing an iterator is bad form and should 
> be easy to fix.
> ERROR 14:59:01 Unexpected exception during request; channel = [id: 
> 0xaac42a5d, /10.0.7.182:58736 => /10.0.0.87:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
> ~[guava-16.0.jar:na]
>   at 

[jira] [Commented] (CASSANDRA-11304) Stack overflow when querying 2ndary index

2016-04-20 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250086#comment-15250086
 ] 

Sam Tunnicliffe commented on CASSANDRA-11304:
-

[~gpengtao] opened CASSANDRA-11616 for this regression, thanks.

> Stack overflow when querying 2ndary index
> -
>
> Key: CASSANDRA-11304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11304
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
> Environment: 3 Node cluster / Ubuntu 14.04 / Cassandra 3.0.3
>Reporter: Job Tiel Groenestege
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.5
>
> Attachments: 11304-3.0.txt
>
>
> When reading data through a secondary index _select * from tableName where 
> secIndexField = 'foo'_ (from a Java application) I get the following 
> stacktrace on all nodes, after which the query read fails. It happens 
> repeatably when I rerun the same query:
> {quote}
> WARN  [SharedPool-Worker-8] 2016-03-04 13:26:28,041 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-8,5,main]: {}
> java.lang.StackOverflowError: null
> at 
> org.apache.cassandra.db.rows.BTreeRow$Builder.build(BTreeRow.java:653) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:436)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.readNext(UnfilteredDeserializer.java:211)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:266)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:153)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:340)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:128)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:133)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> {quote}





[jira] [Commented] (CASSANDRA-11606) Upgrade from 2.1.9 to 3.0.5 Fails with AssertionError

2016-04-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250084#comment-15250084
 ] 

Sylvain Lebresne commented on CASSANDRA-11606:
--

I'm afraid the stack trace is a bit low on info to understand why this happens 
here. I don't imagine you have an easy way to reproduce? If not, what could 
help narrow it down is knowing which table this happens on and what the schema 
of that table is.

> Upgrade from 2.1.9 to 3.0.5 Fails with AssertionError
> -
>
> Key: CASSANDRA-11606
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11606
> Project: Cassandra
>  Issue Type: Bug
> Environment: Fedora 20, Oracle Java 8, Apache Cassandra 2.1.9 -> 3.0.5
>Reporter: Anthony Verslues
> Fix For: 3.0.x
>
>
> I get this error while upgrading sstables. I got the same error when 
> upgrading to 3.0.2 and 3.0.4.
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addCell(LegacyLayout.java:1167)
> at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addAtom(LegacyLayout.java:1142)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.readRow(UnfilteredDeserializer.java:444)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:289)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:65)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:442)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:416)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:313)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)





[jira] [Assigned] (CASSANDRA-11616) cassandra very high cpu rate

2016-04-20 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-11616:
---

Assignee: Sam Tunnicliffe

> cassandra very high cpu rate
> 
>
> Key: CASSANDRA-11616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS release 6.4
> 4 nodes cluster
> cassandra 3.0.5
> nodetool cfstats mykeyspace shows the table data volume: Number of keys 
> (estimate): 77570676
>Reporter: PengtaoGeng
>Assignee: Sam Tunnicliffe
> Attachments: Image.png
>
>
> Even at a very low query rate, CPU utilization is at 100%.
> Query CQL is only by partition key or by secondary index.
> Below is the desc table info:
> CREATE TABLE mykeyspace.userlabel (
> id text PRIMARY KEY,
> cardno text,
> phone text,
> ccount text,
> username text
> ) ;
> CREATE INDEX userlabel_phone ON mykeyspace.userlabel (phone)
> top -H and jstack show that the high-CPU threads all come from 
> "SharedPool-Worker".
> Here is one thread's jstack info:
> {quote}
> "SharedPool-Worker-28" #205 daemon prio=5 os_prio=0 tid=0x7f1610cc8780 
> nid=0xe7c0 runnable [0x7f0ed566f000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.cassandra.utils.MurmurHash.hash3_x64_128(MurmurHash.java:191)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.getHash(Murmur3Partitioner.java:181)
> at 
> org.apache.cassandra.dht.Murmur3Partitioner.decorateKey(Murmur3Partitioner.java:53)
> at 
> org.apache.cassandra.db.PartitionPosition$ForKey.get(PartitionPosition.java:49)
> at 
> org.apache.cassandra.db.marshal.PartitionerDefinedOrder.compareCustom(PartitionerDefinedOrder.java:93)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158)
> at 
> org.apache.cassandra.db.ClusteringComparator.compareComponent(ClusteringComparator.java:166)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:137)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:126)
> at 
> org.apache.cassandra.db.ClusteringComparator.compare(ClusteringComparator.java:44)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:378)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.replaceAndSink(MergeIterator.java:266)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:428)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:288)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.prepareNext(CompositesSearcher.java:130)
> at 
> org.apache.cassandra.index.internal.composites.CompositesSearcher$1.hasNext(CompositesSearcher.java:83)
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:127)
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65)
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289)
> at 
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:47)
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> I'm using Cassandra 

[jira] [Updated] (CASSANDRA-11510) Clustering key and secondary index

2016-04-20 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11510:
---
Reproduced In: 3.4, 3.3  (was: 3.3, 3.4)
 Reviewer: Benjamin Lerer

> Clustering key and secondary index
> --
>
> Key: CASSANDRA-11510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11510
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: ubuntu 14.04 LTS
>Reporter: Julien Muller
>Assignee: Sam Tunnicliffe
> Fix For: 3.3, 3.4
>
>
> I noticed the following change in behavior while migrating from 2.0.11: 
> elements of the clustering key no longer seem to be secondary-indexable.
> Using this table:
> {code:sql}
> CREATE TABLE table1 (
> name text,
> class int,
> inter text,
> power int,
> PRIMARY KEY (name, class, inter)
> ) WITH CLUSTERING ORDER BY (class DESC, inter ASC);
> INSERT INTO table1 (name, class, inter, power) VALUES ('R1',1, 'int1',13);
> INSERT INTO table1 (name, class, inter, power) VALUES ('R1',2, 'int1',18);
> INSERT INTO table1 (name, class, inter, power) VALUES ('R1',3, 'int1',37);
> INSERT INTO table1 (name, class, inter, power) VALUES ('R1',4, 'int1',49);
> {code}
> In version 2.0.11, I used to have a secondary index on inter, that allowed me 
> to make fast queries on the table:
> {code:sql}
> CREATE INDEX table1_inter ON table1 (inter);
> SELECT * FROM table1 where name='R1' AND class>0 AND class<4 AND inter='int1' 
> ALLOW FILTERING;
> {code}
> While testing on 3.3.0, I get the following message:
> Clustering column "inter" cannot be restricted (preceding column "class" is 
> restricted by a non-EQ relation)
> It seems to only be considered as a key, and the index and ALLOW FILTERING are 
> no longer taken into account (as they were in 2.0.11).
> I found the following workaround: 
> Duplicate the column inter as a regular column, and simply query it with the 
> secondary index and no ALLOW FILTERING. This gives the behavior I would 
> anticipate, and I do not understand why it does not work on inter only because 
> it is a clustering key. The only answer on the mailing list suggests a bug.





[jira] [Commented] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster

2016-04-20 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250080#comment-15250080
 ] 

T Jake Luciani commented on CASSANDRA-11615:


Was it deadlocked? Or just slow?

Regarding your fix, from the driver javadocs:
{quote}
On the other hand, if that assumption turns out to be wrong and one host hasn't 
prepared a given statement, it needs to be re-prepared on the fly the first 
time it gets executed; this causes a performance penalty (one extra roundtrip 
to resend the query to prepare, and another to retry the execution).
{quote}

Since this is a benchmark tool I'd prefer to not make this the default. Can you 
make it an option?
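Independent of the default-vs-option question, the contention itself comes from all stress threads serializing on one monitor while preparing. A hypothetical sketch of a lock-free-on-the-hot-path cache follows; this is not the actual {{JavaDriverClient}} code, the {{PreparedStatement}} stub and {{prepareOnCluster}} delay stand in for real driver calls, and it assumes Java 8's {{ConcurrentHashMap.computeIfAbsent}}:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the actual JavaDriverClient code: replacing a
// synchronized check-then-put with ConcurrentHashMap.computeIfAbsent means
// threads preparing *different* queries no longer contend on a single monitor.
public class PreparedStatementCache {

    // Stand-in for the driver's PreparedStatement type.
    static final class PreparedStatement {
        final String query;
        PreparedStatement(String query) { this.query = query; }
    }

    private final Map<String, PreparedStatement> cache = new ConcurrentHashMap<>();

    // Simulates the driver's slow, network-bound prepare round trip.
    private PreparedStatement prepareOnCluster(String query) {
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        return new PreparedStatement(query);
    }

    // Only the first caller per distinct query pays the prepare cost;
    // later callers get the cached instance without holding a global lock.
    public PreparedStatement prepare(String query) {
        return cache.computeIfAbsent(query, this::prepareOnCluster);
    }

    public static void main(String[] args) {
        PreparedStatementCache cache = new PreparedStatementCache();
        PreparedStatement first = cache.prepare("SELECT * FROM ks.t WHERE id = ?");
        PreparedStatement second = cache.prepare("SELECT * FROM ks.t WHERE id = ?");
        System.out.println("same instance reused: " + (first == second)); // prints: same instance reused: true
    }
}
```

Threads preparing the same query still wait for that one entry to be computed, which is the desired behavior; only unrelated queries stop blocking each other.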

> cassandra-stress blocks when connecting to a big cluster
> 
>
> Key: CASSANDRA-11615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11615
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x
>
> Attachments: 11615-3.0.patch
>
>
> I had a *100* node cluster and was running 
> {code}
> cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' 
> 'limit=1000/s'
> {code}
> Based on the thread dump it looks like it's been blocked at 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96
> {code}
> "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting 
> for monitor entry [0x7f36cc788000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting 
> for monitor entry [0x7f36cc889000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> {code}
> I was trying the same with a smaller cluster (50 nodes) and it was 
> working fine.





[jira] [Updated] (CASSANDRA-11499) dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup

2016-04-20 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-11499:
-
Assignee: DS Test Eng  (was: Jim Witschey)

> dtest failure in commitlog_test.TestCommitLog.test_commitlog_replay_on_startup
> --
>
> Key: CASSANDRA-11499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11499
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/341/testReport/commitlog_test/TestCommitLog/test_commitlog_replay_on_startup
> Failed on CassCI build trunk_novnode_dtest #341
> {noformat}
> Error Message
> 03 Apr 2016 16:32:49 [node1] Missing: ['Log replay complete']:
> INFO  [main] 2016-04-03 16:22:39,826 YamlConfigura.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1UTelU
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'}
> dtest: DEBUG: Insert data
> dtest: DEBUG: Verify data is present
> dtest: DEBUG: Stop node abruptly
> dtest: DEBUG: Verify commitlog was written before abrupt stop
> dtest: DEBUG: Verify no SSTables were flushed before abrupt stop
> dtest: DEBUG: Verify commit log was replayed on startup
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/commitlog_test.py", line 193, in 
> test_commitlog_replay_on_startup
> node1.watch_log_for("Log replay complete")
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "03 Apr 2016 16:32:49 [node1] Missing: ['Log replay complete']:\nINFO  [main] 
> 2016-04-03 16:22:39,826 YamlConfigura.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1UTelU\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'}\ndtest: 
> DEBUG: Insert data\ndtest: DEBUG: Verify data is present\ndtest: DEBUG: Stop 
> node abruptly\ndtest: DEBUG: Verify commitlog was written before abrupt 
> stop\ndtest: DEBUG: Verify no SSTables were flushed before abrupt 
> stop\ndtest: DEBUG: Verify commit log was replayed on 
> startup\n- >> end captured logging << 
> -"
> {noformat}





[jira] [Updated] (CASSANDRA-11523) server side exception on secondary index query through thrift

2016-04-20 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11523:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.0.6
   3.6
   Status: Resolved  (was: Patch Available)

Thanks, committed to 3.0 in {{14f08e6f66ef96614fccd12d1eac482c00ee7dc5}} & 
merged to trunk

> server side exception on secondary index query through thrift
> -
>
> Key: CASSANDRA-11523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11523
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: linux opensuse 13.2, jdk8
>Reporter: Ivan Georgiev
>Assignee: Sam Tunnicliffe
> Fix For: 3.6, 3.0.6
>
>
> Trying to upgrade from 2.x to 3.x, using 3.0.4 for the purpose. We are using 
> thrift interface for the time being. Everything works fine except for 
> secondary index queries. 
> When doing a get_range_slices call with row_filter set in the KeyRange we get 
> a server side exception. Here is a trace of the exception:
> INFO   | jvm 1| 2016/04/07 14:56:35 | 14:56:35.401 [Thrift:12] DEBUG 
> o.a.cassandra.service.ReadCallback - Failed; received 0 of 1 responses
> INFO   | jvm 1| 2016/04/07 14:56:35 | 14:56:35.401 [SharedPool-Worker-1] 
> WARN  o.a.c.c.AbstractLocalAwareExecutorService - Uncaught exception on 
> thread Thread[SharedPool-Worker-1,5,main]: {}
> INFO   | jvm 1| 2016/04/07 14:56:35 | java.lang.RuntimeException: 
> java.lang.NullPointerException
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2450)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> INFO   | jvm 1| 2016/04/07 14:56:35 | Caused by: 
> java.lang.NullPointerException: null
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.index.internal.keys.KeysSearcher.filterIfStale(KeysSearcher.java:155)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.index.internal.keys.KeysSearcher.access$300(KeysSearcher.java:36)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.index.internal.keys.KeysSearcher$1.prepareNext(KeysSearcher.java:104)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.index.internal.keys.KeysSearcher$1.hasNext(KeysSearcher.java:70)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:127)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1792)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> INFO   | jvm 1| 2016/04/07 14:56:35 | at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2446)
>  
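
The NullPointerException above comes from the index searcher re-reading the base row through the query's column filter to decide whether the index entry is stale. The following is a toy model, not Cassandra's actual classes (the names `fetchRow`/`isStale` are invented for illustration), sketching why a Thrift 2i query whose column filter omits the indexed column NPEs in the staleness check, and why widening the filter fixes it:

```java
import java.util.*;

// Toy model (hypothetical names, not Cassandra code) of the CASSANDRA-11523
// failure mode: the staleness check dereferences the indexed cell, but the
// query's column filter may not have fetched it.
public class StaleCheckDemo {
    // Simulates reading a row through a column filter: only requested columns survive.
    static Map<String, String> fetchRow(Map<String, String> row, Set<String> columnFilter) {
        Map<String, String> result = new HashMap<>();
        for (String c : columnFilter)
            if (row.containsKey(c)) result.put(c, row.get(c));
        return result;
    }

    // Mirrors the shape of filterIfStale: no null check on the indexed cell.
    static boolean isStale(Map<String, String> fetched, String indexedColumn, String indexedValue) {
        return !fetched.get(indexedColumn).equals(indexedValue);
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of("k", "1", "indexed", "x", "other", "y");

        // The Thrift query asked only for "other": the indexed column is
        // missing from the fetched data and the staleness check NPEs.
        try {
            isStale(fetchRow(row, Set.of("other")), "indexed", "x");
        } catch (NullPointerException e) {
            System.out.println("NPE: filter missed indexed column");
        }

        // The committed fix's idea: ensure the column filter always covers
        // the indexed column, so the check sees a real cell.
        Set<String> widened = new HashSet<>(Set.of("other", "indexed"));
        System.out.println("stale=" + isStale(fetchRow(row, widened), "indexed", "x"));
    }
}
```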

[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-20 Thread Andrew Jefferson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jefferson updated CASSANDRA-11621:
-
Description: 
I am using CQL to insert into a table that has ~4000 columns

TABLE_DEFINITION = "
  id uuid,
  "dimension_n" for n in _.range(N_DIMENSIONS)
  ETAG timeuuid,
  PRIMARY KEY (id)
"

I am using the node.js library from Datastax to execute CQL. This creates a 
prepared statement and then uses it to perform an insert. It works fine on C* 
2.1 but after upgrading to C* 2.2.5 I get the stack overflow below.

I know enough Java to think that recursing an iterator is bad form and should 
be easy to fix.

ERROR 14:59:01 Unexpected exception during request; channel = [id: 0xaac42a5d, 
/10.0.7.182:58736 => /10.0.0.87:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.&lt;init&gt;(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.&lt;init&gt;(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]

  was:
I am using CQL to insert into a table that has ~4000 columns

TABLE_DEFINITION = "
  id uuid,
  "dimension_n" for n in _.range(N_DIMENSIONS)
  ETAG timeuuid,
  PRIMARY KEY (id)
"

I am using the node.js library from Datastax to execute CQL. This creates a 
prepared statement and then uses it to perform an insert. It works fine on C* 
2.1 but after upgrading to 2.2.5 I get the stack overflow below.

I know enough Java to think that recursing an iterator is bad form and should 
be easy to fix.

ERROR 14:59:01 Unexpected exception during request; channel = [id: 0xaac42a5d, 

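The repeated Iterators$5.hasNext frames above show each layer of a wrapped iterator consuming one stack frame, so enough layers (one per concatenated column, here ~4000) overflow the stack. A minimal standalone sketch (plain Java, not Guava or Cassandra code) of that shape:

```java
import java.util.Arrays;
import java.util.Iterator;

// Standalone sketch: wrapping an iterator in many delegating layers makes
// hasNext() recurse once per layer, reproducing the StackOverflowError shape
// seen in the trace (each wrapper == one Iterators$5.hasNext frame).
public class DeepIteratorDemo {
    static <T> Iterator<T> wrap(final Iterator<T> inner) {
        return new Iterator<T>() {
            public boolean hasNext() { return inner.hasNext(); } // one frame per layer
            public T next() { return inner.next(); }
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> it = Arrays.asList(1, 2, 3).iterator();
        // Simulate concatenating one element at a time, many thousands of times.
        for (int i = 0; i < 1_000_000; i++)
            it = wrap(it);
        try {
            it.hasNext(); // recurses through every wrapper
            System.out.println("no overflow");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError, as in the report");
        }
    }
}
```

An iterative fix flattens the wrappers (e.g. building one iterator over a list of sources) so hasNext() costs O(1) stack regardless of column count.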
[jira] [Updated] (CASSANDRA-11621) Stack Overflow inserting value with many columns

2016-04-20 Thread Andrew Jefferson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Jefferson updated CASSANDRA-11621:
-
Description: 
I am using CQL to insert into a table that has ~4000 columns

TABLE_DEFINITION = "
  image_id uuid,
  "dimension_n" for n in _.range(N_DIMENSIONS)
  ETAG timeuuid,
  PRIMARY KEY (image_id)
"

I am using the node.js library from Datastax to execute CQL. This creates a 
prepared statement and then uses it to perform an insert. It works fine on C* 
2.1 but after upgrading to 2.2.5 I get the stack overflow below.

I know enough Java to think that recursing an iterator is bad form and should 
be easy to fix.

ERROR 14:59:01 Unexpected exception during request; channel = [id: 0xaac42a5d, 
/10.0.7.182:58736 => /10.0.0.87:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.&lt;init&gt;(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.&lt;init&gt;(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]

  was:
I am using CQL to insert into a table that has ~4000 columns

TABLE_DEFINITION = "
  image_id uuid,
  "dimension_#{n}" for n in _.range(N_DIMENSIONS)
  ETAG timeuuid,
  PRIMARY KEY (image_id)
"

I am using the node.js library from Datastax to execute CQL. This creates a 
prepared statement and then uses it to perform an insert. It works fine on C* 
2.1 but after upgrading to 2.2.5 I get the stack overflow below.

I know enough Java to think that recursing an iterator is bad form and should 
be easy to fix.

ERROR 14:59:01 Unexpected exception during request; 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-20 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5e501f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5e501f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5e501f0

Branch: refs/heads/trunk
Commit: a5e501f094e882e1531abf9bf205e8a59730ed70
Parents: 7cd14d0 14f08e6
Author: Sam Tunnicliffe 
Authored: Wed Apr 20 16:04:18 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Wed Apr 20 16:06:51 2016 +0100

--
 CHANGES.txt |  1 +
 .../index/internal/keys/KeysSearcher.java   | 33 ++--
 2 files changed, 31 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5e501f0/CHANGES.txt
--
diff --cc CHANGES.txt
index 08efbfb,ae73437..ca679b2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,5 +1,59 @@@
 -3.0.6
 +3.6
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in nodetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   corruption (CASSANDRA-10112)
 + * Resolve some eclipse-warnings (CASSANDRA-11086)
 + * (cqlsh) Show static columns in a different color (CASSANDRA-11059)
 + * Allow to remove TTLs on table with default_time_to_live (CASSANDRA-11207)
 +Merged from 3.0:
+  * Ensure columnfilter covers indexed columns for thrift 2i queries 
(CASSANDRA-11523)
   * Only open one sstable scanner per 
