[jira] [Updated] (CASSANDRA-10398) Allow dropping COMPACT STORAGE flag

2016-02-01 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10398:
--
Fix Version/s: (was: 3.x)

> Allow dropping COMPACT STORAGE flag
> ---
>
> Key: CASSANDRA-10398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>
> To provide a migration path from Thrift to CQL for mixed static/dynamic 
> column families, we need to be able to switch off the {{COMPACT STORAGE}} 
> flag. Otherwise CQL would only recognize the static columns.
> This should be relatively easy after CASSANDRA-8099, but needs extensive 
> testing first.





[jira] [Updated] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-01 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10715:

Description: 
This is an issue I first noticed through the C# driver, but I was able to 
repro on cqlsh, leading me to believe this is a Cassandra bug.

Given the following schema:
{noformat}
CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
unique_movie_title text,
movie_maker text,
director text,
list list,
"mainGuy" text,
"yearMade" int,
PRIMARY KEY ((unique_movie_title, movie_maker), director)
) WITH CLUSTERING ORDER BY (director ASC)
{noformat}

Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
argument:
{noformat}
SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
"yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
{noformat}

returns a ReadFailure exception:
{noformat}
cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
"unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
"mainGuy" = null ALLOW FILTERING;
←[0;1;31mTraceback (most recent call last):
  File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
perform_simple_statement
result = future.result()
  File 
"C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
 line 3118, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'cons
istency': 'ONE'}
←[0m
{noformat}

Cassandra log shows:
{noformat}
WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,10,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:288) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1692)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2346)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.0.jar:3.0.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
{noformat}
In C* < 3.0.0 (such as 2.2.3), this same 

[jira] [Commented] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126804#comment-15126804
 ] 

Marcus Eriksson commented on CASSANDRA-11102:
-

The problem is that, since CASSANDRA-8099, we collect the wrong maxLocalDeletionTime 
if there are no regular columns (i.e. columns that are not part of the primary key). 
(https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/rows/Rows.java#L77)
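
A minimal, self-contained illustration of that failure mode (the names below are 
invented; this is not the actual Rows.java / stats-collector code): if the max local 
deletion time is folded only over the cells of regular columns, a row that consists of 
nothing but primary key components contributes nothing, the collected max stays at its 
initial value, and on compaction the sstable can look "fully expired" against gcBefore 
and be dropped together with live data.

{code}
// Hypothetical sketch; method and constant names are invented for illustration.
import java.util.Collections;
import java.util.List;

public class MaxDeletionTimeSketch
{
    static final int NO_DELETION_TIME = Integer.MAX_VALUE; // convention for "this data never expires"

    /** Buggy shape: only cells of regular columns feed the max. */
    static int maxFromCellsOnly(List<Integer> cellLocalDeletionTimes)
    {
        int max = Integer.MIN_VALUE;
        for (int ldt : cellLocalDeletionTimes)
            max = Math.max(max, ldt);
        return max; // stays at Integer.MIN_VALUE for rows without regular columns
    }

    /** Fixed shape: a live row with no cells still counts as "never deleted". */
    static int maxIncludingRowLiveness(List<Integer> cellLocalDeletionTimes, boolean rowIsLive)
    {
        int max = rowIsLive ? NO_DELETION_TIME : Integer.MIN_VALUE;
        for (int ldt : cellLocalDeletionTimes)
            max = Math.max(max, ldt);
        return max;
    }

    public static void main(String[] args)
    {
        int gcBefore = 1454334000; // roughly "now - gc_grace_seconds" at the time of the 11102 repro

        // Rows from the repro table: PRIMARY KEY (r, c1, c2) and no regular columns -> no cells.
        List<Integer> noCells = Collections.emptyList();

        boolean buggyLooksExpired = maxFromCellsOnly(noCells) < gcBefore;              // true  -> sstable dropped
        boolean fixedLooksExpired = maxIncludingRowLiveness(noCells, true) < gcBefore; // false -> data kept

        System.out.println("buggy: sstable looks fully expired = " + buggyLooksExpired);
        System.out.println("fixed: sstable looks fully expired = " + fixedLooksExpired);
    }
}
{code}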

pushed an ugly "fix" to show the problem here: 
https://github.com/krummas/cassandra/commits/marcuse/11102

I'll have another go tomorrow without a fried brain, unless [~slebresne] has an 
obvious fix for this

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data losses in some tables during the few weeks since updating 
> to Cassandra 3.0. I think I have successfully found a test case now.
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run a few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into the C* data folder. I tried sstable-tools to 
> print the table structure; it shows 4 rows, and the data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removal during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> The aaa files disappear during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}





[jira] [Commented] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-01 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127000#comment-15127000
 ] 

Alex Petrov commented on CASSANDRA-10715:
-

It seems that the 2.2.3 error "Unsupported null value for indexed column" is 
not correct, as `mainGuy` isn't indexed. 

On current master, it'd throw the same exact exception if the column `mainGuy` 
is indexed. 

If the column isn't indexed and the table is empty, the query simply returns 0 
results. If there are any results, the assertion is triggered and the query fails:
```
CREATE TABLE foo (k1 int,
  k2 int,
  v1 int,
  v2 int,
  PRIMARY KEY ((k1, k2), v1));

> SELECT * FROM foo where v2 = null ALLOW FILTERING;
 k1 | k2 | v1 | v2
+++

> insert into foo (k1,k2,v1,v2) values (1,1,1,1);
> select * from foo where v2 = null allow filtering;
Traceback (most recent call last):
  File "./bin/cqlsh.py", line 1249, in perform_simple_statement
result = future.result()
  File 
"/Users/ifesdjeen/foss/java/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
```

If I understand the issue correctly, there's just no validation for the query, so it 
fails during the iteration process. For example, `ColumnCondition`, which is used for 
LWT, supports null values for EQ/NEQ. The `Operator` logic might be reused for both 
cases.
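
To make that suggestion concrete, here is a hedged sketch (hypothetical types and 
names, not the real RowFilter/ColumnCondition code) of giving the null comparison value 
an explicit meaning up front, instead of letting an assertion fire while iterating rows:

{code}
// Hypothetical sketch; the enum, method names and chosen semantics are illustrative only.
import java.nio.ByteBuffer;

public class NullFilterSketch
{
    enum Operator { EQ, NEQ }

    /** Roughly what happens today: the filter assumes a non-null comparison value. */
    static boolean isSatisfiedUnchecked(ByteBuffer rowValue, Operator op, ByteBuffer queryValue)
    {
        assert queryValue != null; // the kind of assertion that currently fails on the replica
        boolean equal = queryValue.equals(rowValue);
        return op == Operator.EQ ? equal : !equal;
    }

    /** Suggested shape: handle null explicitly (as LWT conditions do) before touching any rows. */
    static boolean isSatisfiedWithNullSupport(ByteBuffer rowValue, Operator op, ByteBuffer queryValue)
    {
        if (queryValue == null)
            return op == Operator.EQ ? rowValue == null : rowValue != null;
        boolean equal = queryValue.equals(rowValue);
        return op == Operator.EQ ? equal : !equal;
    }

    public static void main(String[] args)
    {
        ByteBuffer stored = ByteBuffer.wrap(new byte[]{ 1 });
        // 'v2 = null' would match only rows where the column is unset -- no AssertionError, no ReadFailure.
        System.out.println(isSatisfiedWithNullSupport(stored, Operator.EQ, null)); // false
        System.out.println(isSatisfiedWithNullSupport(null, Operator.EQ, null));   // true
    }
}
{code}

Whether null should be supported like this or rejected with an InvalidRequest error at 
validation time is exactly the open question on this ticket; the sketch only shows the 
null-aware comparison itself.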

> Filtering on NULL returns ReadFailure exception
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'cons
> istency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> 

[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention

2016-02-01 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126929#comment-15126929
 ] 

Carl Yeksigian commented on CASSANDRA-10779:


Also CommitLogReplayer; I made those changes and am rerunning the tests now.

> Mutations do not block for completion under view lock contention
> 
>
> Key: CASSANDRA-10779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10779
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60
>Reporter: Will Zhang
>Assignee: Tyler Hobbs
> Fix For: 3.0.x, 3.x
>
>
> Hi guys,
> I encountered the following warning message when I was testing an upgrade 
> from v2.2.2 to v3.0.0. 
> It looks like a write time-out but in an uncaught exception. Could this be an 
> easy fix?
> Log file section below. Thank you!
> {code}
>   WARN  [SharedPool-Worker-64] 2015-11-26 14:04:24,678 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-64,10,main]: {}
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>   INFO  [IndexSummaryManager:1] 2015-11-26 14:41:10,527 
> IndexSummaryManager.java:257 - Redistributing index summaries
> {code}





[jira] [Comment Edited] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-01 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127000#comment-15127000
 ] 

Alex Petrov edited comment on CASSANDRA-10715 at 2/1/16 8:49 PM:
-

It seems that the 2.2.3 error "Unsupported null value for indexed column" is 
not correct, as `mainGuy` isn't indexed. 

On current master, it'd throw the same exact exception if the column `mainGuy` 
is indexed. 

If the column isn't indexed and the table is empty, the query simply returns 0 
results. If there are any results, the assertion is triggered and the query fails:

{noformat}
CREATE TABLE foo (k1 int,
  k2 int,
  v1 int,
  v2 int,
  PRIMARY KEY ((k1, k2), v1));

> SELECT * FROM foo where v2 = null ALLOW FILTERING;
 k1 | k2 | v1 | v2
+++

> insert into foo (k1,k2,v1,v2) values (1,1,1,1);
> select * from foo where v2 = null allow filtering;
Traceback (most recent call last):
  File "./bin/cqlsh.py", line 1249, in perform_simple_statement
result = future.result()
  File 
"/Users/ifesdjeen/foss/java/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{noformat}

If I understand the issue correctly, there's just no validation for the query, so it 
fails during the iteration process. For example, `ColumnCondition`, which is used for 
LWT, supports null values for EQ/NEQ. The `Operator` logic might be reused for both 
cases.


was (Author: ifesdjeen):
It seems that the 2.2.3 error "Unsupported null value for indexed column" is 
not correct, as `mainGuy` isn't indexed. 

On current master, it'd throw the same exact exception if the column `mainGuy` 
is indexed. 

If the column isn't indexed and the table is empty, the query simply returns 0 
results. If there are any results, the assertion is triggered and the query fails:
```
CREATE TABLE foo (k1 int,
  k2 int,
  v1 int,
  v2 int,
  PRIMARY KEY ((k1, k2), v1));

> SELECT * FROM foo where v2 = null ALLOW FILTERING;
 k1 | k2 | v1 | v2
+++

> insert into foo (k1,k2,v1,v2) values (1,1,1,1);
> select * from foo where v2 = null allow filtering;
Traceback (most recent call last):
  File "./bin/cqlsh.py", line 1249, in perform_simple_statement
result = future.result()
  File 
"/Users/ifesdjeen/foss/java/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
```

If I understand the issue correctly, there's just no validation for the query, so it 
fails during the iteration process. For example, `ColumnCondition`, which is used for 
LWT, supports null values for EQ/NEQ. The `Operator` logic might be reused for both 
cases.

> Filtering on NULL returns ReadFailure exception
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File 

[jira] [Updated] (CASSANDRA-11041) Make it clear what timestamp_resolution is used for with DTCS

2016-02-01 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11041:
---
Labels: docs-impacting dtcs  (was: docs-impacting)

> Make it clear what timestamp_resolution is used for with DTCS
> -
>
> Key: CASSANDRA-11041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11041
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>  Labels: docs-impacting, dtcs
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> We have had a few cases lately where users misunderstand what 
> timestamp_resolution does; we should:
> * make the option not autocomplete in cqlsh
> * update documentation
> * log a warning (see the sketch below)
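
As a rough illustration of the last bullet, here is a hedged sketch of what such a 
warning could look like when compaction options are validated. The class and method 
names are invented for this example and are not the actual DateTieredCompactionStrategy 
code; the only assumption carried over is that timestamp_resolution defaults to 
MICROSECONDS (what the CQL drivers write) and that a non-default value deserves a 
warning.

{code}
// Illustrative sketch only -- invented class and option-handling names.
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimestampResolutionCheck
{
    private static final Logger logger = LoggerFactory.getLogger(TimestampResolutionCheck.class);
    private static final TimeUnit DEFAULT_RESOLUTION = TimeUnit.MICROSECONDS;

    static TimeUnit validateTimestampResolution(Map<String, String> compactionOptions)
    {
        String raw = compactionOptions.getOrDefault("timestamp_resolution", DEFAULT_RESOLUTION.name());
        TimeUnit resolution = TimeUnit.valueOf(raw.toUpperCase());
        if (resolution != DEFAULT_RESOLUTION)
            logger.warn("Using timestamp_resolution {} -- this should only be changed if writes use " +
                        "non-default, client-supplied timestamps; it does not change how TTLs behave.",
                        resolution);
        return resolution;
    }
}
{code}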





[jira] [Comment Edited] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-01 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127000#comment-15127000
 ] 

Alex Petrov edited comment on CASSANDRA-10715 at 2/1/16 8:49 PM:
-

It seems that the 2.2.3 error "Unsupported null value for indexed column" is 
not correct, as `mainGuy` isn't indexed. 

On current master, it'd throw the same exact exception if the column `mainGuy` 
is indexed. 

If the column isn't indexed and the table is empty, the query simply returns 0 
results. If there are any results, the assertion is triggered and the query fails:

{noformat}
CREATE TABLE foo (k1 int,
  k2 int,
  v1 int,
  v2 int,
  PRIMARY KEY ((k1, k2), v1));

> SELECT * FROM foo where v2 = null ALLOW FILTERING;
 k1 | k2 | v1 | v2
+++

> insert into foo (k1,k2,v1,v2) values (1,1,1,1);
> select * from foo where v2 = null allow filtering;
Traceback (most recent call last):
  File "./bin/cqlsh.py", line 1249, in perform_simple_statement
result = future.result()
  File 
"/Users/ifesdjeen/foss/java/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{noformat}

If I understand the issue correctly, there's just no validation for the query, so it 
fails during the iteration process. For example, `ColumnCondition`, which is used for 
LWT, supports null values for EQ/NEQ. The `Operator` logic might be reused for both 
cases.


was (Author: ifesdjeen):
It seems that the 2.2.3 error "Unsupported null value for indexed column" is 
not correct, as `mainGuy` isn't indexed. 

On current master, it'd throw the same exact exception if the column `mainGuy` 
is indexed. 

If the column isn't indexed and the table is empty, the query simply returns 0 
results. If there are any results, the assertion is triggered and the query fails:

{noformat}
CREATE TABLE foo (k1 int,
  k2 int,
  v1 int,
  v2 int,
  PRIMARY KEY ((k1, k2), v1));

> SELECT * FROM foo where v2 = null ALLOW FILTERING;
 k1 | k2 | v1 | v2
+++

> insert into foo (k1,k2,v1,v2) values (1,1,1,1);
> select * from foo where v2 = null allow filtering;
Traceback (most recent call last):
  File "./bin/cqlsh.py", line 1249, in perform_simple_statement
result = future.result()
  File 
"/Users/ifesdjeen/foss/java/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{noformat}

If I understand the issue correctly, there's just no validation for the query, so it 
fails during the iteration process. For example, `ColumnCondition`, which is used for 
LWT, supports null values for EQ/NEQ. The `Operator` logic might be reused for both 
cases.

> Filtering on NULL returns ReadFailure exception
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File 

[jira] [Resolved] (CASSANDRA-10398) Allow dropping COMPACT STORAGE flag

2016-02-01 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-10398.

Resolution: Duplicate

> Allow dropping COMPACT STORAGE flag
> ---
>
> Key: CASSANDRA-10398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
> Fix For: 3.x
>
>
> To provide a migration path from Thrift to CQL for mixed static/dynamic 
> column families, we need to be able to switch off the {{COMPACT STORAGE}} 
> flag. Otherwise CQL would only recognize the static columns.
> This should be relatively easy after CASSANDRA-8099, but needs extensive 
> testing first.





[jira] [Created] (CASSANDRA-11104) KeysSearcher doesn't filter results by key range

2016-02-01 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-11104:
---

 Summary: KeysSearcher doesn't filter results by key range
 Key: CASSANDRA-11104
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11104
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0.x, 3.x


In 3.0, the check in {{KeysSearcher}} that validates that a hit from the index 
falls within the key range of the command being executed was omitted. The effect 
of this can be observed in a vnode cluster with > 1 node where nodes contain 
non-contiguous ranges. 

Because of the lack of range checking, each range command sent to a given 
replica will return all matching rows, resulting in duplicates in the result 
set (i.e. one duplicate per merged range).
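
A schematic sketch of the missing check (the Range and Hit types below are invented 
stand-ins, not Cassandra's KeysSearcher/ReadCommand classes): index hits need to be 
filtered against the key range owned by the range command before being returned, 
otherwise every replica answers with its full set of matches and the coordinator sees 
duplicates when the ranges are merged.

{code}
// Hypothetical sketch; runnable on its own, not tied to the real storage engine types.
import java.util.ArrayList;
import java.util.List;

public class KeyRangeFilterSketch
{
    /** Stand-in for the token range a single range command covers: [start, end). */
    record Range(long start, long end)
    {
        boolean contains(long token) { return token >= start && token < end; }
    }

    /** An index hit, as a (token, partition key) pair. */
    record Hit(long token, String partitionKey) {}

    static List<Hit> search(List<Hit> indexHits, Range commandRange, boolean applyRangeCheck)
    {
        List<Hit> results = new ArrayList<>();
        for (Hit hit : indexHits)
        {
            if (applyRangeCheck && !commandRange.contains(hit.token()))
                continue; // the omitted check: skip hits that fall outside the command's key range
            results.add(hit);
        }
        return results;
    }

    public static void main(String[] args)
    {
        List<Hit> hits = List.of(new Hit(5, "a"), new Hit(42, "b"), new Hit(77, "c"));
        Range range = new Range(0, 50); // one of several non-contiguous ranges a vnode replica serves

        System.out.println(search(hits, range, false)); // all three hits -> duplicates after merging ranges
        System.out.println(search(hits, range, true));  // only the hits inside [0, 50)
    }
}
{code}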





[jira] [Created] (CASSANDRA-11103) In CQL, can not create table with no predefined column

2016-02-01 Thread Robert Li (JIRA)
Robert Li created CASSANDRA-11103:
-

 Summary: In CQL, can not create table with no predefined column
 Key: CASSANDRA-11103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11103
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Li


We have a service layer that provides Cassandra access to our (thousands of) 
edge and backend servers. The service provides a simple API to set/get data in 
the form of a list of Tags, where a Tag is a structure of (name, value, ttl, 
timestamp) that maps to the data of a Cassandra column.
This service layer acts as a connection pool proxy to Cassandra and provides easy 
access, central usage / resource / performance monitoring, and access control. Apps 
accessing this layer can create a column family through an admin tool, which creates 
the CF using a Thrift client, and set/get data (as a list of Tags) into/from the 
column family.
With the latest CQL, it seems it is not possible to create a column family without 
predetermined column names. One option for us is to create a table with a column of 
type Map (see the sketch below). However, a Map column has two unpleasant implications:
1. Every column has to be prefixed with the name of the map column, which is 
unnatural and redundant.
2. The data type of all columns has to be the same. The ability to store data in its 
native format is lost.
The fact that CQL cannot create a table without predefined columns represents a loss 
of functionality that is available in the Thrift-based client. It's almost a show 
stopper for us, preventing us from migrating from the Thrift-based client to the new 
Java client.
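
For reference, a sketch of the Map-column workaround described above, written against 
the DataStax Java driver (assumed to be the 2.x/3.x API; the keyspace, table, and 
column names are made up for illustration). The comments mark where the two drawbacks 
show up:

{code}
// Sketch of the map-based workaround; not a recommendation, just the shape of it.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class MapWorkaroundSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("test_ks"))
        {
            // One predefined "tags" column instead of free-form, per-row columns.
            session.execute("CREATE TABLE IF NOT EXISTS dynamic_data (" +
                            "  key text PRIMARY KEY," +
                            "  tags map<text, text>)");    // drawback 2: every value is forced into one type

            // Drawback 1: every logical column is really an entry nested under the tags map.
            session.execute("UPDATE dynamic_data SET tags['city'] = ?, tags['age'] = ? WHERE key = ?",
                            "Berlin", "41", "user-1");

            Row row = session.execute("SELECT tags FROM dynamic_data WHERE key = ?", "user-1").one();
            System.out.println(row.getMap("tags", String.class, String.class));
        }
    }
}
{code}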





[jira] [Resolved] (CASSANDRA-10779) Mutations do not block for completion under view lock contention

2016-02-01 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10779.

   Resolution: Fixed
 Assignee: Carl Yeksigian  (was: Tyler Hobbs)
 Reviewer: Tyler Hobbs  (was: Carl Yeksigian)
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.3
   3.0.3

Test run was clean. Committed as 
[839a5ba|https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=commit;h=839a5bab2a7f5385a878e5dc5f8b01bda28fa777]
 and merged forward.

> Mutations do not block for completion under view lock contention
> 
>
> Key: CASSANDRA-10779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10779
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60
>Reporter: Will Zhang
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> Hi guys,
> I encountered the following warning message when I was testing an upgrade 
> from v2.2.2 to v3.0.0. 
> It looks like a write time-out but in an uncaught exception. Could this be an 
> easy fix?
> Log file section below. Thank you!
> {code}
>   WARN  [SharedPool-Worker-64] 2015-11-26 14:04:24,678 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-64,10,main]: {}
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>   INFO  [IndexSummaryManager:1] 2015-11-26 14:41:10,527 
> IndexSummaryManager.java:257 - Redistributing index summaries
> {code}





[2/3] cassandra git commit: Fix build

2016-02-01 Thread carl
Fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30d3b29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30d3b29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30d3b29a

Branch: refs/heads/trunk
Commit: 30d3b29ab2b4aea486323a966d6169cfdc2e2113
Parents: 2e965f0
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:44:05 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:44:05 2016 -0500

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/30d3b29a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index 3a9f5e6..55bdf07 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -105,7 +105,7 @@ public class CommitLogReplayer
 {
 Runnable runnable = new WrappedRunnable()
 {
-public void runMayThrow() throws IOException
+public void runMayThrow() throws ExecutionException
 {
 if 
(Schema.instance.getKSMetaData(mutation.getKeyspaceName()) == null)
 return;



[1/3] cassandra git commit: Fix build

2016-02-01 Thread carl
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 2e965f0e4 -> 30d3b29ab
  refs/heads/trunk 6e7d739d1 -> b24076d11


Fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30d3b29a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30d3b29a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30d3b29a

Branch: refs/heads/cassandra-3.3
Commit: 30d3b29ab2b4aea486323a966d6169cfdc2e2113
Parents: 2e965f0
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:44:05 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:44:05 2016 -0500

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/30d3b29a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index 3a9f5e6..55bdf07 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -105,7 +105,7 @@ public class CommitLogReplayer
 {
 Runnable runnable = new WrappedRunnable()
 {
-public void runMayThrow() throws IOException
+public void runMayThrow() throws ExecutionException
 {
 if 
(Schema.instance.getKSMetaData(mutation.getKeyspaceName()) == null)
 return;



[3/3] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-01 Thread carl
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b24076d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b24076d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b24076d1

Branch: refs/heads/trunk
Commit: b24076d117ba6ab95159bd9bb83b01fc620e2681
Parents: 6e7d739 30d3b29
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:44:20 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:44:20 2016 -0500

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b24076d1/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--



[jira] [Updated] (CASSANDRA-11011) DateTieredCompactionStrategy not compacting sstables in 2.1.12

2016-02-01 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11011:
---
Labels: dtcs  (was: )

> DateTieredCompactionStrategy not compacting  sstables in 2.1.12
> ---
>
> Key: CASSANDRA-11011
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11011
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Ubuntu 14.04.3 LTS
> 2.1.12
>Reporter: Alexander Piavlo
>  Labels: dtcs
>
> The following CF has never compacted, from day one:
> CREATE TABLE globaldb."DynamicParameter" (
> dp_id bigint PRIMARY KEY,
> dp_advertiser_id int,
> dp_application_id int,
> dp_application_user_id bigint,
> dp_banner_id int,
> dp_campaign_id int,
> dp_click_timestamp timestamp,
> dp_country text,
> dp_custom_parameters text,
> dp_flags bigint,
> dp_ip int,
> dp_machine_id text,
> dp_string text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'max_sstable_age_days': '30', 'base_time_seconds': 
> '3600', 'timestamp_resolution': 'MILLISECONDS', 'enabled': 'true', 
> 'min_threshold': '2', 'class': 
> 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.2
> AND default_time_to_live = 10713600
> AND gc_grace_seconds = 1209600
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';





[2/6] cassandra git commit: Mutations do not block for completion under view lock contention

2016-02-01 Thread carl
Mutations do not block for completion under view lock contention

Patch by Carl Yeksigian; reviewed by Tyler Hobbs for CASSANDRA-10779


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/839a5bab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/839a5bab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/839a5bab

Branch: refs/heads/cassandra-3.3
Commit: 839a5bab2a7f5385a878e5dc5f8b01bda28fa777
Parents: b554cb3
Author: Carl Yeksigian 
Authored: Mon Feb 1 16:51:15 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 16:59:57 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  7 +++--
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 73 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7a42916..bed8703 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 7b4f79b..2b62f0e 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -379,19 +379,19 @@ public class Keyspace
 }
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog)
+public CompletableFuture apply(Mutation mutation, boolean 
writeCommitLog)
 {
-apply(mutation, writeCommitLog, true, false);
+return apply(mutation, writeCommitLog, true, false, null);
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog, boolean 
updateIndexes)
+public CompletableFuture apply(Mutation mutation, boolean 
writeCommitLog, boolean updateIndexes)
 {
-apply(mutation, writeCommitLog, updateIndexes, false);
+return apply(mutation, writeCommitLog, updateIndexes, false, null);
 }
 
-public void applyFromCommitLog(Mutation mutation)
+public CompletableFuture applyFromCommitLog(Mutation mutation)
 {
-apply(mutation, false, true, true);
+return apply(mutation, false, true, true, null);
 }
 
 /**
@@ -403,13 +403,18 @@ public class Keyspace
  * @param updateIndexes  false to disable index updates (used by 
CollationController "defragmenting")
  * @param isClReplay true if caller is the commitlog replayer
  */
-public void apply(final Mutation mutation, final boolean writeCommitLog, 
boolean updateIndexes, boolean isClReplay)
+public CompletableFuture apply(final Mutation mutation,
+  final boolean writeCommitLog,
+  boolean updateIndexes,
+  boolean isClReplay,
+  CompletableFuture future)
 {
 if (TEST_FAIL_WRITES && metadata.name.equals(TEST_FAIL_WRITES_KS))
 throw new RuntimeException("Testing write failures");
 
 Lock lock = null;
 boolean requiresViewUpdate = updateIndexes && 
viewManager.updatesAffectView(Collections.singleton(mutation), false);
+final CompletableFuture mark = future == null ? new 
CompletableFuture<>() : future;
 
 if (requiresViewUpdate)
 {
@@ -422,7 +427,10 @@ public class Keyspace
 {
 logger.trace("Could not acquire lock for {}", 
ByteBufferUtil.bytesToHex(mutation.key().getKey()));
 Tracing.trace("Could not acquire MV lock");
-throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
+if (future != null)
+future.completeExceptionally(new 
WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 0, 1));
+else
+throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
 }
 else
 

[3/6] cassandra git commit: Mutations do not block for completion under view lock contention

2016-02-01 Thread carl
Mutations do not block for completion under view lock contention

Patch by Carl Yeksigian; reviewed by Tyler Hobbs for CASSANDRA-10779


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/839a5bab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/839a5bab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/839a5bab

Branch: refs/heads/trunk
Commit: 839a5bab2a7f5385a878e5dc5f8b01bda28fa777
Parents: b554cb3
Author: Carl Yeksigian 
Authored: Mon Feb 1 16:51:15 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 16:59:57 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  7 +++--
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 73 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7a42916..bed8703 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 7b4f79b..2b62f0e 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -379,19 +379,19 @@ public class Keyspace
 }
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog)
+public CompletableFuture apply(Mutation mutation, boolean 
writeCommitLog)
 {
-apply(mutation, writeCommitLog, true, false);
+return apply(mutation, writeCommitLog, true, false, null);
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog, boolean 
updateIndexes)
+public CompletableFuture apply(Mutation mutation, boolean 
writeCommitLog, boolean updateIndexes)
 {
-apply(mutation, writeCommitLog, updateIndexes, false);
+return apply(mutation, writeCommitLog, updateIndexes, false, null);
 }
 
-public void applyFromCommitLog(Mutation mutation)
+public CompletableFuture applyFromCommitLog(Mutation mutation)
 {
-apply(mutation, false, true, true);
+return apply(mutation, false, true, true, null);
 }
 
 /**
@@ -403,13 +403,18 @@ public class Keyspace
  * @param updateIndexes  false to disable index updates (used by 
CollationController "defragmenting")
  * @param isClReplay true if caller is the commitlog replayer
  */
-public void apply(final Mutation mutation, final boolean writeCommitLog, 
boolean updateIndexes, boolean isClReplay)
+public CompletableFuture apply(final Mutation mutation,
+  final boolean writeCommitLog,
+  boolean updateIndexes,
+  boolean isClReplay,
+  CompletableFuture future)
 {
 if (TEST_FAIL_WRITES && metadata.name.equals(TEST_FAIL_WRITES_KS))
 throw new RuntimeException("Testing write failures");
 
 Lock lock = null;
 boolean requiresViewUpdate = updateIndexes && 
viewManager.updatesAffectView(Collections.singleton(mutation), false);
+final CompletableFuture mark = future == null ? new 
CompletableFuture<>() : future;
 
 if (requiresViewUpdate)
 {
@@ -422,7 +427,10 @@ public class Keyspace
 {
 logger.trace("Could not acquire lock for {}", 
ByteBufferUtil.bytesToHex(mutation.key().getKey()));
 Tracing.trace("Could not acquire MV lock");
-throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
+if (future != null)
+future.completeExceptionally(new 
WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 0, 1));
+else
+throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
 }
 else
 

[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-01 Thread carl
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e7d739d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e7d739d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e7d739d

Branch: refs/heads/trunk
Commit: 6e7d739d12f6e7cab5fc9d33a28b40ad150c84e6
Parents: 4a241f6 2e965f0
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:04:15 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:04:15 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  4 ++-
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 72 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e7d739d/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e7d739d/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --cc src/java/org/apache/cassandra/db/Keyspace.java
index 6122479,2b62f0e..72bcc82
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@@ -423,48 -412,25 +427,52 @@@ public class Keyspac
  if (TEST_FAIL_WRITES && metadata.name.equals(TEST_FAIL_WRITES_KS))
  throw new RuntimeException("Testing write failures");
  
 -Lock lock = null;
 +Lock[] locks = null;
  boolean requiresViewUpdate = updateIndexes && 
viewManager.updatesAffectView(Collections.singleton(mutation), false);
+ final CompletableFuture mark = future == null ? new 
CompletableFuture<>() : future;
  
  if (requiresViewUpdate)
  {
  mutation.viewLockAcquireStart.compareAndSet(0L, 
System.currentTimeMillis());
 -lock = ViewManager.acquireLockFor(mutation.key().getKey());
  
 -if (lock == null)
 +// the order of lock acquisition doesn't matter (from a deadlock 
perspective) because we only use tryLock()
 +Collection columnFamilyIds = mutation.getColumnFamilyIds();
 +Iterator idIterator = columnFamilyIds.iterator();
 +locks = new Lock[columnFamilyIds.size()];
 +
 +for (int i = 0; i < columnFamilyIds.size(); i++)
  {
 -if ((System.currentTimeMillis() - mutation.createdAt) > 
DatabaseDescriptor.getWriteRpcTimeout())
 +UUID cfid = idIterator.next();
 +int lockKey = Objects.hash(mutation.key().getKey(), cfid);
 +Lock lock = ViewManager.acquireLockFor(lockKey);
 +if (lock == null)
  {
 -logger.trace("Could not acquire lock for {}", 
ByteBufferUtil.bytesToHex(mutation.key().getKey()));
 -Tracing.trace("Could not acquire MV lock");
 -if (future != null)
 -future.completeExceptionally(new 
WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 0, 1));
 +// we will either time out or retry, so release all 
acquired locks
 +for (int j = 0; j < i; j++)
 +locks[j].unlock();
 +
 +if ((System.currentTimeMillis() - mutation.createdAt) > 
DatabaseDescriptor.getWriteRpcTimeout())
 +{
 +logger.trace("Could not acquire lock for {} and table 
{}", ByteBufferUtil.bytesToHex(mutation.key().getKey()), 
columnFamilyStores.get(cfid).name);
 +Tracing.trace("Could not acquire MV lock");
- throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
++if (future != null)
++future.completeExceptionally(new 
WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 0, 1));
++else
++throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
 +}
  else
 -throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
 +{
 +// This view update can't happen right now. so rather 
than keep this thread busy
 +// we will re-apply ourself to the queue and try 
again later
 +

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-01 Thread carl
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e965f0e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e965f0e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e965f0e

Branch: refs/heads/trunk
Commit: 2e965f0e43ef0d7d282338fca130b4b545effe7b
Parents: 4d5873d 839a5ba
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:01:03 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:01:03 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  4 ++-
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 72 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/CHANGES.txt
--
diff --cc CHANGES.txt
index aa4f981,bed8703..bad296b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
 (CASSANDRA-11003)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/src/java/org/apache/cassandra/db/Mutation.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --cc src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index e97b36e,b4472ed..3a9f5e6
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@@ -88,65 -84,6 +90,65 @@@ public class CommitLogReplaye
  private final ReplayFilter replayFilter;
  private final CommitLogArchiver archiver;
  
 +/*
 + * Wrapper around initiating mutations read from the log to make it 
possible
 + * to spy on initiated mutations for test
 + */
 +@VisibleForTesting
 +public static class MutationInitiator
 +{
 +protected Future initiateMutation(final Mutation mutation,
 +   final long segmentId,
 +   final int serializedSize,
 +   final long entryLocation,
 +   final CommitLogReplayer 
clr)
 +{
 +Runnable runnable = new WrappedRunnable()
 +{
 +public void runMayThrow() throws IOException
 +{
 +if 
(Schema.instance.getKSMetaData(mutation.getKeyspaceName()) == null)
 +return;
 +if (clr.pointInTimeExceeded(mutation))
 +return;
 +
 +final Keyspace keyspace = 
Keyspace.open(mutation.getKeyspaceName());
 +
 +// Rebuild the mutation, omitting column families that
 +//a) the user has requested that we ignore,
 +//b) have already been flushed,
 +// or c) are part of a cf that was dropped.
 +// Keep in mind that the cf.name() is suspect. do every 
thing based on the cfid instead.
 +Mutation newMutation = null;
 +for (PartitionUpdate update : 
clr.replayFilter.filter(mutation))
 +{
 +if (Schema.instance.getCF(update.metadata().cfId) == 
null)
 +continue; // dropped
 +
 +ReplayPosition rp = 
clr.cfPositions.get(update.metadata().cfId);
 +
 +// replay if current segment is newer than last 
flushed one or,
 +// if it is the last known segment, if we are after 
the replay position
 +if (segmentId > rp.segment || (segmentId == 
rp.segment && entryLocation > rp.position))
 +{
 +if (newMutation == null)
 +  

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-01 Thread carl
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e965f0e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e965f0e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e965f0e

Branch: refs/heads/cassandra-3.3
Commit: 2e965f0e43ef0d7d282338fca130b4b545effe7b
Parents: 4d5873d 839a5ba
Author: Carl Yeksigian 
Authored: Mon Feb 1 17:01:03 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 17:01:03 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  4 ++-
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 72 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/CHANGES.txt
--
diff --cc CHANGES.txt
index aa4f981,bed8703..bad296b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
 (CASSANDRA-11003)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/src/java/org/apache/cassandra/db/Mutation.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e965f0e/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --cc src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index e97b36e,b4472ed..3a9f5e6
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@@ -88,65 -84,6 +90,65 @@@ public class CommitLogReplaye
  private final ReplayFilter replayFilter;
  private final CommitLogArchiver archiver;
  
 +/*
 + * Wrapper around initiating mutations read from the log to make it possible
 + * to spy on initiated mutations for test
 + */
 +@VisibleForTesting
 +public static class MutationInitiator
 +{
 +protected Future initiateMutation(final Mutation mutation,
 +   final long segmentId,
 +   final int serializedSize,
 +   final long entryLocation,
 +   final CommitLogReplayer clr)
 +{
 +Runnable runnable = new WrappedRunnable()
 +{
 +public void runMayThrow() throws IOException
 +{
 +if (Schema.instance.getKSMetaData(mutation.getKeyspaceName()) == null)
 +return;
 +if (clr.pointInTimeExceeded(mutation))
 +return;
 +
 +final Keyspace keyspace = Keyspace.open(mutation.getKeyspaceName());
 +
 +// Rebuild the mutation, omitting column families that
 +//a) the user has requested that we ignore,
 +//b) have already been flushed,
 +// or c) are part of a cf that was dropped.
 +// Keep in mind that the cf.name() is suspect. do every thing based on the cfid instead.
 +Mutation newMutation = null;
 +for (PartitionUpdate update : clr.replayFilter.filter(mutation))
 +{
 +if (Schema.instance.getCF(update.metadata().cfId) == null)
 +continue; // dropped
 +
 +ReplayPosition rp = clr.cfPositions.get(update.metadata().cfId);
 +
 +// replay if current segment is newer than last flushed one or,
 +// if it is the last known segment, if we are after the replay position
 +if (segmentId > rp.segment || (segmentId == rp.segment && entryLocation > rp.position))
 +{
 +if (newMutation == null)
 +
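
For reference, a self-contained sketch of the replay-position check in the hunk above; SegmentPosition and shouldReplay are made-up names for illustration only, not Cassandra classes:

{code}
// Illustrative only: mirrors the segment/position comparison in the diff above.
public final class ReplayPositionCheck
{
    static final class SegmentPosition
    {
        final long segment;
        final int position;

        SegmentPosition(long segment, int position)
        {
            this.segment = segment;
            this.position = position;
        }
    }

    // Replay a commit log entry only if its segment is newer than the last flushed
    // segment, or it is in the same segment but located after the flushed position.
    static boolean shouldReplay(long entrySegmentId, int entryLocation, SegmentPosition lastFlushed)
    {
        return entrySegmentId > lastFlushed.segment
            || (entrySegmentId == lastFlushed.segment && entryLocation > lastFlushed.position);
    }

    public static void main(String[] args)
    {
        SegmentPosition flushed = new SegmentPosition(10, 5000);
        System.out.println(shouldReplay(11, 0, flushed));     // true: newer segment
        System.out.println(shouldReplay(10, 6000, flushed));  // true: same segment, later position
        System.out.println(shouldReplay(10, 4000, flushed));  // false: already flushed
    }
}
{code}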

[1/6] cassandra git commit: Mutations do not block for completion under view lock contention

2016-02-01 Thread carl
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 b554cb3da -> 839a5bab2
  refs/heads/cassandra-3.3 4d5873d81 -> 2e965f0e4
  refs/heads/trunk 4a241f626 -> 6e7d739d1


Mutations do not block for completion under view lock contention

Patch by Carl Yeksigian; reviewed by Tyler Hobbs for CASSANDRA-10779


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/839a5bab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/839a5bab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/839a5bab

Branch: refs/heads/cassandra-3.0
Commit: 839a5bab2a7f5385a878e5dc5f8b01bda28fa777
Parents: b554cb3
Author: Carl Yeksigian 
Authored: Mon Feb 1 16:51:15 2016 -0500
Committer: Carl Yeksigian 
Committed: Mon Feb 1 16:59:57 2016 -0500

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 32 +---
 src/java/org/apache/cassandra/db/Mutation.java  | 19 ++--
 .../cassandra/db/MutationVerbHandler.java   | 25 ---
 .../db/commitlog/CommitLogReplayer.java |  7 +++--
 .../cassandra/service/paxos/PaxosState.java | 12 ++--
 6 files changed, 73 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7a42916..bed8703 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/839a5bab/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 7b4f79b..2b62f0e 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -379,19 +379,19 @@ public class Keyspace
 }
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog)
+public CompletableFuture apply(Mutation mutation, boolean writeCommitLog)
 {
-apply(mutation, writeCommitLog, true, false);
+return apply(mutation, writeCommitLog, true, false, null);
 }
 
-public void apply(Mutation mutation, boolean writeCommitLog, boolean updateIndexes)
+public CompletableFuture apply(Mutation mutation, boolean writeCommitLog, boolean updateIndexes)
 {
-apply(mutation, writeCommitLog, updateIndexes, false);
+return apply(mutation, writeCommitLog, updateIndexes, false, null);
 }
 
-public void applyFromCommitLog(Mutation mutation)
+public CompletableFuture applyFromCommitLog(Mutation mutation)
 {
-apply(mutation, false, true, true);
+return apply(mutation, false, true, true, null);
 }
 
 /**
@@ -403,13 +403,18 @@ public class Keyspace
  * @param updateIndexes  false to disable index updates (used by CollationController "defragmenting")
  * @param isClReplay true if caller is the commitlog replayer
  */
-public void apply(final Mutation mutation, final boolean writeCommitLog, boolean updateIndexes, boolean isClReplay)
+public CompletableFuture apply(final Mutation mutation,
+  final boolean writeCommitLog,
+  boolean updateIndexes,
+  boolean isClReplay,
+  CompletableFuture future)
 {
 if (TEST_FAIL_WRITES && metadata.name.equals(TEST_FAIL_WRITES_KS))
 throw new RuntimeException("Testing write failures");
 
 Lock lock = null;
 boolean requiresViewUpdate = updateIndexes && viewManager.updatesAffectView(Collections.singleton(mutation), false);
+final CompletableFuture mark = future == null ? new CompletableFuture<>() : future;
 
 if (requiresViewUpdate)
 {
@@ -422,7 +427,10 @@ public class Keyspace
 {
 logger.trace("Could not acquire lock for {}", ByteBufferUtil.bytesToHex(mutation.key().getKey()));
 Tracing.trace("Could not acquire MV lock");
-throw new WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 0, 1);
+if (future != null)
+future.completeExceptionally(new WriteTimeoutException(WriteType.VIEW, ConsistencyLevel.LOCAL_ONE, 
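
A rough, self-contained sketch of the caller-side pattern this change enables: instead of throwing WriteTimeoutException when the view lock cannot be acquired, apply() completes a future exceptionally and the caller reacts to it asynchronously. The AckSender/FailureSender names below are made up for illustration and are not part of the actual MutationVerbHandler code.

{code}
import java.util.concurrent.CompletableFuture;

// Illustrative sketch of consuming a CompletableFuture-based apply().
public class AsyncApplyExample
{
    interface AckSender { void ack(); }
    interface FailureSender { void fail(Throwable cause); }

    // Stand-in for Keyspace.apply(...): the returned future completes when the
    // mutation has been applied, or completes exceptionally on a lock timeout.
    static CompletableFuture<Void> apply(String mutation)
    {
        return CompletableFuture.runAsync(() -> { /* apply mutation, update views */ });
    }

    static void handleMutation(String mutation, AckSender ack, FailureSender failure)
    {
        apply(mutation).whenComplete((ignored, error) ->
        {
            if (error != null)
                failure.fail(error);   // e.g. a view-lock timeout surfaced via the future
            else
                ack.ack();             // reply to the coordinator once the write completes
        });
    }

    public static void main(String[] args) throws Exception
    {
        handleMutation("INSERT ...",
                       () -> System.out.println("ack sent"),
                       cause -> System.out.println("failure: " + cause));
        Thread.sleep(100); // give the async apply a moment in this toy example
    }
}
{code}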

[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127344#comment-15127344
 ] 

Stefania commented on CASSANDRA-10938:
--

Thank you for the tests and the review Paulo!

bq. Still didn't reproduce, so I think it was some temporary environmental 
problem in my machine.

It was probably the consistency level not being set to ALL. It would have 
affected both tests, since the replication factor was changed from 1 to 3 in 
each of them.
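
(Illustration only: with a replication factor of 3, a verification read at the default CL=ONE may be answered by a replica that has not yet received every row, so the count check has to be pinned to CL=ALL. A minimal sketch of that idea with the DataStax Java driver; the dtests themselves do this from Python, and the contact point and table name below are made up.)

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// Hypothetical verification read: with RF=3, counting at CL=ONE may miss rows
// that have not reached the queried replica yet, so pin the read to CL=ALL.
public class CountAtAll
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build())
        {
            Session session = cluster.connect();
            Statement count = new SimpleStatement("SELECT count(*) FROM ks.blogposts") // made-up table
                                  .setConsistencyLevel(ConsistencyLevel.ALL);
            Row row = session.execute(count).one();
            System.out.println("rows imported: " + row.getLong(0));
        }
    }
}
{code}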



> test_bulk_round_trip_blogposts is failing occasionally
> --
>
> Key: CASSANDRA-10938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10938
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, 
> node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr
>
>
> We get timeouts occasionally that cause the number of records to be incorrect:
> http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10428) cqlsh: Include sub-second precision in timestamps by default

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125619#comment-15125619
 ] 

Stefania edited comment on CASSANDRA-10428 at 2/2/16 1:38 AM:
--

Thanks Paulo. 

Repeating links to patch and dtest pull request here:

|[patch|https://github.com/stef1927/cassandra/commits/10428]|
|[dtest pull request|https://github.com/riptano/cassandra-dtest/pull/773]|

The patch is only for trunk. 

The dtest pull request should also be merged to prevent some existing tests 
from failing, -but it has not yet been reviewed by a TE as of now-.


was (Author: stefania):
Thanks Paulo. 

Repeating links to patch and dtest pull request here:

|[patch|https://github.com/stef1927/cassandra/commits/10428]|
|[dtest pull request|https://github.com/riptano/cassandra-dtest/pull/773]|

The patch is only for trunk. 

The dtest pull request should also be merged to prevent some existing tests 
from failing, but it has not yet been reviewed by a TE as of now.

> cqlsh: Include sub-second precision in timestamps by default
> 
>
> Key: CASSANDRA-10428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10428
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: OSX 10.10.2
>Reporter: Chandran Anjur Narasimhan
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Query with >= timestamp works. But the exact timestamp value is not working.
> {noformat}
> NCHAN-M-D0LZ:bin nchan$ ./cqlsh
> Connected to CCC Multi-Region Cassandra Cluster at :.
> [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
> Use HELP for help.
> cqlsh>
> {noformat}
> {panel:title=Schema|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> desc COLUMNFAMILY ez_task_result ;
> CREATE TABLE ccc.ez_task_result (
> submissionid text,
> ezid text,
> name text,
> time timestamp,
> analyzed_index_root text,
> ...
> ...
> PRIMARY KEY (submissionid, ezid, name, time)
> {panel}
> {panel:title=Working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> select submissionid, ezid, name, time, state, status, 
> translated_criteria_status from ez_task_result where 
> submissionid='760dd154670811e58c04005056bb6ff0' and 
> ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and 
> time>='2015-09-29 20:54:23-0700';
>  submissionid | ezid | name   
>   | time | state | status  | 
> translated_criteria_status
> --+--+--+--+---+-+
>  760dd154670811e58c04005056bb6ff0 | 760dd6de670811e594fc005056bb6ff0 | 
> run-sanities | 2015-09-29 20:54:23-0700 | EXECUTING | IN_PROGRESS |   
> run-sanities started
> (1 rows)
> cqlsh:ccc>
> {panel}
> {panel:title=Not 
> working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> select submissionid, ezid, name, time, state, status, 
> translated_criteria_status from ez_task_result where 
> submissionid='760dd154670811e58c04005056bb6ff0' and 
> ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and 
> time='2015-09-29 20:54:23-0700';
>  submissionid | ezid | name | time | analyzed_index_root | analyzed_log_path 
> | clientid | end_time | jenkins_path | log_file_path | path_available | 
> path_to_task | required_for_overall_status | start_time | state | status | 
> translated_criteria_status | type
> --+--+--+--+-+---+--+--+--+---++--+-++---+++--
> (0 rows)
> cqlsh:ccc>
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127558#comment-15127558
 ] 

Stefania edited comment on CASSANDRA-11030 at 2/2/16 3:10 AM:
--

You are correct, it finally works. I think I initially inserted the data by 
copy and paste in a git bash terminal (launched via ConEmu), the only one where 
I could paste a Unicode character, but that terminal's default encoding was 
cp1252, since I only worked out today how to change it to cp65001. So even 
if I inserted the data with --encoding=UTF-8, it would probably have caused 
problems. From other terminals (command prompt, PowerShell) I could not paste 
the character into cqlsh, and trying to insert something like u'\u' would 
give a syntax error. 

The following works however (unicode.cql is encoded with utf-8):

{code}
chcp 65001
C:\Users\stefania\git\cstar\cassandra>type unicode.cql
INSERT INTO test.test (val) VALUES ('não');
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8 
--file=unicode.cql
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from test.test;

 val
-
 não
{code}

The source command also works *provided the encoding specified via the command 
line is the same as the file encoding*, otherwise we get a missing character 
glyph (a square). 

Inserting the character directly from git bash also works now, but only because 
I changed its code page to 65001; otherwise it causes the original problem.

You are probably right regarding changing the default encoding; I'm +1 to changing 
it to 'utf-8' if you want. Also, shouldn't {{do_source}} use the same encoding 
as the file encoding? I think we should also stress that whichever terminal 
people are using on Windows, it should have the same encoding as the one used 
by cqlsh.

We can commit this ticket as is and open a new ticket re. default encoding or 
change it here, up to you.


was (Author: stefania):
You are correct, it finally works. I think I inserted the data initially by 
copy and paste in a git bash terminal (launched via ConEmu), the only one where 
I could paste a unicode character, but for this terminal the default encoding 
was cp1252 since I only worked out today how to change it to cp65001. So even 
if I inserted the data with --encoding=UTF-8 it would have probably caused 
problems. From other terminals (command prompt, power shell) I could not paste 
the character into cqlsh and trying to insert something like u'\u' would 
give a syntax error. 

The following works however (unicode.cql is encoded with utf-8):

{code}
chcp 65001
C:\Users\stefania\git\cstar\cassandra>type unicode.cql
INSERT INTO test.test (val) VALUES ('não');
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8 
--file=unicode.cql
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from test.test;

 val
-
 não
{code}

The source command also works *provided the encoding specified via the command 
line is the same as the file encoding*, otherwise we get a missing character 
glyph (a square). 

Inserting the character directly from git bash also works now, but because I 
changed the code page to 65001 for it, otherwise it causes the original problem.

You are probably right regarding changing default encoding, I'm + 1 to change 
it to 'utf-8' if you want. Also, shouldn't {{do_source}} use the same encoding 
as the file encoding? I think we should also stress that whichever terminal 
people are using on Windows, it should have the same encoding as the one used 
by cqlsh.

We can commit this ticket as it and open a new ticket re. default encoding or 
change it here, up to you.

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127558#comment-15127558
 ] 

Stefania commented on CASSANDRA-11030:
--

You are correct, it finally works. I think I initially inserted the data by 
copy and paste in a git bash terminal (launched via ConEmu), the only one where 
I could paste a Unicode character, but that terminal's default encoding was 
cp1252, since I only worked out today how to change it to cp65001. So even 
if I inserted the data with --encoding=UTF-8, it would probably have caused 
problems. From other terminals (command prompt, PowerShell) I could not paste 
the character into cqlsh, and trying to insert something like u'\u' would 
give a syntax error. 

The following works however (unicode.cql is encoded with utf-8):

{code}
chcp 65001
C:\Users\stefania\git\cstar\cassandra>type unicode.cql
INSERT INTO test.test (val) VALUES ('não');
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8 
--file=unicode.cql
C:\Users\stefania\git\cstar\cassandra>bin\cqlsh.bat --encoding=UTF-8
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from test.test;

 val
-
 não
{code}

The source command also works *provided the encoding specified via the command 
line is the same as the file encoding*, otherwise we get a missing character 
glyph (a square). 

Inserting the character directly from git bash also works now, but only because 
I changed its code page to 65001; otherwise it causes the original problem.

You are probably right regarding changing the default encoding; I'm +1 to changing 
it to 'utf-8' if you want. Also, shouldn't {{do_source}} use the same encoding 
as the file encoding? I think we should also stress that whichever terminal 
people are using on Windows, it should have the same encoding as the one used 
by cqlsh.

We can commit this ticket as is and open a new ticket re. default encoding or 
change it here, up to you.

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127393#comment-15127393
 ] 

Stefania commented on CASSANDRA-10938:
--

Please merge the [pull 
request|https://github.com/riptano/cassandra-dtest/pull/776] as well after 
merging the patch (details in the 3rd comment up). 

> test_bulk_round_trip_blogposts is failing occasionally
> --
>
> Key: CASSANDRA-10938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10938
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, 
> node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr
>
>
> We get timeouts occasionally that cause the number of records to be incorrect:
> http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-01 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdcc059a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdcc059a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdcc059a

Branch: refs/heads/cassandra-3.3
Commit: bdcc059a477fd17a90e5d48ac8256a8dc5798c1f
Parents: 8bc8fa3 b554cb3
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 08:44:54 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:44:54 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdcc059a/CHANGES.txt
--
diff --cc CHANGES.txt
index 9d58926,7a42916..394faaf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
 (CASSANDRA-11003)
   * Fix DISTINCT queries in mixed version clusters (CASSANDRA-10762)



[3/6] cassandra git commit: Invalidate legacy schema tables when unloading them

2016-02-01 Thread samt
Invalidate legacy schema tables when unloading them

Patch by Mike Adamson; reviewed by Sam Tunnicliffe for CASSANDRA-11071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b554cb3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b554cb3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b554cb3d

Branch: refs/heads/trunk
Commit: b554cb3da327a7522dbd60209421073bbe10317c
Parents: 682812d
Author: Mike Adamson 
Authored: Wed Jan 27 12:36:36 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:41:41 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1a1abc0..7a42916 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)
  * Fix DISTINCT queries in mixed version clusters (CASSANDRA-10762)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
index 3588a92..afa0f38 100644
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
@@ -133,6 +133,7 @@ public final class LegacySchemaMigrator
 systemTables = systemTables.without(table.cfName);
 
 LegacySchemaTables.forEach(Schema.instance::unload);
+LegacySchemaTables.forEach((cfm) -> 
org.apache.cassandra.db.Keyspace.openAndGetStore(cfm).invalidate());
 
 
Schema.instance.setKeyspaceMetadata(systemKeyspace.withSwapped(systemTables));
 }



[1/6] cassandra git commit: Invalidate legacy schema tables when unloading them

2016-02-01 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 682812d1f -> b554cb3da
  refs/heads/cassandra-3.3 8bc8fa369 -> bdcc059a4
  refs/heads/trunk c7829a0a6 -> 32e2e83cf


Invalidate legacy schema tables when unloading them

Patch by Mike Adamson; reviewed by Sam Tunnicliffe for CASSANDRA-11071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b554cb3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b554cb3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b554cb3d

Branch: refs/heads/cassandra-3.0
Commit: b554cb3da327a7522dbd60209421073bbe10317c
Parents: 682812d
Author: Mike Adamson 
Authored: Wed Jan 27 12:36:36 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:41:41 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1a1abc0..7a42916 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)
  * Fix DISTINCT queries in mixed version clusters (CASSANDRA-10762)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
index 3588a92..afa0f38 100644
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
@@ -133,6 +133,7 @@ public final class LegacySchemaMigrator
 systemTables = systemTables.without(table.cfName);
 
 LegacySchemaTables.forEach(Schema.instance::unload);
+LegacySchemaTables.forEach((cfm) -> 
org.apache.cassandra.db.Keyspace.openAndGetStore(cfm).invalidate());
 
 
Schema.instance.setKeyspaceMetadata(systemKeyspace.withSwapped(systemTables));
 }



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-01 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdcc059a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdcc059a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdcc059a

Branch: refs/heads/trunk
Commit: bdcc059a477fd17a90e5d48ac8256a8dc5798c1f
Parents: 8bc8fa3 b554cb3
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 08:44:54 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:44:54 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdcc059a/CHANGES.txt
--
diff --cc CHANGES.txt
index 9d58926,7a42916..394faaf
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
 (CASSANDRA-11003)
   * Fix DISTINCT queries in mixed version clusters (CASSANDRA-10762)



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-01 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32e2e83c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32e2e83c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32e2e83c

Branch: refs/heads/trunk
Commit: 32e2e83cfc704220708f83a0519e08d95e9eccd1
Parents: c7829a0 bdcc059
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 08:46:38 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:46:38 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32e2e83c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32e2e83c/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--



[2/6] cassandra git commit: Invalidate legacy schema tables when unloading them

2016-02-01 Thread samt
Invalidate legacy schema tables when unloading them

Patch by Mike Adamson; reviewed by Sam Tunnicliffe for CASSANDRA-11071


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b554cb3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b554cb3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b554cb3d

Branch: refs/heads/cassandra-3.3
Commit: b554cb3da327a7522dbd60209421073bbe10317c
Parents: 682812d
Author: Mike Adamson 
Authored: Wed Jan 27 12:36:36 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 08:41:41 2016 +

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1a1abc0..7a42916 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
(CASSANDRA-11003)
  * Fix DISTINCT queries in mixed version clusters (CASSANDRA-10762)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b554cb3d/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
index 3588a92..afa0f38 100644
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
@@ -133,6 +133,7 @@ public final class LegacySchemaMigrator
 systemTables = systemTables.without(table.cfName);
 
 LegacySchemaTables.forEach(Schema.instance::unload);
+LegacySchemaTables.forEach((cfm) -> 
org.apache.cassandra.db.Keyspace.openAndGetStore(cfm).invalidate());
 
 
Schema.instance.setKeyspaceMetadata(systemKeyspace.withSwapped(systemTables));
 }



[jira] [Updated] (CASSANDRA-11071) Invalidate legacy schema CFSs at startup

2016-02-01 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11071:

Reviewer: Sam Tunnicliffe

> Invalidate legacy schema CFSs at startup
> 
>
> Key: CASSANDRA-11071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Lifecycle
>Reporter: Sam Tunnicliffe
>Assignee: Mike Adamson
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> {{ColumnFamilyStore}} instances are created for legacy schema tables at 
> startup when {{SystemKeyspace}} is initialized as they may be required for 
> schema migration during upgrade. Before startup completes, the schema info 
> for these is expunged from {{system_schema}}, but the {{CFS}} instances are 
> not invalidated, which leaves their mbeans registered and visible via 
> nodetool & JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread Jaroslav Kamenik (JIRA)
Jaroslav Kamenik created CASSANDRA-11102:


 Summary: Data lost during compaction
 Key: CASSANDRA-11102
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
 Environment: Cassandra 3.2.1 (single node, 5 node cluster)
JDK 8
Reporter: Jaroslav Kamenik
Priority: Blocker


We have experienced data losses in some tables during the few weeks since updating to 
Cassandra 3.0. I think I have successfully found a test case now. 

Step one - test table:

CREATE TABLE aaa (
r int,
c1 int,
c2 ascii,
PRIMARY KEY (r, c1, c2));

Step two - run few queries:

insert into aaa (r, c1, c2) values (1,2,'A');
delete from aaa where r=1 and c1=2 and c2='B';
insert into aaa (r, c1, c2) values (2,3,'A');
delete from aaa where r=2 and c1=3 and c2='B';
insert into aaa (r, c1, c2) values (3,4,'A');
delete from aaa where r=3 and c1=4 and c2='B';
insert into aaa (r, c1, c2) values (4,5,'A');
delete from aaa where r=4 and c1=5 and c2='B';

It creates 4 rows (select count says 4) and 4 tombstones.

Step 3 - Restart Cassandra

You will see new files written into the C* data folder. I tried sstable-tools to 
print the table structure; it shows 4 rows, and the data and tombstones are there.

Step 4 - set GC grace to 1 to force tombstone removal during compaction.

alter table aaa with GC_GRACE_SECONDS = 1;

Step 5 - Compact tables

./nodetool compact

aaa files disappear during compaction. 
select count(*) says 0
compaction history says
... aaa  2016-02-01T14:24:01.433   329   0   {}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11087) Queries on compact storage tables in mixed version clusters can return incorrect results

2016-02-01 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11087:

Component/s: (was: Local Write-Read Paths)
 Coordination

> Queries on compact storage tables in mixed version clusters can return 
> incorrect results
> 
>
> Key: CASSANDRA-11087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.x, 3.x
>
>
> Whilst writing a dtest for CASSANDRA-11045, it becomes apparent that queries 
> on compact storage tables are broken during the 3.0 upgrade (and this has 
> probably been the case since day 1). 
> tl;dr In a cluster with a mix of < 3.0 and 3.0 nodes, reads on COMPACT 
> STORAGE tables may not include all results. 
> To repro: tables are created and data written before any nodes are upgraded 
> to 3.0+, some nodes are then upgraded putting the cluster into a mixed state.
> Now, when a query is run where the coordinator is a < 3.0 node, any 3.0+ 
> replica which has not yet run upgradesstables always returns 0 results.  Once 
> upgradesstables is run, the replica returns the correct results. Likewise, if 
> the data is inserted after the node is upgraded, the results are correct. If 
> the 3.0 node acts as the coordinator, the results are also correct and so 
> once all nodes are upgraded, the problem goes away.
> The behaviour can be seen for both single partition and range requests as 
> [this 
> dtest|https://github.com/beobal/cassandra-dtest/commit/91bb9ffd8fb761ad3454187d2f05f05a6e7af930]
>  demonstrates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Fix CHANGES.txt

2016-02-01 Thread samt
Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d5873d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d5873d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d5873d8

Branch: refs/heads/trunk
Commit: 4d5873d81bc13cbe0c8cc6cf04836a58801c204c
Parents: bdcc059
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 16:12:02 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 16:12:02 2016 +

--
 CHANGES.txt | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5873d8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 394faaf..aa4f981 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -358,8 +358,7 @@ Merged from 2.2:
  * Fall back to 1/4 commitlog volume for commitlog_total_space on small disks
(CASSANDRA-10199)
 Merged from 2.1:
- * ure SSTables for legacy KEYS indexes can be read (CASSANDRA-11045)
-dded configurable warning threshold for GC duration (CASSANDRA-8907)
+ * Added configurable warning threshold for GC duration (CASSANDRA-8907)
  * Fix handling of streaming EOF (CASSANDRA-10206)
  * Only check KeyCache when it is enabled
  * Change streaming_socket_timeout_in_ms default to 1 hour (CASSANDRA-8611)



[1/3] cassandra git commit: Fix CHANGES.txt

2016-02-01 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 bdcc059a4 -> 4d5873d81
  refs/heads/trunk 32e2e83cf -> 4a241f626


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d5873d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d5873d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d5873d8

Branch: refs/heads/cassandra-3.3
Commit: 4d5873d81bc13cbe0c8cc6cf04836a58801c204c
Parents: bdcc059
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 16:12:02 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 16:12:02 2016 +

--
 CHANGES.txt | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d5873d8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 394faaf..aa4f981 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -358,8 +358,7 @@ Merged from 2.2:
  * Fall back to 1/4 commitlog volume for commitlog_total_space on small disks
(CASSANDRA-10199)
 Merged from 2.1:
- * ure SSTables for legacy KEYS indexes can be read (CASSANDRA-11045)
-dded configurable warning threshold for GC duration (CASSANDRA-8907)
+ * Added configurable warning threshold for GC duration (CASSANDRA-8907)
  * Fix handling of streaming EOF (CASSANDRA-10206)
  * Only check KeyCache when it is enabled
  * Change streaming_socket_timeout_in_ms default to 1 hour (CASSANDRA-8611)



[3/3] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-01 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a241f62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a241f62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a241f62

Branch: refs/heads/trunk
Commit: 4a241f626e42f3759c17551f316d99d230384b5a
Parents: 32e2e83 4d5873d
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 16:12:09 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon Feb 1 16:12:09 2016 +

--
 CHANGES.txt | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a241f62/CHANGES.txt
--



[jira] [Updated] (CASSANDRA-10070) Automatic repair scheduling

2016-02-01 Thread Marcus Olsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Olsson updated CASSANDRA-10070:
--
Attachment: Distributed Repair Scheduling.doc

This is a draft of the proposal; it would be great to get some comments on it! 
:)

> Automatic repair scheduling
> ---
>
> Key: CASSANDRA-10070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10070
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
> Fix For: 3.x
>
> Attachments: Distributed Repair Scheduling.doc
>
>
> Scheduling and running repairs in a Cassandra cluster is most often a 
> required task, but this can both be hard for new users and it also requires a 
> bit of manual configuration. There are good tools out there that can be used 
> to simplify things, but wouldn't this be a good feature to have inside of 
> Cassandra? To automatically schedule and run repairs, so that when you start 
> up your cluster it basically maintains itself in terms of normal 
> anti-entropy, with the possibility for manual configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11102:
---
Fix Version/s: 3.0.3

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data loses in some tables during few weeks since update 
> to cassandra 3.0. I thing I successfully found test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> aaa files dissapeares during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10406) Nodetool supports to rebuild from specific ranges.

2016-02-01 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126498#comment-15126498
 ] 

Yuki Morishita commented on CASSANDRA-10406:


Sorry, it took me a while to get back to you. I uploaded your patch against 
trunk [here|https://github.com/yukim/cassandra/tree/10406-trunk].
(I modified it a bit to retain the old JMX API and not break anything outside of 
Cassandra.)

With the patch you can give only tokens without a keyspace, but this is the 
same as rebuilding all keyspaces with all local ranges, not the ranges given.
We should either add an argument check that disallows '-ts' without specifying a 
keyspace, or allow '-ts' without a keyspace and rebuild all keyspaces for the given 
ranges.
The check needs to be done in both nodetool and StorageService.
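
A minimal sketch of the first option (rejecting '-ts' when no keyspace is given); the class, method, and parameter names are illustrative only, not the actual nodetool or StorageService API:

{code}
// Illustrative argument check only; names are hypothetical and do not reflect
// the real nodetool Rebuild command or StorageService signatures.
public final class RebuildArgsCheck
{
    static void validate(String keyspace, String tokenRanges)
    {
        // Refuse explicit token ranges unless a keyspace is also given, so "-ts"
        // can never silently fall back to rebuilding all keyspaces over all local ranges.
        if (tokenRanges != null && keyspace == null)
            throw new IllegalArgumentException("Cannot specify token ranges (-ts) without a keyspace");
    }

    public static void main(String[] args)
    {
        validate("my_keyspace", "(0,100],(200,300]"); // accepted
        try
        {
            validate(null, "(0,100]");                // rejected
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}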

> Nodetool supports to rebuild from specific ranges.
> --
>
> Key: CASSANDRA-10406
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10406
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-10406.patch, rebuildranges-2.1.patch
>
>
> Add the 'nodetool rebuildrange' command, so that if `nodetool rebuild` 
> failed, we do not need to rebuild all the ranges, and can just rebuild those 
> failed ones.
> Should be easily ported to all versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9472) Reintroduce off heap memtables

2016-02-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9472:
--
Fix Version/s: (was: 3.x)
   3.4

> Reintroduce off heap memtables
> --
>
> Key: CASSANDRA-9472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9472
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.4
>
>
> CASSANDRA-8099 removes off heap memtables. We should reintroduce them ASAP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11101) Yammer metrics upgrade required

2016-02-01 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-11101.

Resolution: Duplicate

We have already upgraded to use metrics 3.1.0 in Cassandra 2.2.

> Yammer metrics upgrade required
> ---
>
> Key: CASSANDRA-11101
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11101
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>
> I see large heap usage from the yammer metrics library when profiling Cassandra 
> memory allocations. There seems to have been discussion on this in the past as well: 
> https://groups.google.com/forum/#!topic/metrics-user/aQAhqOqMwh8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread Jaroslav Kamenik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126476#comment-15126476
 ] 

Jaroslav Kamenik commented on CASSANDRA-11102:
--

Another case:

CREATE TABLE bbb (
r int,
c int,
PRIMARY KEY (r, c));

insert into bbb (r, c) values (1, 2);

... pause

delete from bbb where r=1 and c=2;
insert into bbb (r, c) values (2, 3);

... pause

delete from bbb where r=2 and c=3;
insert into bbb (r, c) values (1, 2);

... pause

delete from bbb where r=1 and c=2;
insert into bbb (r, c) values (2, 3);

... pause

delete from bbb where r=2 and c=3;
insert into bbb (r, c) values (1, 2);

select * from bbb;

 r | c
---+---
 1 | 2

alter table bbb with GC_GRACE_SECONDS = 1;
restart
compact

table is empty...

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
>
> We have experienced data loses in some tables during few weeks since update 
> to cassandra 3.0. I thing I successfully found test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> aaa files dissapeares during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11048) JSON queries are not thread safe

2016-02-01 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126527#comment-15126527
 ] 

Henry Manasseh commented on CASSANDRA-11048:


I agree, thanks. I am new to the code; the patch from Ivan points me to 
the places to look at.
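
For reference, the gist of the fix is to ask Jackson for its per-thread encoder instead of sharing a single instance; a rough sketch (JsonDemo is an illustrative class, not the actual org.apache.cassandra.cql3.Json code):

{code}
import org.codehaus.jackson.io.JsonStringEncoder;

// Illustrative only: contrasts the unsafe shared-encoder pattern with the
// thread-safe per-call lookup.
public final class JsonDemo
{
    // Unsafe: JsonStringEncoder keeps internal buffers, so sharing one instance
    // across threads can corrupt concurrently produced JSON.
    private static final JsonStringEncoder SHARED = new JsonStringEncoder();

    static String quoteUnsafe(String value)
    {
        return new String(SHARED.quoteAsString(value));
    }

    // Safe: getInstance() hands back a per-thread encoder, so each thread
    // escapes into its own buffer.
    static String quoteSafe(String value)
    {
        return new String(JsonStringEncoder.getInstance().quoteAsString(value));
    }

    public static void main(String[] args)
    {
        System.out.println("\"" + quoteSafe("line1\n\"quoted\"") + "\"");
    }
}
{code}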

> JSON queries are not thread safe
> 
>
> Key: CASSANDRA-11048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11048
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sergio Bossa
>Priority: Critical
>  Labels: easyfix, newbie, patch
> Attachments: 
> 0001-Fix-thread-unsafe-usage-of-JsonStringEncoder-see-CAS.patch
>
>
> {{org.apache.cassandra.cql3.Json}} uses a shared instance of 
> {{JsonStringEncoder}} which is not thread safe (see 1), while 
> {{JsonStringEncoder#getInstance()}} should be used (see 2).
> As a consequence, concurrent {{select JSON}} queries often produce wrong 
> (sometimes unreadable) results.
> 1. 
> http://grepcode.com/file/repo1.maven.org/maven2/org.codehaus.jackson/jackson-core-asl/1.9.2/org/codehaus/jackson/io/JsonStringEncoder.java
> 2. 
> http://grepcode.com/file/repo1.maven.org/maven2/org.codehaus.jackson/jackson-core-asl/1.9.2/org/codehaus/jackson/io/JsonStringEncoder.java#JsonStringEncoder.getInstance%28%29



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10526) Add dtest for CASSANDRA-10406

2016-02-01 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126504#comment-15126504
 ] 

Yuki Morishita commented on CASSANDRA-10526:


Are you running OS X?
If so, you need to add aliases to the loopback interface manually, as described in the ccm 
README (https://github.com/pcmanus/ccm/#longer-version).

> Add dtest for CASSANDRA-10406
> -
>
> Key: CASSANDRA-10526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10526
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>Assignee: Dikang Gu
>Priority: Minor
> Fix For: 2.1.x
>
>
> CASSANDRA-10406 adds new function to nodetool rebuild, so it needs to be 
> tested with cassandra-dtest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11102:
---
Fix Version/s: 3.3

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data loses in some tables during few weeks since update 
> to cassandra 3.0. I thing I successfully found test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> aaa files dissapeares during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127778#comment-15127778
 ] 

Stefania edited comment on CASSANDRA-11053 at 2/2/16 6:51 AM:
--

I've repeated the test in the exact same conditions as described above with 
{{cProfile}} profiling all processes. I am attaching the full profile results 
(_worker_profiles.txt_ and _parent_profile.txt_). 

The total test time was approx 15 minutes (900 seconds), of which 15 seconds 
were an artificial sleep in the parent to allow workers to dump their profile 
results.

It is clear that with these large datasets we can no longer afford to read all 
the data in the parent process and dish out rows to the workers, as has been the 
approach so far. In fact, we spend over 600 seconds in {{read_rows}}. We also 
spend significant time in the worker processes receiving data (30 seconds). 
Distributing file names to workers and letting them do all the work is pretty 
easy to do and would solve these two issues. However, it comes with some consequences:

* We would end up with one process per file unless we somehow split large files 
but splitting large files would take time and users can prepare their data 
themselves. Further, COPY TO can now export to multiple files. Therefore I 
think we should keep things simple and adapt our bulk tests to export to 
multiple files.
* Either we change the meaning of the *max ingest rate* and make it per worker 
process, or we would need to use a global lock which could become a bottleneck. 
I would prefer changing the meaning of max ingest rate as users can always 
specify a rate that is equal to {{max_rate / num_processes}} if they really 
need to.
* To keep things simple, retries would be best handled by worker processes and 
therefore if one process fails then the import fails at least partially; I 
think we can live with this. 

In terms of the worker processes, there is room for improvement there too but 
it is not as straightforward. One interesting thing to do would be to use a 
cythonized driver version but this would not work out of the box due to the 
formatting hooks we inject in the driver. We spend a lot of time batching 
records, getting the replicas, binding parameters and hashing (_murmur3).

WDYT [~pauloricardomg] and [~thobbs]?


was (Author: stefania):
I've repeated the test in the exact some conditions as described above with 
{{cProfile}} profiling all processes. I am attaching full profile results 
(_worker_profiles.txt_ and _parent_profile.txt_). 

The total test time was approx 15 minutes (900 seconds), of which 15 seconds 
were an artificial sleep in the parent to allow workers to dump their profile 
results.

It is clear that with these large datasets we can no longer afford to read all 
data in the parent and dish out rows as it has been the approach so far. We 
spend in fact over 600 seconds in {{read_rows}}. We also spend significant time 
in the worker processes receiving data (30 seconds). Distributing file names to 
workers and letting them do all the work is pretty easy to do and would solve 
these two issues. However it comes with some consequences:

* We would end up with one process per file unless we somehow split large files 
but splitting large files would take time and users can prepare their data 
themselves. Further, COPY TO can now export to multiple files. Therefore I 
think we should keep things simple and adapt our bulk tests to export to 
multiple files.
* Either we change the meaning of the *max ingest rate* and make it per worker 
process, or we would need to use a global lock which could become a bottleneck. 
I would prefer changing the meaning of max ingest rate as users can always 
specify a rate that is equal to {{max_rate / num_processes}} if they really 
need to.
* To keep things simple, retries would be best handled by worker processes and 
therefore if one process fails then the import fails at least partially; I 
think we can live with this. 

In terms of the worker processes, there is room for improvement there too but 
it is not as straightforward. One interesting thing to do would be to use a 
cythonized driver version but this would not work out of the box due to the 
formatting hooks we inject in the driver. We spend a lot of time batching 
records, getting the replicas, binding parameters and hashing (_murmur3).

WDYK [~pauloricardomg] and [~thobbs]?

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 

[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-01 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11053:
-
Attachment: parent_profile.txt
worker_profiles.txt

I've repeated the test in the exact same conditions as described above with 
{{cProfile}} profiling all processes. I am attaching the full profile results 
(_worker_profiles.txt_ and _parent_profile.txt_). 

The total test time was approx 15 minutes (900 seconds), of which 15 seconds 
were an artificial sleep in the parent to allow workers to dump their profile 
results.

It is clear that with these large datasets we can no longer afford to read all 
the data in the parent process and dish out rows to the workers, as has been the 
approach so far. In fact, we spend over 600 seconds in {{read_rows}}. We also 
spend significant time in the worker processes receiving data (30 seconds). 
Distributing file names to workers and letting them do all the work is pretty 
easy to do and would solve these two issues. However, it comes with some consequences:

* We would end up with one process per file unless we somehow split large files; 
splitting would take time, and users can prepare their data themselves. Further, 
COPY TO can now export to multiple files, so I think we should keep things simple 
and adapt our bulk tests to export to multiple files.
* Either we change the meaning of the *max ingest rate* and make it per worker 
process, or we would need a global lock, which could become a bottleneck. I would 
prefer changing the meaning of the max ingest rate, since users can always specify 
a rate equal to {{max_rate / num_processes}} if they really need the old behaviour 
(a rough sketch of both ideas follows after this list).
* To keep things simple, retries would be best handled by the worker processes; 
therefore, if one process fails, the import fails at least partially. I think we 
can live with this.
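
To make the first two points concrete, here is a rough sketch of what distributing 
whole files to worker processes with a per-worker rate of {{max_rate / num_processes}} 
could look like; {{insert_row}}, the file glob and the constants are hypothetical 
placeholders, and none of this is the actual copyutil code:

{code}
import glob
import time
from multiprocessing import Pool

NUM_PROCESSES = 4
MAX_RATE = 100000  # user-specified ingest rate in rows/sec, for the whole import


def insert_row(line):
    pass  # placeholder: parse, batch and send the row to the cluster


def import_file(args):
    """Hypothetical worker entry point: import one CSV file, throttled to the
    worker's share of the global ingest rate."""
    file_name, rate_per_worker = args
    imported = 0
    start = time.time()
    with open(file_name) as f:
        for line in f:
            insert_row(line)
            imported += 1
            elapsed = time.time() - start
            # Sleep just long enough to stay at or below rate_per_worker.
            if elapsed > 0 and imported / elapsed > rate_per_worker:
                time.sleep(imported / rate_per_worker - elapsed)
    return file_name, imported


if __name__ == '__main__':
    files = glob.glob('data/*.csv')          # one worker process per file
    rate = MAX_RATE / float(NUM_PROCESSES)   # per-worker share of the limit
    pool = Pool(NUM_PROCESSES)
    try:
        work = [(f, rate) for f in files]
        for name, count in pool.imap_unordered(import_file, work):
            print('%s: %d rows' % (name, count))
    finally:
        pool.close()
        pool.join()
{code}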

In terms of the worker processes there is also room for improvement, but it is 
not as straightforward. One interesting option would be to use a cythonized 
driver version, but this would not work out of the box due to the formatting 
hooks we inject into the driver. We spend a lot of time batching records, 
getting the replicas, binding parameters and hashing ({{_murmur3}}).

WDYT, [~pauloricardomg] and [~thobbs]?

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY FROM on a large dataset (20 GB divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it advances very slowly until almost the 
> end of the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to that of smaller tests run 
> locally against a smaller cluster (approx 35,000 rows per second). As a 
> comparison, cassandra-stress manages 50,000 rows per second under the same 
> set-up, making it roughly 1.5 times faster. 
> See the attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-01 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11053:
-
Attachment: (was: copy_from_large_benchmark.txt)

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY FROM on a large dataset (20 GB divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it advances very slowly until almost the 
> end of the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to that of smaller tests run 
> locally against a smaller cluster (approx 35,000 rows per second). As a 
> comparison, cassandra-stress manages 50,000 rows per second under the same 
> set-up, making it roughly 1.5 times faster. 
> See the attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-01 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11053:
-
Attachment: copy_from_large_benchmark.txt

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY FROM on a large dataset (20 GB divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it advances very slowly until almost the 
> end of the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to that of smaller tests run 
> locally against a smaller cluster (approx 35,000 rows per second). As a 
> comparison, cassandra-stress manages 50,000 rows per second under the same 
> set-up, making it roughly 1.5 times faster. 
> See the attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10368) Support Restricting non-PK Cols in Materialized View Select Statements

2016-02-01 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126775#comment-15126775
 ] 

Tyler Hobbs commented on CASSANDRA-10368:
-

Yes, but it's not high priority at the moment.

> Support Restricting non-PK Cols in Materialized View Select Statements
> --
>
> Key: CASSANDRA-10368
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10368
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> CASSANDRA-9664 allows materialized views to restrict primary key columns in 
> the select statement.  Due to CASSANDRA-10261, the patch did not include 
> support for restricting non-PK columns.  Now that the timestamp issue has 
> been resolved, we can add support for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11073) Cannot contact other nodes on Windows 7 ccm

2016-02-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126304#comment-15126304
 ] 

Paulo Motta commented on CASSANDRA-11073:
-

bq. Do you by any chance also have TrendMicro installed?

Spot on! :) I never knew this was installed. I had disabled all anti-virus and 
firewall software in the Windows admin console, but TrendMicro was silently 
running in the background. Thank you so much!

The problem was fixed after disabling the TrendMicro firewall; closing as Not A Problem.

> Cannot contact other nodes on Windows 7 ccm
> ---
>
> Key: CASSANDRA-11073
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11073
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: windows 7
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: windows
>
> Before CASSANDRA-9309 was fixed, the {{OutboundTcpConnectionPool}} did not 
> bind the client socket to a specific ip/port, so the Windows kernel always 
> picked {{127.0.0.1:random_port}} as the client socket address for ccm nodes, 
> regardless of the {{listen_address}} value.
> After the fix for CASSANDRA-9309, the {{OutboundTcpConnectionPool}} binds 
> outgoing client sockets to {{listen_address:random_port}}, so any ccm cluster 
> with more than one node will bind client sockets to {{127.0.0.n}}, where n is 
> the node id.
> However, the nodes cannot contact each other because connections remain in 
> the {{SYN_SENT}} state on Windows 7, as shown by netstat:
> {noformat}
>   TCP    127.0.0.2:50908    127.0.0.1:7000    SYN_SENT
> {noformat}
> This bug is preventing the execution of dtests on Windows 7, and it was also 
> experienced by [~Stefania].
> I suspect it's a configuration/environment problem, but firewall and group 
> policies are disabled. The funny thing is that it does not happen on cassci, 
> but afaik there are no Windows 7 nodes there.
> Commenting out [this 
> line|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L139]
>  fixes the issue, but it's definitely not a solution.
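
For illustration only, the binding behaviour described above can be reproduced with a 
few lines of Python (the real code is Java, in {{OutboundTcpConnectionPool}}); the 
addresses and port below are simply the ones from the netstat output, and this is not 
meant as a fix:

{code}
import socket

# Emulate what the outbound connection does after CASSANDRA-9309: bind the
# *client* side of the socket to the node's listen_address (ephemeral port)
# before connecting to the peer's storage port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.2', 0))            # listen_address of the sending ccm node
sock.settimeout(5)
try:
    sock.connect(('127.0.0.1', 7000))  # storage port of the peer node
    print('connected from %s:%d' % sock.getsockname())
except socket.error as exc:
    # A firewall filtering traffic between loopback aliases leaves the
    # connection stuck in SYN_SENT, which shows up here as a timeout.
    print('connect failed: %s' % exc)
finally:
    sock.close()
{code}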



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-02-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126326#comment-15126326
 ] 

Paulo Motta commented on CASSANDRA-11030:
-

How was the data inserted? (This is not shown in the excerpt you posted.) If the 
data was inserted with the wrong encoding, it may not be read back with the 
correct encoding.

I have a feeling the system preferred encoding (in your case cp1252) might be 
the culprit here, since it differs from both the terminal encoding (cp437) and 
cp65001. I wonder if we should use utf-8 as the default encoding when one is not 
specified. Can you try replacing:

{noformat}
if encoding is None:
    encoding = locale.getpreferredencoding()
if encoding is None:
    encoding = 'utf-8'
{noformat}
with
{noformat}
if encoding is None:
    encoding = 'utf-8'
{noformat}
and check whether that changes anything?
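
If it helps with debugging, the encodings in play can be printed from the same 
terminal with a throwaway script like the one below (purely illustrative, not part 
of cqlsh); it also shows how a code page without 'ã' produces the {{n?o}} output 
from the original report:

{code}
# -*- coding: utf-8 -*-
import locale
import sys

# The three encodings that can disagree on Windows: the ANSI code page from
# the locale (e.g. cp1252), the console code page behind stdout (e.g. cp437
# or cp65001), and the utf-8 fallback discussed above.
print('preferred encoding: %s' % locale.getpreferredencoding())
print('stdout encoding:    %s' % sys.stdout.encoding)

# Encoding u'não' with a code page that lacks 'ã' is what turns it into 'n?o'.
print(u'não'.encode('cp437', 'replace'))  # -> n?o
print(u'não'.encode('utf-8'))             # -> n\xc3\xa3o
{code}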

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11041) Make it clear what timestamp_resolution is used for with DTCS

2016-02-01 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11041:

Fix Version/s: 3.x
   3.0.x
   2.2.x
   2.1.x

> Make it clear what timestamp_resolution is used for with DTCS
> -
>
> Key: CASSANDRA-11041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11041
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>  Labels: docs-impacting
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> We have had a few cases lately where users misunderstand what 
> timestamp_resolution does; we should:
> * make the option not autocomplete in cqlsh
> * update the documentation
> * log a warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11102) Data lost during compaction

2016-02-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11102:
-
Assignee: Marcus Eriksson

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
>
> We have experienced data losses in some tables during the few weeks since 
> updating to Cassandra 3.0. I think I have successfully found a test case now. 
> Step one - create a test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run a few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> This creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - restart Cassandra.
> You will see new files written into the C* data folder. I tried sstable-tools to 
> print the table structure; it shows 4 rows, and the data and tombstones are there.
> Step 4 - set gc_grace_seconds to 1 to force tombstone removal during compaction:
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - compact the tables:
> ./nodetool compact
> The aaa files disappear during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}
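
For convenience, the CQL part of the repro can be scripted with the DataStax Python 
driver along these lines (assuming a node at {{127.0.0.1}} and an existing keyspace 
named {{ks}}; the restart, the gc_grace_seconds change and {{nodetool compact}} still 
have to be done by hand):

{code}
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('ks')  # assumed keyspace name; adjust as needed

session.execute("""
    CREATE TABLE IF NOT EXISTS aaa (
        r int,
        c1 int,
        c2 ascii,
        PRIMARY KEY (r, c1, c2))
""")

# Interleave inserts and deletes of different clustering keys within the same
# partitions, exactly as in the steps above.
for r in range(1, 5):
    session.execute("INSERT INTO aaa (r, c1, c2) VALUES (%s, %s, 'A')", (r, r + 1))
    session.execute("DELETE FROM aaa WHERE r = %s AND c1 = %s AND c2 = 'B'", (r, r + 1))

# Expect 4 rows before the restart / compaction steps.
for row in session.execute("SELECT count(*) FROM aaa"):
    print(row)

cluster.shutdown()
{code}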



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11073) Cannot contact other nodes on Windows 7 ccm

2016-02-01 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-11073.
-
Resolution: Not A Problem

> Cannot contact other nodes on Windows 7 ccm
> ---
>
> Key: CASSANDRA-11073
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11073
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: windows 7
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: windows
>
> Before CASSANDRA-9309 was fixed, the {{OutboundTcpConnectionPool}} did not 
> bind the client socket to a specific ip/port, so the Windows kernel always 
> picked {{127.0.0.1:random_port}} as the client socket address for ccm nodes, 
> regardless of the {{listen_address}} value.
> After the fix for CASSANDRA-9309, the {{OutboundTcpConnectionPool}} binds 
> outgoing client sockets to {{listen_address:random_port}}, so any ccm cluster 
> with more than one node will bind client sockets to {{127.0.0.n}}, where n is 
> the node id.
> However, the nodes cannot contact each other because connections remain in 
> the {{SYN_SENT}} state on Windows 7, as shown by netstat:
> {noformat}
>   TCP    127.0.0.2:50908    127.0.0.1:7000    SYN_SENT
> {noformat}
> This bug is preventing the execution of dtests on Windows 7, and it was also 
> experienced by [~Stefania].
> I suspect it's a configuration/environment problem, but firewall and group 
> policies are disabled. The funny thing is that it does not happen on cassci, 
> but afaik there are no Windows 7 nodes there.
> Commenting out [this 
> line|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L139]
>  fixes the issue, but it's definitely not a solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally

2016-02-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126316#comment-15126316
 ] 

Paulo Motta commented on CASSANDRA-10938:
-

Still couldn't reproduce it, so I think it was a temporary environmental problem 
on my machine. The new version looks good; I also tested it on Linux and Windows.

Marking as ready to commit. Great investigative work, [~Stefania]. Thanks!

Commit information is in the comment above /\

> test_bulk_round_trip_blogposts is failing occasionally
> --
>
> Key: CASSANDRA-10938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10938
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, 
> node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr
>
>
> We get timeouts occasionally that cause the number of records to be incorrect:
> http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11087) Queries on compact storage tables in mixed version clusters can return incorrect results

2016-02-01 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126390#comment-15126390
 ] 

Sylvain Lebresne commented on CASSANDRA-11087:
--

Mostly LGTM, but a few remarks (the first one matters, the other two are really 
just nits):
* Pretty sure we only need to add this for compact tables (in fact, compact 
static ones, since that's the only kind that can have statics), so it should be 
limited to those. In fact, for non-compact tables, {{metadata.compactValueColumn()}} 
returns {{null}}, and I suspect that might be a problem.
* Style-wise, I'd prefer brackets for the {{else}} part of the modified {{if}} 
condition, since the {{then}} part has some.
* It might be worth a small comment as to why we need this (if only to link back 
to this issue).

> Queries on compact storage tables in mixed version clusters can return 
> incorrect results
> 
>
> Key: CASSANDRA-11087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11087
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.0.x, 3.x
>
>
> Whilst writing a dtest for CASSANDRA-11045, it became apparent that queries 
> on compact storage tables are broken during the 3.0 upgrade (and this has 
> probably been the case since day 1). 
> tl;dr In a cluster with a mix of < 3.0 and 3.0 nodes, reads on COMPACT 
> STORAGE tables may not include all results. 
> To repro: tables are created and data is written before any nodes are upgraded 
> to 3.0+; some nodes are then upgraded, putting the cluster into a mixed state.
> Now, when a query is run where the coordinator is a < 3.0 node, any 3.0+ 
> replica which has not yet run upgradesstables always returns 0 results.  Once 
> upgradesstables is run, the replica returns the correct results. Likewise, if 
> the data is inserted after the node is upgraded, the results are correct. If 
> the 3.0 node acts as the coordinator, the results are also correct and so 
> once all nodes are upgraded, the problem goes away.
> The behaviour can be seen for both single partition and range requests as 
> [this 
> dtest|https://github.com/beobal/cassandra-dtest/commit/91bb9ffd8fb761ad3454187d2f05f05a6e7af930]
>  demonstrates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11078) upgrade_supercolumns_test dtests failing on 2.1

2016-02-01 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126689#comment-15126689
 ] 

Ryan McGuire commented on CASSANDRA-11078:
--

Here are a few runs from CassCI: 

 - 
http://cassci.datastax.com/job/cassandra-2.1_dtest/414/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test/

 - 
http://cassci.datastax.com/job/cassandra-2.1_dtest/414/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_index_creation_test/

> upgrade_supercolumns_test dtests failing on 2.1
> ---
>
> Key: CASSANDRA-11078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11078
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: DS Test Eng
> Fix For: 3.x
>
>
> The tests in this module fail 
> [here|https://github.com/riptano/cassandra-dtest/blob/18647a3e167f127795e2fe63d73305dddf103716/upgrade_supercolumns_test.py#L213]
>  and 
> [here|https://github.com/riptano/cassandra-dtest/blob/529cd71ad5ac4c2f28ccb5560ddc068f604c7b28/upgrade_supercolumns_test.py#L106]
>  when a call to {{start}} with {{wait_other_notice=True}} times out. It 
> happens consistently on the upgrade path from cassandra-2.1 to 2.2. I haven't 
> seen clear evidence as to whether this is a test failure or a C* bug, so I'll 
> mark it as a test error for the TE team to debug.
> I don't have a CassCI link for this failure - the changes to the tests 
> haven't been merged yet.
> EDIT: changing the title of this ticket since there are multiple similar 
> failures. The failing tests are
> {code}
> upgrade_supercolumns_test.py:TestSCUpgrade.upgrade_with_counters_test
> upgrade_supercolumns_test.py:TestSCUpgrade.upgrade_with_index_creation_test
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention

2016-02-01 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15126617#comment-15126617
 ] 

Tyler Hobbs commented on CASSANDRA-10779:
-

There's still one more call path that needs to be updated:  
{{PaxosState.commit()}} calls {{Keyspace.apply()}} and doesn't block on the 
future.  Other than that, the latest patch looks good to me.

> Mutations do not block for completion under view lock contention
> 
>
> Key: CASSANDRA-10779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10779
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60
>Reporter: Will Zhang
>Assignee: Tyler Hobbs
> Fix For: 3.0.x, 3.x
>
>
> Hi guys,
> I encountered the following warning message when I was testing an upgrade 
> from v2.2.2 to v3.0.0. 
> It looks like a write timeout, but surfaced as an uncaught exception. Could this 
> be an easy fix?
> The relevant log file section is below. Thank you!
> {code}
>   WARN  [SharedPool-Worker-64] 2015-11-26 14:04:24,678 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-64,10,main]: {}
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>   INFO  [IndexSummaryManager:1] 2015-11-26 14:41:10,527 
> IndexSummaryManager.java:257 - Redistributing index summaries
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)