[1/2] cassandra git commit: ninja-fix issue number

2016-01-04 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk f54eab71d -> 691627bb2


ninja-fix issue number


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5416e38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5416e38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5416e38

Branch: refs/heads/trunk
Commit: f5416e38bc66de36be207891f2a7411f847a23dd
Parents: e4eabd9
Author: Robert Stupp 
Authored: Mon Jan 4 20:10:29 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 20:10:29 2016 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5416e38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 03b9ecb..de5c3ed 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -7,7 +7,7 @@
  * Reduce heap spent when receiving many SSTables (CASSANDRA-10797)
 * Add back support for 3rd party auth providers to bulk loader (CASSANDRA-10873)
 * Eliminate the dependency on jgrapht for UDT resolution (CASSANDRA-10653)
- * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes (CASSANDRA-1837)
+ * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes (CASSANDRA-10837)
 * Fix sstableloader not working with upper case keyspace name (CASSANDRA-10806)
 Merged from 2.2:
  * jemalloc detection fails due to quoting issues in regexv (CASSANDRA-10946)



[jira] [Commented] (CASSANDRA-10902) Skip saved cache directory when checking SSTables at startup

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081717#comment-15081717
 ] 

Carl Yeksigian commented on CASSANDRA-10902:


Those changes look good to me.

> Skip saved cache directory when checking SSTables at startup
> 
>
> Key: CASSANDRA-10902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10902
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> The SSTable StartupCheck looks for all files which end with "*.db" and 
> compares their versions. This causes problems if {{saved_cache_directory}} is a 
> subdirectory of one of the {{data_file_directories}}. We should make sure that 
> we are not checking any subdirectory where we might be writing *.db files.
> This is the cause of not being able to restart in CASSANDRA-10821.
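A minimal sketch of the skip described above: filter the startup scan so *.db files under the saved caches directory are ignored, even when that directory nests inside a data directory. The names here (SAVED_CACHES_DIR, shouldCheck) are illustrative, not Cassandra's actual StartupCheck API.
{code}
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SSTableVersionScan
{
    // Assumed location of saved_caches_directory; illustrative only.
    static final Path SAVED_CACHES_DIR = Paths.get("/var/lib/cassandra/saved_caches").toAbsolutePath();

    // True when a candidate file should be version-checked as an sstable:
    // it ends in .db and does not live under the saved caches directory.
    static boolean shouldCheck(File candidate)
    {
        Path p = candidate.toPath().toAbsolutePath();
        return candidate.getName().endsWith(".db") && !p.startsWith(SAVED_CACHES_DIR);
    }
}
{code}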





[jira] [Commented] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081772#comment-15081772
 ] 

Carl Yeksigian commented on CASSANDRA-10960:


We can't delete the backups because we don't know how far along the backup 
process is. Also, since compactions don't just combine the new sstables in the 
backups folder with each other, if we included the newly compacted SSTables we 
would be re-including data that has already been backed up, so they would no 
longer be incremental backup files.

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs, the old flushed SSTables in the backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back.
> Is this behavior by design?





[jira] [Commented] (CASSANDRA-10805) Additional Compaction Logging

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081762#comment-15081762
 ] 

Carl Yeksigian commented on CASSANDRA-10805:


* I was initially using logback, but changed because I was getting incomplete 
files (since I didn't know when a new file was created). Looking at the 
[logback docs|http://logback.qos.ch/manual/appenders.html#RollingFileAppender], 
it seems I probably just need to implement these two classes to make sure the 
logs are complete (see the sketch after this list).
* That will work well; I'll make sure the logger carries the name of the table 
it is assigned to, so it captures just the output from one table.
* Good point; this could also simplify some of the multiple-line events.
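A minimal sketch of the logback route from the first bullet, assuming a custom {{TriggeringPolicy}} is one of the two classes to implement; the class and method names besides logback's own are illustrative:
{code}
import java.io.File;
import ch.qos.logback.core.rolling.TriggeringPolicyBase;

// Rolls the compaction log only when the owner says the current file is
// complete, so readers never pick up a partially written file.
public class CompactionLogTriggeringPolicy<E> extends TriggeringPolicyBase<E>
{
    private volatile boolean rollRequested;

    // Called by the compaction logger once the current file is complete.
    public void requestRoll()
    {
        rollRequested = true;
    }

    @Override
    public boolean isTriggeringEvent(File activeFile, E event)
    {
        if (!rollRequested)
            return false;
        rollRequested = false;
        return true;
    }
}
{code}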

> Additional Compaction Logging
> -
>
> Key: CASSANDRA-10805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10805
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Observability
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>Priority: Minor
>
> Currently, viewing the results of past compactions requires parsing the log 
> and looking at the compaction history system table, which doesn't have 
> information about, for example, flushed sstables not previously compacted.
> This is a proposal to extend the information captured for compaction. 
> Initially, this would be done through a JMX call, but if it proves to be 
> useful and not much overhead, it might be a feature that could be enabled for 
> the compaction strategy all the time.
> Initial log information would include:
> - The compaction strategy type controlling each column family
> - The set of sstables included in each compaction strategy
> - Information about flushes and compactions, including times and all involved 
> sstables
> - Information about sstables, including generation, size, and tokens
> - Any additional metadata the strategy wishes to add to a compaction or an 
> sstable, like the level of an sstable or the type of compaction being 
> performed





[jira] [Updated] (CASSANDRA-7739) cassandra-stress: cannot handle "value-less" tables

2016-01-04 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7739:

Reviewer: Robert Stupp

> cassandra-stress: cannot handle "value-less" tables
> ---
>
> Key: CASSANDRA-7739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7739
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>  Labels: lhf, stress
> Fix For: 2.1.x
>
> Attachments: cassandra-2.1.12-7739.txt
>
>
> Given a table that only has primary-key columns, cassandra-stress fails with 
> the exception below.
> The bug is that 
> https://github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/StressProfile.java#L281
>  always adds the {{SET}} clause even if there are no "value columns" to update.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: 
> InvalidRequestException(why:line 1:24 no viable alternative at input 'WHERE')
>   at 
> org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:352)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:66)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser$1.get(SettingsCommandUser.java:62)
>   at 
> org.apache.cassandra.stress.operations.SampledOpDistributionFactory$1.get(SampledOpDistributionFactory.java:76)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.(StressAction.java:248)
>   at org.apache.cassandra.stress.StressAction.run(StressAction.java:188)
>   at org.apache.cassandra.stress.StressAction.warmup(StressAction.java:92)
>   at org.apache.cassandra.stress.StressAction.run(StressAction.java:62)
>   at org.apache.cassandra.stress.Stress.main(Stress.java:109)
> Caused by: InvalidRequestException(why:line 1:24 no viable alternative at 
> input 'WHERE')
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52282)
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result$prepare_cql3_query_resultStandardScheme.read(Cassandra.java:52259)
>   at 
> org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:52198)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1797)
>   at 
> org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1783)
>   at 
> org.apache.cassandra.stress.util.SimpleThriftClient.prepare_cql3_query(SimpleThriftClient.java:79)
>   at 
> org.apache.cassandra.stress.StressProfile.getInsert(StressProfile.java:348)
>   ... 8 more
> {noformat}
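A hypothetical sketch of a guard for the bug described above: emit a key-only INSERT instead of an {{UPDATE ... SET}} when the table has no non-primary-key columns. The builder below is illustrative and not StressProfile's actual code:
{code}
import java.util.List;
import java.util.stream.Collectors;

final class WriteStatementBuilder
{
    static String buildWrite(String table, List<String> keyColumns, List<String> valueColumns)
    {
        if (valueColumns.isEmpty())
        {
            // "Value-less" table: there is nothing to SET, so fall back to INSERT.
            String cols = String.join(", ", keyColumns);
            String binds = keyColumns.stream().map(c -> "?").collect(Collectors.joining(", "));
            return "INSERT INTO " + table + " (" + cols + ") VALUES (" + binds + ")";
        }
        String set = valueColumns.stream().map(c -> c + " = ?").collect(Collectors.joining(", "));
        String where = keyColumns.stream().map(c -> c + " = ?").collect(Collectors.joining(" AND "));
        return "UPDATE " + table + " SET " + set + " WHERE " + where;
    }
}
{code}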





[jira] [Created] (CASSANDRA-10965) Shadowable tombstones can continue to shadow view results when timestamps match

2016-01-04 Thread Carl Yeksigian (JIRA)
Carl Yeksigian created CASSANDRA-10965:
--

 Summary: Shadowable tombstones can continue to shadow view results 
when timestamps match
 Key: CASSANDRA-10965
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10965
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
 Fix For: 3.0.x
 Attachments: shadow-ts.cql

I've attached a script which reproduces the issue. The first time we insert 
with {{TIMESTAMP 2}}, we are inserting a new row which has the same timestamp 
as the previous shadowable tombstone, and it continues to be shadowed by that 
tombstone because we shadow values with the same timestamp.





[Cassandra Wiki] Update of "ContributorsGroup" by BrandonWilliams

2016-01-04 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ContributorsGroup" page has been changed by BrandonWilliams:
https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=55&rev2=56

   * AaronMorton
   * achilleasa
   * AdamHolmberg
-  * al_shopov
   * AlekseyYeschenko
   * Alexis Wilke
   * AlicePorfirio



[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081690#comment-15081690
 ] 

Carl Yeksigian commented on CASSANDRA-10910:


I pushed [a new dtest 
branch|https://github.com/carlyeks/cassandra-dtest/tree/10910] which includes a 
change to the shadowable tombstone that demonstrates this issue.

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> ----+-----+-------
>  id | key |     1
> (1 rows)
>  key | id | value
> -----+----+-------
>  key | id |     1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> ----+-----+-------
>  id | key |     2
> (1 rows)
>  key | id | value
> -----+----+-------
>  key | id |     2
> (1 rows)
> {code}
> It works as I expect...
> ...but then I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key    | value
> ----+--------+-------
>  id | newKey |     2
> (1 rows)
>  key    | id | value
> --------+----+-------
>  key    | id |     2
>  newKey | id |     2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key    | value
> ----+--------+-------
>  id | newKey |     3
> (1 rows)
>  key    | id | value
> --------+----+-------
>  key    | id |     2
>  newKey | id |     3
> (2 rows)
> {code}
> ...and I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> ----+-----+-------
> (0 rows)
>  key | id | value
> -----+----+-------
>  key | id |     2
> (1 rows)
> {code}
> Is it a bug?





[jira] [Updated] (CASSANDRA-8755) Replace trivial uses of String.replace/replaceAll/split with StringUtils methods

2016-01-04 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8755:

Assignee: Alexander Shopov

> Replace trivial uses of String.replace/replaceAll/split with StringUtils 
> methods
> 
>
> Key: CASSANDRA-8755
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8755
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jaroslav Kamenik
>Assignee: Alexander Shopov
>Priority: Trivial
>  Labels: lhf
> Attachments: 8755.tar.gz, trunk-8755.patch, trunk-8755.txt
>
>
> There are places in the code where these regex-based methods are used with 
> plain, non-regexp strings, so the StringUtils alternatives should be faster.





[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-01-04 Thread snazy
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/691627bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/691627bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/691627bb

Branch: refs/heads/trunk
Commit: 691627bb2336fbcddebf814fb5c675a1039ee370
Parents: f54eab7 f5416e3
Author: Robert Stupp 
Authored: Mon Jan 4 20:10:46 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 20:10:46 2016 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/691627bb/CHANGES.txt
--



cassandra git commit: ninja-fix issue number

2016-01-04 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 e4eabd901 -> f5416e38b


ninja-fix issue number


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5416e38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5416e38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5416e38

Branch: refs/heads/cassandra-3.0
Commit: f5416e38bc66de36be207891f2a7411f847a23dd
Parents: e4eabd9
Author: Robert Stupp 
Authored: Mon Jan 4 20:10:29 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 20:10:29 2016 +0100

--
 CHANGES.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5416e38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 03b9ecb..de5c3ed 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -7,7 +7,7 @@
  * Reduce heap spent when receiving many SSTables (CASSANDRA-10797)
 * Add back support for 3rd party auth providers to bulk loader (CASSANDRA-10873)
 * Eliminate the dependency on jgrapht for UDT resolution (CASSANDRA-10653)
- * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes (CASSANDRA-1837)
+ * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes (CASSANDRA-10837)
 * Fix sstableloader not working with upper case keyspace name (CASSANDRA-10806)
 Merged from 2.2:
  * jemalloc detection fails due to quoting issues in regexv (CASSANDRA-10946)



[jira] [Commented] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2016-01-04 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081708#comment-15081708
 ] 

Chris Burroughs commented on CASSANDRA-10392:
-

One minor suggestion:  It would be nice if multiple tracers could be enabled at 
the same time.  For example, to use 'normal' tracing to debug a problem *with* 
a zipkin cluster, or to enable both zipkin and SOME_OTHER_TRACER for comparison.
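A hypothetical fan-out along those lines, using an illustrative {{Tracer}} interface rather than Cassandra's actual tracing API:
{code}
import java.util.Arrays;
import java.util.List;

interface Tracer
{
    void trace(String sessionId, String activity);
}

// Forwards every tracing event to all configured back-ends, e.g. the
// built-in system_traces tracer plus a zipkin tracer.
final class CompositeTracer implements Tracer
{
    private final List<Tracer> delegates;

    CompositeTracer(Tracer... delegates)
    {
        this.delegates = Arrays.asList(delegates);
    }

    @Override
    public void trace(String sessionId, String activity)
    {
        for (Tracer delegate : delegates)
            delegate.trace(sessionId, activity);
    }
}
{code}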

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
> Fix For: 3.x
>
> Attachments: 10392-trunk.txt
>
>
> It is possible to use an external tracing solution in Cassandra by 
> abstracting the writing of traces to the system_traces tables out of the 
> tracing package into separate implementation classes, leaving abstract 
> classes in place that define the interface and behaviour of C* tracing.
> Then, via a system property "cassandra.custom_tracing_class", the Tracing 
> class implementation could be swapped out with something third party.
> An example of this is adding Zipkin tracing into Cassandra in the Summit 
> [presentation|http://thelastpickle.com/files/2015-09-24-using-zipkin-for-full-stack-tracing-including-cassandra/presentation/tlp-reveal.js/tlp-cassandra-zipkin.html].
>  Code for the implemented Zipkin plugin can be found at 
> https://github.com/thelastpickle/cassandra-zipkin-tracing/
> In addition, this patch passes the custom payload through into the tracing 
> session, allowing a third-party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.
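A minimal sketch of the loading mechanism described above; only the system property name comes from the ticket, and {{TracingBase}} stands in for whatever abstract class the patch introduces:
{code}
abstract class TracingBase
{
    abstract void beginSession(String sessionId);

    static TracingBase load() throws ReflectiveOperationException
    {
        String customClass = System.getProperty("cassandra.custom_tracing_class");
        if (customClass == null)
            return new DefaultTracing(); // keep writing to system_traces
        return (TracingBase) Class.forName(customClass)
                                  .getDeclaredConstructor()
                                  .newInstance();
    }
}

final class DefaultTracing extends TracingBase
{
    @Override
    void beginSession(String sessionId) { /* persist to system_traces tables */ }
}
{code}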





[jira] [Commented] (CASSANDRA-10829) cleanup + repair generates a lot of logs

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081735#comment-15081735
 ] 

Carl Yeksigian commented on CASSANDRA-10829:


Looks good, but the test failures look like we need to use a 
{{ConcurrentHashSet}} for {{finished}} since we're getting 
{{IllegalStateException}}s.
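For reference, the JDK can provide such a concurrent set via {{ConcurrentHashMap}}; a small sketch with an illustrative surrounding class:
{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class FinishedTracker
{
    // Thread-safe drop-in for a plain HashSet, so concurrent cleanup and
    // repair threads can mutate the set safely.
    private final Set<String> finished = ConcurrentHashMap.newKeySet();

    void markFinished(String task)
    {
        finished.add(task);
    }

    boolean isFinished(String task)
    {
        return finished.contains(task);
    }
}
{code}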

> cleanup + repair generates a lot of logs
> 
>
> Key: CASSANDRA-10829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10829
> Project: Cassandra
>  Issue Type: Bug
> Environment: 5 nodes on Cassandra 2.1.11 (on Debian)
>Reporter: Fabien Rousseau
>Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
>
> One of our nodes generates a lot of Cassandra logs (in the 10 MB/s range) and 
> CPU usage has increased (by a factor of 2-3).
> This was most probably triggered by a "nodetool snapshot" while a cleanup was 
> already running on this node.
> An example of those logs:
> 2015-12-08 09:15:17,794 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1923 Spinning trying to 
> capture released readers [...]
> 2015-12-08 09:15:17,794 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1924 Spinning trying to 
> capture all readers [...]
> 2015-12-08 09:15:17,795 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1923 Spinning trying to 
> capture released readers [...]
> 2015-12-08 09:15:17,795 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1924 Spinning trying to 
> capture all readers [...]
> (I removed the SSTableReader information because it's rather long... I can 
> share it privately if needed.)
> Note that the date has not been changed (only 1ms between logs).
> It should not generate that gigantic amount of logs :)
> This is probably linked to: 
> https://issues.apache.org/jira/browse/CASSANDRA-9637





cassandra git commit: Move static JVM options to jvm.options file

2016-01-04 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 691627bb2 -> 869bdabf4


Move static JVM options to jvm.options file

Patch by pmotta; reviewed by jmckenzie for CASSANDRA-10494


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/869bdabf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/869bdabf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/869bdabf

Branch: refs/heads/trunk
Commit: 869bdabf424df9f95651f4553b5b358919d0e799
Parents: 691627b
Author: Paulo Motta 
Authored: Thu Dec 24 21:13:58 2015 -0200
Committer: Joshua McKenzie 
Committed: Mon Jan 4 14:17:00 2016 -0500

--
 CHANGES.txt|   1 +
 NEWS.txt   |   1 +
 conf/cassandra-env.ps1 |  45 
 conf/cassandra-env.sh  |  45 
 conf/jvm.options   | 127 
 5 files changed, 129 insertions(+), 90 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/869bdabf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cb3630f..9c3a50f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 3.2
  * Add requireAuthorization method to IAuthorizer (CASSANDRA-10852)
+ * Move static JVM options to conf/jvm.options file (CASSANDRA-10494)
  * Fix CassandraVersion to accept x.y version string (CASSANDRA-10931)
  * Add forceUserDefinedCleanup to allow more flexible cleanup (CASSANDRA-10708)
  * (cqlsh) allow setting TTL with COPY (CASSANDRA-9494)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/869bdabf/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 9464637..33fef1f 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -29,6 +29,7 @@ Upgrading
 ---------
    - The compression ratio metrics computation has been modified to be more accurate.
    - Running Cassandra as root is prevented by default.
+   - JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file
 
 
 3.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/869bdabf/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index a38429e..0326199 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -416,10 +416,6 @@ Function SetCassandraEnvironment
 exit
 }
 
-# enable assertions.  disabling this in production will give a modest
-# performance benefit (around 5%).
-$env:JVM_OPTS = "$env:JVM_OPTS -ea"
-
 # Specifies the default port over which Cassandra will be available for
 # JMX connections.
 $JMX_PORT="7199"
@@ -427,50 +423,11 @@ Function SetCassandraEnvironment
 # store in env to check if it's avail in verification
 $env:JMX_PORT=$JMX_PORT
 
-# enable thread priorities, primarily so we can give periodic tasks
-# a lower priority to avoid interfering with client workload
-$env:JVM_OPTS="$env:JVM_OPTS -XX:+UseThreadPriorities"
-# allows lowering thread priority without being root on linux - probably
-# not necessary on Windows but doesn't harm anything.
-# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workar
-$env:JVM_OPTS="$env:JVM_OPTS -XX:ThreadPriorityPolicy=42"
-
-$env:JVM_OPTS="$env:JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
-
-# Per-thread stack size.
-$env:JVM_OPTS="$env:JVM_OPTS -Xss256k"
-
-# Larger interned string table, for gossip's benefit (CASSANDRA-6410)
-$env:JVM_OPTS="$env:JVM_OPTS -XX:StringTableSize=103"
-
-# Make sure all memory is faulted and zeroed on startup.
-# This helps prevent soft faults in containers and makes
-# transparent hugepage allocation more effective.
-#$env:JVM_OPTS="$env:JVM_OPTS -XX:+AlwaysPreTouch"
-
-# Biased locking does not benefit Cassandra.
-$env:JVM_OPTS="$env:JVM_OPTS -XX:-UseBiasedLocking"
-
-# Enable thread-local allocation blocks and allow the JVM to automatically
-# resize them at runtime.
-$env:JVM_OPTS="$env:JVM_OPTS -XX:+UseTLAB -XX:+ResizeTLAB"
-
-# http://www.evanjones.ca/jvm-mmap-pause.html
-$env:JVM_OPTS="$env:JVM_OPTS -XX:+PerfDisableSharedMem"
-
# Configure the following for JEMallocAllocator and if jemalloc is not available in the system
# library path.
# set LD_LIBRARY_PATH=/lib/
# $env:JVM_OPTS="$env:JVM_OPTS -Djava.library.path=/lib/"

-# uncomment to have Cassandra JVM listen for remote debuggers/profilers on port 1414
-# $env:JVM_OPTS="$env:JVM_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1414"
-
-# Prefer binding to IPv4 network intefaces (when 

[jira] [Resolved] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10960.

Resolution: Not A Problem

Yes, this is how backups work; you have to manually delete the backups after 
you are finished backing them up.

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs the old flushed SS Tables from backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back. 
> Is this behavior by design ? 





[jira] [Commented] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081741#comment-15081741
 ] 

Anubhav Kale commented on CASSANDRA-10960:
--

This is not about manually deleting old backup folders (that's okay). This is 
about C* not deleting files from the backups folder when those files were 
deleted as part of compaction. Why is that by design -- can you please 
elaborate?

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs, the old flushed SSTables in the backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back.
> Is this behavior by design?





[jira] [Reopened] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale reopened CASSANDRA-10960:
--

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs, the old flushed SSTables in the backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back.
> Is this behavior by design?





[jira] [Commented] (CASSANDRA-10898) Migrate Compaction Strategy Node by Node

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080936#comment-15080936
 ] 

Sylvain Lebresne commented on CASSANDRA-10898:
--

bq.  it looks like in your example 1.1.1.1 and 1.1.1.2 would be the IP 
addresses of which Cassandra Nodes have been overridden to use a different 
compaction strategy?

Right.

bq. And honestly for how little you would need this a quick dirty script might 
be fine.

Just to clarify, my suggestion is not meant to be _only_ for the use case you 
opened the ticket for. I just think it could be a neat option to be able to 
test specific settings on a single node (or a couple of nodes) easily, for 
settings that are intrinsically local (so at least compaction strategy, caching 
options and sstable compression for now). And while those changes can be made 
through JMX via scripts (see the sketch below), it's slightly involved, and not 
having them persist across restarts can be operationally annoying. But this is 
nothing more than a nice-to-have in my mind.
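For illustration, the "slightly involved" JMX route could look roughly like this; the MBean name follows the 2.x layout, and the attribute shown is an assumption that may differ per Cassandra version:
{code}
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetLocalCompactionStrategy
{
    public static void main(String[] args) throws Exception
    {
        // Connect to a single node's JMX port (7199 by default).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://1.1.1.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName cfs = new ObjectName("org.apache.cassandra.db:type=ColumnFamilies,keyspace=ks,columnfamily=tbl");
            // Change the compaction strategy on this node only; the change
            // does not persist across a restart.
            mbs.setAttribute(cfs, new Attribute("CompactionStrategyClass",
                    "org.apache.cassandra.db.compaction.LeveledCompactionStrategy"));
        }
    }
}
{code}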

bq. Another option I thought of is having a mechanism to control the maximum 
simultaneous compactions for table in the cluster.

Sure, though that solves a slightly different problem than the one I made my 
suggestion for (as explained above). But being able to somewhat limit the 
amount of concurrent compaction going on in the cluster is also an interesting 
suggestion, and not just for when changing compaction strategy. It's probably 
a bit more involved, however.

> Migrate Compaction Strategy Node by Node
> 
>
> Key: CASSANDRA-10898
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10898
> Project: Cassandra
>  Issue Type: Wish
>  Components: Compaction, Tools
>Reporter: Andrew From
>
> It would be a great feature to be able to slowly change the compaction 
> strategy of a ColumnFamily node-by-node instead of cluster-wide. Currently, if 
> you change it cluster-wide, there's no good way to predict how long it will 
> take. Thus the process could run for days while you still need the live data, 
> but the cluster responds much more slowly due to the compaction strategy 
> migration.
> I stumbled across 
> http://blog.alteroot.org/articles/2015-04-20/change-cassandra-compaction-strategy-on-production-cluster.html
>  which gave me the idea. I was thinking this would be a nice feature to add 
> to NodeTool; provided that the strategy in the blog is sound, I wouldn't mind 
> going ahead with the dev work to automate it. If not, I would love to hear 
> other ideas on how to best make this happen.





[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080970#comment-15080970
 ] 

Sylvain Lebresne commented on CASSANDRA-10961:
--

[~xiaost] I suspect we'll need more info to track down the underlying problem. 

At least a confirmation of which node is the added one (I strongly suspect 
HostB, but it can't hurt to be sure) and the schema of the table for which this 
happens might give some clue. And when you say "all the time", could you 
elaborate on how many times we're talking about here, as well as whether the 
error always happens for the same table?

[~Jack Doo] as far as I can tell from your trace, this is not the same error 
(you don't have the {{IllegalArgumentException: Not enough bytes}} which is 
indicative of a server-side bug) and from what you posted, this might be a 
network problem. In any case, tracking 2 different problems on the same ticket 
will be confusing, so if you can reproduce your own problem, please open a 
separate ticket.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> 

[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080995#comment-15080995
 ] 

xiaost commented on CASSANDRA-10961:


[~slebresne] 

The node receiving data is the added one. 
We only have one table in the cluster :-(
I tried to add a node to the cluster, but it failed with the errors above and I 
killed the process.  
The progress always seems to get stuck while streaming files larger than 10GB 
(after about 4~5 attempts?). 
I used
{code}nodetool bootstrap resume{code}
to resume the bootstrap, but it failed with logs like the other issue: 
https://issues.apache.org/jira/browse/CASSANDRA-10448

The cluster now has 3 nodes (UN status) with about 600GB*3 of data on HDD*3, 
and one node (UJ status) with a ~1GB load.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> 

[jira] [Updated] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10961:
-
Assignee: Paulo Motta

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> ... 1 common frames omitted
> ERROR [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:524 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] 

[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081042#comment-15081042
 ] 

Sylvain Lebresne commented on CASSANDRA-10961:
--

Thanks for the info. The {{IllegalArgumentException}} means that either the 
comparator on the new node is wrong, or the streamed data has a problem. Given 
that you don't get errors while reading data on the existing nodes, it's 
unlikely that the original sstable is corrupted, so it might be something 
streaming specific. Especially since CASSANDRA-10448 suggests there is 
something wrong with streaming in 2.2. In fact, if streaming was somehow 
corrupting data, that would explain both this and the different stacks found on 
CASSANDRA-10448, so this might all be the same problem. We'll have someone 
investigate, but any info that could help us reproduce would be really helpful.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> 

[jira] [Updated] (CASSANDRA-10448) "Unknown type 0" Stream failure on Repair

2016-01-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10448:
-
Assignee: Paulo Motta

> "Unknown type 0" Stream failure on Repair
> -
>
> Key: CASSANDRA-10448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10448
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.2.2
> 5 Nodes in Google Compute Engine
> Java 1.8.0_60
>Reporter: Omri Iluz
>Assignee: Paulo Motta
> Fix For: 2.2.x
>
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, casslogs.txt, 
> receiversystem.log, sendersystem.log
>
>
> While running repair after upgrading to 2.2.2 I am getting many stream fail 
> errors:
> {noformat}
> [2015-10-05 23:52:30,353] Repair session 4c181051-6bbb-11e5-acdb-d9a8bbd39330 
> for range (59694553044959221,86389982480621619] failed with error [repair 
> #4c181051-6bbb-11e5-acdb-d9a8bbd39330 on px/acti
> vities, (59694553044959221,86389982480621619]] Sync failed between 
> /10.240.81.104 and /10.240.134.221 (progress: 4%)
> {noformat}
> Logs from both sides of the stream:
> Sides 1 -
> {noformat}
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:111 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Creating new streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52723] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 13 files(517391317 bytes), sending 10 
> files(469491729 bytes)
> ERROR [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,234 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> java.lang.IllegalArgumentException: Unknown type 0
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:96)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:57)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.81.104 is complete
> WARN  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:209 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Stream failed
> {noformat}
> Side 2 -
> {noformat}
> INFO  [AntiEntropyStage:1] 2015-10-05 23:52:30,060 StreamResultFuture.java:86 
> - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Executing streaming plan for 
> Repair
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,061 
> StreamSession.java:232 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Starting streaming to /10.240.134.221
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,063 
> StreamCoordinator.java:213 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Beginning stream session with /10.240.134.221
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 10 files(469491729 bytes), sending 13 
> files(517391317 bytes)
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.134.221 is complete
> ERROR [STREAM-OUT-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
>   at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:79)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:76)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> 

[jira] [Commented] (CASSANDRA-6737) A batch statements on a single partition should not create a new CF object for each update

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080952#comment-15080952
 ] 

Sylvain Lebresne commented on CASSANDRA-6737:
-

bq. Would you say there's any limitation/recommendation regarding the number of 
statements contained in a single partition batch (or the summarized size in kb)?

A single partition batch is internally a single mutation, so unless I've missed 
some recent changes to the commit log, you're hard-limited by the size of a 
commit log segment, and I believe by default we actually limit that to half of 
a segment, so 16MB (see {{max_mutation_size_in_kb}} in the yaml).
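Worked out under the default settings (a sketch, assuming {{commitlog_segment_size_in_mb: 32}}):
{code}
// Default commit log segment size is 32MB; a single mutation is capped at
// half a segment, which matches the default max_mutation_size_in_kb.
static final int COMMITLOG_SEGMENT_SIZE_IN_MB = 32;
static final int MAX_MUTATION_SIZE_IN_KB = COMMITLOG_SEGMENT_SIZE_IN_MB * 1024 / 2; // 16384 KB = 16MB
{code}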

Now, I'd really appreciate it if you could use the mailing list for such 
questions, as it is a more appropriate venue (especially since the question is 
barely related to the original ticket).

> A batch statements on a single partition should not create a new CF object 
> for each update
> --
>
> Key: CASSANDRA-6737
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6737
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>  Labels: performance
> Fix For: 2.0.6
>
> Attachments: 6737.2.patch, 6737.txt
>
>
> BatchStatement creates a new ColumnFamily object (as well as a new 
> RowMutation object) for every update in the batch, even if all those update 
> are actually on the same partition. This is particularly inefficient when 
> bulkloading data into a single partition (which is not all that uncommon).





[jira] [Updated] (CASSANDRA-10944) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-01-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10944:
-
Reviewer: Marcus Eriksson

> ERROR [CompactionExecutor] CassandraDaemon.java  Exception in thread 
> -
>
> Key: CASSANDRA-10944
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10944
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.0.x, 3.x
>
>
> Hey. Please help me with a problem. Recently I updated to 3.0.1 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2596] 2015-12-28 08:30:27,733 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2596,1,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:49) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$100(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$53/1339741213.apply(Unknown
>  Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.lambda$purge$95(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$52/1236900032.apply(Unknown 
> Source) ~[na:na]
>   at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:614)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:657) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:632) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> 

[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081060#comment-15081060
 ] 

Paulo Motta commented on CASSANDRA-10961:
-

[~xiaost] can you please try reproducing the issue, replacing your cassandra 
2.2.4 jar with this [modified 
jar|https://issues.apache.org/jira/secure/attachment/12776132/apache-cassandra-2.2.4-SNAPSHOT.jar],
 which contains more detailed debug logging? If you prefer to build your own 
jar, you can clone [this 
branch|https://github.com/pauloricardomg/cassandra/tree/2.2-10488] and run 
{{ant jar}} to generate the jar.

Please attach the debug.log of source and destination nodes after replacing the 
jar and reproducing the issue so we can investigate it. Thanks!

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
>
> we got the same problem all the time when we add nodes to cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
>

[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally

2016-01-04 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080872#comment-15080872
 ] 

Stefania commented on CASSANDRA-10938:
--

The flight recorder file attached, _recording_127.0.0.1.jfr_, provides the best 
information to understand the problem: about 15 shared pool worker threads are 
busy copying the {{NonBlockingHashMap}} that we use to store the query states 
in {{ServerConnection}}. This consumes 99% of the CPU on the machine (note that 
I lowered the priority of the process when I recorded that file).

We store one entry per stream id and we never clean this map, but this is not 
the issue. When inserting data with cassandra-stress, we use up to 33k stream 
ids, whilst when inserting data with COPY FROM the python driver is careful to 
reuse stream ids and we only use around 300 of them. So the map should not be 
resized as much, and yet the problem occurs with COPY FROM and not with 
cassandra-stress. The difference between the two is probably that in COPY FROM 
we have many more concurrent requests, hence a higher concurrency level on the 
map.

Of all the hot threads in the flight recorder file, only one is doing a 
{{putIfAbsent}} whilst the other ones are simply accessing a value via a 
{{get}}. However, the map is designed so that all threads help with the copy, 
and this is what's happening here. I suspect a bug that prevents threads from 
making progress and keeps them spinning.

We are currently using the latest available version of {{NonBlockingHashMap}}, 
version 1.0.6, from [this 
repository|https://github.com/boundary/high-scale-lib].

We have a number of options:

- Fix {{NonBlockingHashMap}}
- Replace it
- Instantiate it with an initial size to prevent resizing (4K fixes this 
specific case); a minimal sketch of this option follows below.
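
For reference, a rough sketch of the third option, assuming the 
boundary/high-scale-lib artifact is on the classpath (the key and value types 
here are placeholders, not Cassandra's actual ones):

{code}
import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class PresizedMapSketch
{
    // Sized up front (4096 covers the COPY FROM case above) so the table
    // never needs resizing and readers never have to help copy it.
    private final NonBlockingHashMap<Integer, Object> queryStates =
            new NonBlockingHashMap<>(4096);

    public Object putIfAbsentExample(int streamId, Object state)
    {
        return queryStates.putIfAbsent(streamId, state);
    }
}
{code}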


> test_bulk_round_trip_blogposts is failing occasionally
> --
>
> Key: CASSANDRA-10938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10938
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, 
> node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr
>
>
> We get timeouts occasionally that cause the number of records to be incorrect:
> http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10448) "Unknown type 0" Stream failure on Repair

2016-01-04 Thread xiaost (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080888#comment-15080888
 ] 

xiaost commented on CASSANDRA-10448:


+1, in 2.2.4

3-node cluster while adding a new one.

> "Unknown type 0" Stream failure on Repair
> -
>
> Key: CASSANDRA-10448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10448
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.2.2
> 5 Nodes in Google Compute Engine
> Java 1.8.0_60
>Reporter: Omri Iluz
> Fix For: 2.2.x
>
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, casslogs.txt, 
> receiversystem.log, sendersystem.log
>
>
> While running repair after upgrading to 2.2.2 I am getting many stream 
> failure errors:
> {noformat}
> [2015-10-05 23:52:30,353] Repair session 4c181051-6bbb-11e5-acdb-d9a8bbd39330 
> for range (59694553044959221,86389982480621619] failed with error [repair 
> #4c181051-6bbb-11e5-acdb-d9a8bbd39330 on px/acti
> vities, (59694553044959221,86389982480621619]] Sync failed between 
> /10.240.81.104 and /10.240.134.221 (progress: 4%)
> {noformat}
> Logs from both sides of the stream:
> Sides 1 -
> {noformat}
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:111 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Creating new streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52723] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 13 files(517391317 bytes), sending 10 
> files(469491729 bytes)
> ERROR [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,234 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> java.lang.IllegalArgumentException: Unknown type 0
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:96)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:57)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.81.104 is complete
> WARN  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:209 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Stream failed
> {noformat}
> Side 2 -
> {noformat}
> INFO  [AntiEntropyStage:1] 2015-10-05 23:52:30,060 StreamResultFuture.java:86 
> - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Executing streaming plan for 
> Repair
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,061 
> StreamSession.java:232 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Starting streaming to /10.240.134.221
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,063 
> StreamCoordinator.java:213 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Beginning stream session with /10.240.134.221
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 10 files(469491729 bytes), sending 13 
> files(517391317 bytes)
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.134.221 is complete
> ERROR [STREAM-OUT-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
>   at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:79)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:76)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> 

[jira] [Commented] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081833#comment-15081833
 ] 

Anubhav Kale commented on CASSANDRA-10960:
--

Here is a scenario:

Time t1: KS/CF/s1.db s2.db KS/CF/backups/s1.db s2.db
Time t2: KS/CF/s1.db s2.db s3.db KS/CF/backups/s1.db s2.db s3.db [since any 
time an SSTable is flushed it is written to backups as well]
Time t3 (Compaction ran): KS/CF/s4.db KS/CF/backups/s1.db s2.db s3.db s4.db 

This is existing behavior - correct? The data hasn't changed here, it's 
simply represented via s4. It is reasonable to keep s1, s2, s3, s4 in backups so 
that folks can go back to any point in time. However, if folks want to move 
data from backups to elsewhere outside C* and copy it back during recovery -- 
it adds the unnecessary burden of copying the same data multiple times (copying 
back s4 should have been enough here for recovery). 

Does this make sense? Please let me know if I did not understand something 
correctly here.

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs the old flushed SS Tables from backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back. 
> Is this behavior by design ? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2016-01-04 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081936#comment-15081936
 ] 

Brandon Williams commented on CASSANDRA-10726:
--

Let's make it configurable for a) blocking read repair, b) non-blocking read 
repair, and c) no read repair at all.

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: add deprecation warning for Thrift

2016-01-04 Thread jbellis
add deprecation warning for Thrift


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bc147ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bc147ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bc147ce

Branch: refs/heads/trunk
Commit: 1bc147ceabc2e423e31c3810d6efcbfba5f57b02
Parents: 01d26dd
Author: Jonathan Ellis 
Authored: Mon Jan 4 17:34:33 2016 -0600
Committer: Jonathan Ellis 
Committed: Mon Jan 4 17:34:49 2016 -0600

--
 NEWS.txt | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bc147ce/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33fef1f..1269c98 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -31,6 +31,10 @@ Upgrading
- Running Cassandra as root is prevented by default.
- JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file
 
+Deprecation
+---
+   - The Thrift API is deprecated and will be removed in Cassandra 4.0.
+
 
 3.1
 =



[1/3] cassandra git commit: add deprecation warning for Thrift

2016-01-04 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.2 01d26dd3f -> e2ce406f3
  refs/heads/cassandra-3.3 01d26dd3f -> 1bc147cea
  refs/heads/trunk 01d26dd3f -> 1bc147cea


add deprecation warning for Thrift


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e2ce406f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e2ce406f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e2ce406f

Branch: refs/heads/cassandra-3.2
Commit: e2ce406f3932040cc9a1298f7cc86c4ac78eef05
Parents: 01d26dd
Author: Jonathan Ellis 
Authored: Mon Jan 4 17:34:33 2016 -0600
Committer: Jonathan Ellis 
Committed: Mon Jan 4 17:34:33 2016 -0600

--
 NEWS.txt | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2ce406f/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33fef1f..1269c98 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -31,6 +31,10 @@ Upgrading
- Running Cassandra as root is prevented by default.
- JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file
 
+Deprecation
+---
+   - The Thrift API is deprecated and will be removed in Cassandra 4.0.
+
 
 3.1
 =



[2/3] cassandra git commit: add deprecation warning for Thrift

2016-01-04 Thread jbellis
add deprecation warning for Thrift


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bc147ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bc147ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bc147ce

Branch: refs/heads/cassandra-3.3
Commit: 1bc147ceabc2e423e31c3810d6efcbfba5f57b02
Parents: 01d26dd
Author: Jonathan Ellis 
Authored: Mon Jan 4 17:34:33 2016 -0600
Committer: Jonathan Ellis 
Committed: Mon Jan 4 17:34:49 2016 -0600

--
 NEWS.txt | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bc147ce/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33fef1f..1269c98 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -31,6 +31,10 @@ Upgrading
- Running Cassandra as root is prevented by default.
- JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file
 
+Deprecation
+---
+   - The Thrift API is deprecated and will be removed in Cassandra 4.0.
+
 
 3.1
 =



[jira] [Resolved] (CASSANDRA-10782) AssertionError at getApproximateKeyCount

2016-01-04 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10782.

Resolution: Duplicate

> AssertionError at getApproximateKeyCount
> 
>
> Key: CASSANDRA-10782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10782
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: C* 2.1.11, Debian Wheezy
>Reporter: mlowicki
>
> {code}
> ERROR [CompactionExecutor:9845] 2015-11-28 09:26:10,525 
> CassandraDaemon.java:227 - Exception in thread 
> Thread[CompactionExecutor:9845,1,main]
> java.lang.AssertionError: 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-6335-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:268)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2016-01-04 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081929#comment-15081929
 ] 

Richard Low commented on CASSANDRA-10726:
-

+1 on the option to disable.

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9258) Range movement causes CPU & performance impact

2016-01-04 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-9258:
-
Attachment: 0001-pending-ranges-maps-for-2.2.patch
0001-pending-ranges-map.patch

Address comments.

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: 0001-pending-ranges-map.patch, 
> 0001-pending-ranges-maps-for-2.2.patch, Screenshot 2015-12-16 16.11.36.png, 
> Screenshot 2015-12-16 16.11.51.png
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9258) Range movement causes CPU & performance impact

2016-01-04 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-9258:
-
Attachment: (was: 0001-pending-ranges-maps-for-2.2.patch)

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: Screenshot 2015-12-16 16.11.36.png, Screenshot 
> 2015-12-16 16.11.51.png
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9258) Range movement causes CPU & performance impact

2016-01-04 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-9258:
-
Attachment: (was: 0001-pending-ranges-map.patch)

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: Screenshot 2015-12-16 16.11.36.png, Screenshot 
> 2015-12-16 16.11.51.png
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[cassandra] Git Push Summary

2016-01-04 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.2 [created] 01d26dd3f
  refs/heads/cassandra-3.3 [created] 01d26dd3f


[jira] [Commented] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082020#comment-15082020
 ] 

Anubhav Kale commented on CASSANDRA-10960:
--

Thanks for the explanation. While I don't want to continue the conversation 
here, IMHO C* needs to enable a behavior where "old" SSTables from backups are 
deleted whenever they are deleted from the actual data folders as part of 
compaction. Otherwise, too much duplicate data has to be moved back to the 
nodes at the time of recovery.
The specific scenario is when backups need to be moved outside of Cassandra; 
otherwise the current behavior is good enough.

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs the old flushed SS Tables from backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back. 
> Is this behavior by design ? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8755) Replace trivial uses of String.replace/replaceAll/split with StringUtils methods

2016-01-04 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8755:

Fix Version/s: 3.2

> Replace trivial uses of String.replace/replaceAll/split with StringUtils 
> methods
> 
>
> Key: CASSANDRA-8755
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8755
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jaroslav Kamenik
>Assignee: Alexander Shopov
>Priority: Trivial
>  Labels: lhf
> Fix For: 3.2
>
> Attachments: 8755.tar.gz, trunk-8755.patch, trunk-8755.txt
>
>
> There are places in the code where those regex-based methods are used with 
> plain, non-regexp strings, so the StringUtils alternatives should be faster.
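
To make the distinction concrete, a small sketch of the literal-string 
alternatives the ticket proposes (assuming commons-lang3, which Cassandra 
already depends on, is on the classpath):

{code}
import org.apache.commons.lang3.StringUtils;

public class LiteralStringOpsExample
{
    public static void main(String[] args)
    {
        // String.replaceAll interprets its first argument as a regex, so a
        // plain separator like '.' must be escaped and regex machinery runs.
        String viaRegex = "a.b.c".replaceAll("\\.", "-");

        // StringUtils.replace and StringUtils.split treat their arguments as
        // literal strings/chars, with no regex involved at all.
        String viaUtils = StringUtils.replace("a.b.c", ".", "-");
        String[] parts = StringUtils.split("a,b,c", ',');

        System.out.println(viaRegex + " " + viaUtils + " " + parts.length);
    }
}
{code}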



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10960) Compaction should delete old files from incremental backups folder

2016-01-04 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10960.

Resolution: Not A Problem

s4 would not be in the backups directory; only SSTables which have been flushed 
end up there. That is the point of incremental backups: you start from a 
snapshot and add any sstables flushed since the snapshot to recover a certain 
point in time.

If you have more questions, please ask them on the mailing list; Jira is only 
used for bug reports, and this is operating the way it should be.

> Compaction should delete old files from incremental backups folder
> --
>
> Key: CASSANDRA-10960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10960
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>
> When compaction runs the old flushed SS Tables from backups folder are not 
> deleted. If folks need to move the backups folder somewhere outside the 
> cluster, recovery becomes slower because unnecessary files need to be copied 
> back. 
> Is this behavior by design ? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6696) Partition sstables by token range

2016-01-04 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081526#comment-15081526
 ] 

Yuki Morishita edited comment on CASSANDRA-6696 at 1/4/16 11:40 PM:


[~krummas] I still prefer just returning the 'keyspace name/table name pair' in 
{{RangeAwareSSTableWriter#getFilename}} over adding a UUID to {{ProgressInfo}}. 
Even with an ID, {{nodetool netstats}} will still show a constantly changing 
file name with inaccurate bytes.
My suggested change is [here|https://github.com/krummas/cassandra/pull/2].
 -{{SSTableMultiWriter#getFilename}} is also used in the debug log when flushing 
of SSTable(s) completes, and because {{RangeAwareSSTableWriter}} can write 
SSTables when flushing, I think displaying just the ks/table name there too is 
no more confusing than displaying only the last written file name.- (edit: This 
looks like no problem here, my bad)


was (Author: yukim):
[~krummas] I still prefer just returning the 'keyspace name/table name pair' in 
{{RangeAwareSSTableWriter#getFilename}} over adding a UUID to {{ProgressInfo}}. 
Even with an ID, {{nodetool netstats}} will still show a constantly changing 
file name with inaccurate bytes. {{SSTableMultiWriter#getFilename}} is also used 
in the debug log when flushing of SSTable(s) completes, and because 
{{RangeAwareSSTableWriter}} can write SSTables when flushing, I think 
displaying just the ks/table name there too is no more confusing than displaying 
only the last written file name.

> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, 
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is more than gc grace, it got compacted in Nodes A and B 
> since it got compacted with the actual data. So there is no trace of this row 
> column in node A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone in now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-04 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-10887:
--
Attachment: CASSANDRA_10887_v3.diff

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_v2.diff, 
> CASSANDRA_10887_v3.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9303) Match cassandra-loader options in COPY FROM

2016-01-04 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082211#comment-15082211
 ] 

Paulo Motta commented on CASSANDRA-9303:


Nice job, we're nearly there! :) Now tests are passing locally on Windows and 
the code looks good. Some minor nits:
* Could you improve the local mutation check on {{BatchStatement}} by using 
{{StorageService.getLocalRanges}} and {{Range.isInRanges}}, and also skip the 
{{isMutationLocal()}} evaluation if the {{localMutationsOnly}} variable is 
{{false}}? Also, you can remove the cqlsh reference in the comment, since even 
in a non-cqlsh context the warning is not necessary if there are only local 
mutations in an unlogged batch.
* Although the fix for CASSANDRA-10938 looks harmless, I'm not sure if it could 
have some unintended consequences, so I'd prefer to commit it separately after 
discussion on CASSANDRA-10938.

Did you validate the performance of the new batch-by-replica approach? In the 
end it seems CASSANDRA-10938 was not caused by batching by partition key, and 
there was a lot of back-and-forth between batch-by-replica vs 
batch-by-partition, so it's not very clear which approach is best. We could 
probably do a more thorough evaluation/validation later, but it would be nice 
to make sure our batching strategy performs well.

Since there are also Java code changes, can you also submit unit tests in 
addition to the dtests on cassci? Thanks!

> Match cassandra-loader options in COPY FROM
> ---
>
> Key: CASSANDRA-9303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: dtest.out
>
>
> https://github.com/brianmhess/cassandra-loader added a bunch of options to 
> handle real world requirements, we should match those.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-04 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082196#comment-15082196
 ] 

sankalp kohli commented on CASSANDRA-10887:
---

cc [~barnie]

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_v2.diff, 
> CASSANDRA_10887_v3.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-04 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082195#comment-15082195
 ] 

sankalp kohli commented on CASSANDRA-10887:
---

Attached v3 with rack aware tests. 

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_v2.diff, 
> CASSANDRA_10887_v3.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix parameterized logging marker/parm count mismatch

2016-01-04 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5fe709706 -> 2188bec56


fix parameterized logging marker/parm count mismatch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2188bec5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2188bec5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2188bec5

Branch: refs/heads/trunk
Commit: 2188bec560d973e58da6781fade6f52781b154fd
Parents: 5fe7097
Author: Dave Brosius 
Authored: Mon Jan 4 22:27:50 2016 -0500
Committer: Dave Brosius 
Committed: Mon Jan 4 22:27:50 2016 -0500

--
 src/java/org/apache/cassandra/hints/HintVerbHandler.java | 2 +-
 src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2188bec5/src/java/org/apache/cassandra/hints/HintVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/hints/HintVerbHandler.java 
b/src/java/org/apache/cassandra/hints/HintVerbHandler.java
index b2c7b6a..36d8a10 100644
--- a/src/java/org/apache/cassandra/hints/HintVerbHandler.java
+++ b/src/java/org/apache/cassandra/hints/HintVerbHandler.java
@@ -67,7 +67,7 @@ public final class HintVerbHandler implements 
IVerbHandler
 }
 catch (MarshalException e)
 {
-logger.warn("Failed to validate a hint for {} (table id {}) - 
skipped", hostId);
+logger.warn("Failed to validate a hint for {} - skipped", hostId);
 reply(id, message.from);
 return;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2188bec5/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
--
diff --git a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java 
b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
index 30e5fe0..93c1193 100644
--- a/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
+++ b/src/java/org/apache/cassandra/hints/LegacyHintsMigrator.java
@@ -213,7 +213,7 @@ public final class LegacyHintsMigrator
 }
 catch (IOException e)
 {
-logger.error("Failed to migrate a hint for {} from legacy {}.{} 
table: {}",
+logger.error("Failed to migrate a hint for {} from legacy {}.{} 
table",
  row.getUUID("target_id"),
  SystemKeyspace.NAME,
  SystemKeyspace.LEGACY_HINTS,
@@ -222,7 +222,7 @@ public final class LegacyHintsMigrator
 }
 catch (MarshalException e)
 {
-logger.warn("Failed to validate a hint for {} (table id {}) from 
legacy {}.{} table - skipping: {})",
+logger.warn("Failed to validate a hint for {} from legacy {}.{} 
table - skipping",
 row.getUUID("target_id"),
 SystemKeyspace.NAME,
 SystemKeyspace.LEGACY_HINTS,
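
For context, a minimal self-contained sketch of the SLF4J rule this commit 
enforces (the logger and the hostId value are placeholders): each {} marker in 
the format string consumes exactly one trailing argument, so marker count and 
argument count must match.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MarkerCountExample
{
    private static final Logger logger = LoggerFactory.getLogger(MarkerCountExample.class);

    public static void main(String[] args)
    {
        String hostId = "placeholder-host-id";
        // Mismatched: two {} markers but only one argument, so the second
        // marker is emitted unfilled instead of failing loudly.
        logger.warn("Failed to validate a hint for {} (table id {}) - skipped", hostId);
        // Matched: one marker, one argument.
        logger.warn("Failed to validate a hint for {} - skipped", hostId);
    }
}
{code}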



[jira] [Updated] (CASSANDRA-10966) guard against legacy migration failure due to non-existent index name

2016-01-04 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-10966:
-
Attachment: 10966.txt

> guard against legacy migration failure due to non-existent index name
> -
>
> Key: CASSANDRA-10966
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10966
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10966.txt
>
>
> The code checks whether an index has a name, but then blindly goes ahead and 
> tries to create the index regardless. That would cause an NPE. 
> Simple guard against that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10966) guard against legacy migration failure due to non-existent index name

2016-01-04 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-10966:


 Summary: guard against legacy migration failure due to 
non-existent index name
 Key: CASSANDRA-10966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10966
 Project: Cassandra
  Issue Type: Improvement
  Components: Distributed Metadata
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 3.x


The code checks whether an index has a name, but then blindly goes ahead and 
tries to create the index regardless. That would cause an NPE. 

Simple guard against that; a sketch follows below.
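
A minimal sketch of the guard, assuming names from the legacy schema migration 
loop (the {{createIndex}} helper and the column name are hypothetical):

{code}
// only attempt to build the index when a name is actually present;
// legacy rows without one would otherwise lead to an NPE further down
String indexName = row.has("index_name") ? row.getString("index_name") : null;
if (indexName == null)
    logger.warn("Skipping legacy index with no name on table {}.{}", keyspace, table);
else
    indexes.add(createIndex(indexName, row)); // hypothetical helper
{code}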



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: avoid Gossiper dead checks on irrelevant ApplicationStates

2016-01-04 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2188bec56 -> 73793d6f0


avoid Gossiper dead checks on irrelevant ApplicationStates


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73793d6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73793d6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73793d6f

Branch: refs/heads/trunk
Commit: 73793d6f0e03f72e561b994577f48e119715d353
Parents: 2188bec
Author: Dave Brosius 
Authored: Mon Jan 4 23:59:35 2016 -0500
Committer: Dave Brosius 
Committed: Mon Jan 4 23:59:35 2016 -0500

--
 .../org/apache/cassandra/locator/ReconnectableSnitchHelper.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73793d6f/src/java/org/apache/cassandra/locator/ReconnectableSnitchHelper.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/ReconnectableSnitchHelper.java 
b/src/java/org/apache/cassandra/locator/ReconnectableSnitchHelper.java
index 3277af7..6b6182f 100644
--- a/src/java/org/apache/cassandra/locator/ReconnectableSnitchHelper.java
+++ b/src/java/org/apache/cassandra/locator/ReconnectableSnitchHelper.java
@@ -80,7 +80,7 @@ public class ReconnectableSnitchHelper implements 
IEndpointStateChangeSubscriber
 
 public void onChange(InetAddress endpoint, ApplicationState state, 
VersionedValue value)
 {
-if (preferLocal && 
!Gossiper.instance.isDeadState(Gossiper.instance.getEndpointStateForEndpoint(endpoint))
 && state == ApplicationState.INTERNAL_IP)
+if (preferLocal && state == ApplicationState.INTERNAL_IP && 
!Gossiper.instance.isDeadState(Gossiper.instance.getEndpointStateForEndpoint(endpoint)))
 reconnect(endpoint, value);
 }
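
The change is purely about evaluation order: {{&&}} short-circuits left to 
right, so moving the cheap enum comparison in front means the gossiper lookup 
and dead-state check only run for the single ApplicationState this handler 
cares about, instead of on every state change. The resulting condition, as in 
the hunk above:

{code}
// the enum check filters first, short-circuiting the expensive gossiper call
if (preferLocal
    && state == ApplicationState.INTERNAL_IP
    && !Gossiper.instance.isDeadState(Gossiper.instance.getEndpointStateForEndpoint(endpoint)))
    reconnect(endpoint, value);
{code}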
 



cassandra git commit: simplify: no need for 'Unsupported' exception handling with UTF-8

2016-01-04 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1bc147cea -> 5fe709706


simplify: no need for 'Unsupported' exception handling with UTF-8


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5fe70970
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5fe70970
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5fe70970

Branch: refs/heads/trunk
Commit: 5fe7097068ae5cb1272b28c2bfe0ef3f48e2ba61
Parents: 1bc147c
Author: Dave Brosius 
Authored: Mon Jan 4 22:13:06 2016 -0500
Committer: Dave Brosius 
Committed: Mon Jan 4 22:13:06 2016 -0500

--
 src/java/org/apache/cassandra/utils/MD5Digest.java | 12 +++-
 1 file changed, 3 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe70970/src/java/org/apache/cassandra/utils/MD5Digest.java
--
diff --git a/src/java/org/apache/cassandra/utils/MD5Digest.java 
b/src/java/org/apache/cassandra/utils/MD5Digest.java
index 2dc57de..2feb09e 100644
--- a/src/java/org/apache/cassandra/utils/MD5Digest.java
+++ b/src/java/org/apache/cassandra/utils/MD5Digest.java
@@ -17,9 +17,10 @@
  */
 package org.apache.cassandra.utils;
 
-import java.io.UnsupportedEncodingException;
+import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
 
+
 /**
  * The result of the computation of an MD5 digest.
  *
@@ -51,14 +52,7 @@ public class MD5Digest
 
 public static MD5Digest compute(String toHash)
 {
-try
-{
-return compute(toHash.getBytes("UTF-8"));
-}
-catch (UnsupportedEncodingException e)
-{
-throw new RuntimeException(e.getMessage());
-}
+return compute(toHash.getBytes(StandardCharsets.UTF_8));
 }
 
 @Override
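
The simplification is safe because {{String.getBytes(String)}} declares a 
checked {{UnsupportedEncodingException}} only since the charset name is 
resolved at runtime, whereas the {{Charset}} overload cannot fail and 
{{StandardCharsets.UTF_8}} is guaranteed to exist on every JVM. A 
self-contained illustration:

{code}
import java.nio.charset.StandardCharsets;

public class Utf8Demo
{
    public static void main(String[] args)
    {
        // no try/catch needed: the Charset overload declares no checked exception
        byte[] bytes = "hash me".getBytes(StandardCharsets.UTF_8);
        System.out.println(bytes.length);
    }
}
{code}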



[jira] [Comment Edited] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally

2016-01-04 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080872#comment-15080872
 ] 

Stefania edited comment on CASSANDRA-10938 at 1/4/16 12:19 PM:
---

The flight recorder file attached, _recording_127.0.0.1.jfr_, provides the best 
information to understand the problem: about 15 shared pool worker threads are 
busy copying the {{NonBlockingHashMap}} that we use to store the query states 
in {{ServerConnection}}. This consumes 99% of the CPU on the machine (note that 
I lowered the priority of the process when I recorded that file).

We store one entry per stream id and we never clean this map but this is not 
the issue. When inserting data with cassandra-stress, we use up to 33k stream 
ids whilst when inserting data with COPY FROM the python driver is careful to 
reuse stream ids and we only use around 300 of them. So the map should not be 
resized as much and yet the problem occurs with COPY FROM (approximately once 
every twenty times) and never with cassandra-stress. The difference between the 
two is probably that in COPY FROM we have more concurrent requests, hence a 
higher concurrency level on the map.

Of all hot threads in the flight recorder file, only one is doing a 
{{putIfAbsent}} whilst the other ones are simply accessing a value via a 
{{get}}. However the map is designed so that all threads help with the copy and 
this is what's happening here. I suspect a bug that prevents threads from 
making progress and keeps them spinning.

We are currently using the latest available version of {{NonBlockingHashMap}}, 
version 1.0.6, from [this 
repository|https://github.com/boundary/high-scale-lib].

We have a number of options:

- Fix {{NonBlockingHashMap}}
- Replace it
- Instantiate it with an initial size to prevent resizing (4K fixes this 
specific case; a minimal sketch follows below). 
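
A minimal sketch of the third option, assuming the map in {{ServerConnection}} 
keeps its current shape ({{QueryState}} values keyed by stream id; {{Object}} 
stands in for {{QueryState}} to keep the snippet self-contained):

{code}
import java.util.concurrent.ConcurrentMap;
import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class ServerConnectionSketch
{
    // presizing to 4K keeps the map from ever entering its cooperative
    // resize/copy phase for the ~300 stream ids COPY FROM actually uses
    private final ConcurrentMap<Integer, Object> queryStates =
            new NonBlockingHashMap<>(4096);
}
{code}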



was (Author: stefania):
The flight recorder file attached, _recording_127.0.0.1.jfr_, provides the best 
information to understand the problem: about 15 shared pool worker threads are 
busy copying the {{NonBlockingHashMap}} that we use to store the query states 
in {{ServerConnection}}. This consumes 99% of the CPU on the machine (note that 
I lowered the priority of the process when I recorded that file).

We store one entry per stream id and we never clean this map but this is not 
the issue. When inserting data with cassandra-stress, we use up to 33k stream 
ids whilst when inserting data with COPY FROM the python driver is careful to 
reuse stream ids and we only use around 300 of them. So the map should not be 
resized as much and yet the problem occurs with COPY FROM and not with 
cassandra-stress. The difference between the two is probably that in COPY FROM 
we have many more concurrent requests, hence a higher concurrency level on the 
map.

Of all hot threads in the flight recorder file, only one is doing a 
{{putIfAbsent}} whilst the other ones are simply accessing a value via a 
{{get}}. However the map is designed so that all threads help with the copy and 
this is what's happening here. I suspect a bug that prevents threads from 
making progress and keeps them spinning.

We are currently using the latest available version of {{NonBlockingHashMap}}, 
version 1.0.6, from [this 
repository|https://github.com/boundary/high-scale-lib].

We have a number of options:

- Fix {{NonBlockingHashMap}}
- Replace it
- Instantiate it with an initial size to prevent resizing (4K fixes this 
specific case). 


> test_bulk_round_trip_blogposts is failing occasionally
> --
>
> Key: CASSANDRA-10938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10938
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x
>
> Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, 
> node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr
>
>
> We get timeouts occasionally that cause the number of records to be incorrect:
> http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10954) [Regression] Error when removing list element with UPDATE statement

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081199#comment-15081199
 ] 

Sylvain Lebresne commented on CASSANDRA-10954:
--

Linking to the very trivial fix along with a simple unit test to exercise it.  
I've started tests for the 3.0 branch but I'm dispensing with pushing a merge 
to trunk, as the patch is beyond trivial and I'd rather save some electricity.

| [patch|https://github.com/pcmanus/cassandra/commits/10954] | [unit 
tests|http://cassci.datastax.com/view/Dev/view/pcmanus/job/pcmanus-10954-testall/]
 | 
[dtests|http://cassci.datastax.com/view/Dev/view/pcmanus/job/pcmanus-10954-dtest/]
 |

bq. about the code block in org.apache.cassandra.cql3.Lists:362, there is an 
if/else if but there is no final else block to catch all other alternatives, is 
it intended?

It is intended. The only case left is when {{value == UNSET_BYTE_BUFFER}}, and 
in that case we want, by definition, to do nothing.
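
For reference, a simplified sketch of the resulting control flow in 
{{Lists$SetterByIndex.execute}} (method names are taken from the stack trace in 
the description; the surrounding code is elided):

{code}
if (value == null)
{
    // CQL semantics: setting an element to null deletes it
    params.addTombstone(column, elementPath);
}
else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
{
    // a bound value overwrites the element in place
    params.addCell(column, elementPath, value);
}
// deliberately no final else: an unset bind marker leaves the element untouched
{code}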


> [Regression] Error when removing list element with UPDATE statement
> ---
>
> Key: CASSANDRA-10954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10954
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.0.0, Cassandra 3.1.1
>Reporter: DOAN DuyHai
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
>
> Steps to reproduce:
> {code:sql}
> CREATE TABLE simple(
>   id int PRIMARY KEY,
>   int_list list<int>
> );
> INSERT INTO simple(id, int_list) VALUES(10, [1,2,3]);
> SELECT * FROM simple;
>  id | int_list
> +---
>  10 | [1, 2, 3]
> UPDATE simple SET int_list[0]=null WHERE id=10;
> ServerError:  message="java.lang.AssertionError">
> {code}
>  Per CQL semantics, setting a column to NULL == deleting it.
>  When using debugger, below is the Java stack trace on server side:
> {noformat}
>  ERROR o.apache.cassandra.transport.Message - Unexpected exception during 
> request; channel = [id: 0x6dbc33bd, /192.168.51.1:57723 => /192.168.51.1:9473]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:49) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:141)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:136)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.Lists$SetterByIndex.execute(Lists.java:362) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:94)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:666)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:606)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:413)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:401)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> 

[jira] [Commented] (CASSANDRA-9303) Match cassandra-loader options in COPY FROM

2016-01-04 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081092#comment-15081092
 ] 

Stefania commented on CASSANDRA-9303:
-

I've also performed a little bit more work:

* Removed the WARN for UNLOGGED batches with multiple partitions introduced by 
CASSANDRA-9399 _if the partitions are only local_.

* Optimized {{split_batches}} to first batch by partition key, if at least two 
rows have the same partition key, and batch by replica only those rows without 
common partition keys. This ensures we optimize single insertions server side 
per partition key and it saves us the cost of accessing the token map to work 
out the replica if we have common partition keys.

* Ensured that {{DCAwareRoundRobinPolicy}} gets the data center name to avoid a 
WARN.

* Applied a workaround for CASSANDRA-10938.

> Match cassandra-loader options in COPY FROM
> ---
>
> Key: CASSANDRA-9303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: dtest.out
>
>
> https://github.com/brianmhess/cassandra-loader added a bunch of options to 
> handle real world requirements, we should match those.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-04 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10924:

Assignee: Andrés de la Peña
Reviewer: Sam Tunnicliffe

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.2, 3.0.x, 3.x, 3.0.3
>
> Attachments: CASSANDRA-10924-v0.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add base table's {{CFMetaData}} to Index' 
> optional static method to validate the custom index options:
> {{public static Map<String, String> validateOptions(CFMetaData cfm, 
> Map<String, String> options);}}
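
To illustrate the motivation, a sketch of how a custom implementation might use 
the proposed signature (the class body and imports are elided, and the 
{{indexed_column}} option name is hypothetical):

{code}
public static Map<String, String> validateOptions(CFMetaData cfm, Map<String, String> options)
{
    String target = options.get("indexed_column"); // hypothetical option
    if (target == null)
        throw new ConfigurationException("Missing required option indexed_column");
    // with the base table's metadata available, the existence (and, if
    // needed, the type) of the column can be checked at CREATE INDEX time
    if (cfm.getColumnDefinition(ColumnIdentifier.getInterned(target, true)) == null)
        throw new ConfigurationException("Unknown column " + target + " in " + cfm.ksName + "." + cfm.cfName);
    // hand back any options this implementation does not recognize
    Map<String, String> unrecognized = new HashMap<>(options);
    unrecognized.remove("indexed_column");
    return unrecognized;
}
{code}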



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaost updated CASSANDRA-10961:
---
Attachment: netstats.log
jstack.log
debug.log

The progress is stuck now :-(

I will give more information later.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: debug.log, jstack.log, netstats.log
>
>
> we get the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, an error appears on HostB:
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
>

[jira] [Created] (CASSANDRA-10962) Cassandra should not create snapshot at restart for compactions_in_progress

2016-01-04 Thread FACORAT (JIRA)
FACORAT created CASSANDRA-10962:
---

 Summary: Cassandra should not create snapshot at restart for 
compactions_in_progress
 Key: CASSANDRA-10962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10962
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 14.04.3 LTS
Reporter: FACORAT
Priority: Minor


If auto_snapshot is set to true in cassandra.yaml, each time you restart 
Cassandra a snapshot is created for system.compactions_in_progress, as the 
table is truncated at Cassandra start.

However, as the data in this table is temporary, Cassandra should not create a 
snapshot for this table (or maybe even for any system.* table). This would be 
consistent with the fact that "nodetool listsnapshots" doesn't even list this 
table.

Example:

$ nodetool listsnapshots | grep compactions
$ ls -lh 
system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/snapshots/
total 16K
drwxr-xr-x 2 cassandra cassandra 4.0K Nov 30 13:12 
1448885530280-compactions_in_progress
drwxr-xr-x 2 cassandra cassandra 4.0K Dec  7 15:36 
1449498977181-compactions_in_progress
drwxr-xr-x 2 cassandra cassandra 4.0K Dec 14 18:20 
1450113621506-compactions_in_progress
drwxr-xr-x 2 cassandra cassandra 4.0K Jan  4 12:53 
1451908396364-compactions_in_progress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10954) [Regression] Error when removing list element with UPDATE statement

2016-01-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-10954:


Assignee: Sylvain Lebresne

> [Regression] Error when removing list element with UPDATE statement
> ---
>
> Key: CASSANDRA-10954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10954
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.0.0, Cassandra 3.1.1
>Reporter: DOAN DuyHai
>Assignee: Sylvain Lebresne
>
> Steps to reproduce:
> {code:sql}
> CREATE TABLE simple(
>   id int PRIMARY KEY,
>   int_list list<int>
> );
> INSERT INTO simple(id, int_list) VALUES(10, [1,2,3]);
> SELECT * FROM simple;
>  id | int_list
> +---
>  10 | [1, 2, 3]
> UPDATE simple SET int_list[0]=null WHERE id=10;
> ServerError:  message="java.lang.AssertionError">
> {code}
>  Per CQL semantics, setting a column to NULL == deleting it.
>  When using debugger, below is the Java stack trace on server side:
> {noformat}
>  ERROR o.apache.cassandra.transport.Message - Unexpected exception during 
> request; channel = [id: 0x6dbc33bd, /192.168.51.1:57723 => /192.168.51.1:9473]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:49) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:141)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:136)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.Lists$SetterByIndex.execute(Lists.java:362) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:94)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:666)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:606)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:413)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:401)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60-ea]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-3.1.1.jar:3.1.1]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
> {noformat}
> The root cause seems to be located at *org.apache.cassandra.cql3.Lists:362* :
> {code:java}
> CellPath elementPath = 
> existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
> if (value == null)
> {
> 

[jira] [Created] (CASSANDRA-10963) Can't join cluster java.lang.InterruptedException

2016-01-04 Thread Jack Money (JIRA)
Jack Money created CASSANDRA-10963:
--

 Summary: Can't join cluster java.lang.InterruptedException 
 Key: CASSANDRA-10963
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10963
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: [cqlsh 5.0.1 | Cassandra 2.2.4 | CQL spec 3.3.1 | Native 
protocol v4]
java version "1.8.0_65"
Reporter: Jack Money


Hello,

I have 2 nodes in 2 DCs.
Each node owns 100% of the data of keyspace hugespace.
The keyspace has 21 tables with 2 TB of data.
The biggest table has 1.6 TB of data.
The biggest SSTable is 1.3 TB.

Schema:
{noformat} 
CREATE KEYSPACE hugespace WITH replication = {'class': 'NetworkTopologyStrategy', 
'DC1': '3', 'DC2': '1'};

CREATE TABLE hugespace.content (
y int,
m int,
d int,
ts bigint,
ha text,
co text,
he text,
ids bigint,
ifr text,
js text,
PRIMARY KEY ((y, m, d), ts, ha)
) WITH CLUSTERING ORDER BY (ts ASC, ha ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX content_ids_idx ON hugespace.content (ids);
{noformat}

I tried to add one node to DC1 (target: 6 nodes in DC1).

Names:
Existing node in DC1 = nodeDC1
Existing node in DC2 = nodeDC2
New node joining DC1 = joiningDC1

joiningDC1
{noformat} 
INFO  [main] 2016-01-04 12:17:55,535 StorageService.java:1176 - JOINING: 
Starting to bootstrap...
INFO  [main] 2016-01-04 12:17:55,802 StreamResultFuture.java:86 - [Stream 
#2f473320-b2dd-11e5-8353-b5506ad414a4] Executing streaming plan for Bootstrap
INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 
StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Starting streaming to /nodeDC1
INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 
StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Starting streaming to /nodeDC2
DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 
ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Sending stream init for incoming stream
DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 
ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Sending stream init for incoming stream
DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,806 
ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Sending stream init for outgoing stream
DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,806 
ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
Sending stream init for outgoing stream
DEBUG [STREAM-OUT-/nodeDC1] 2016-01-04 12:17:55,810 ConnectionHandler.java:334 
- [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending Prepare (5 requests,  
0 files}
DEBUG [STREAM-OUT-/nodeDC2] 2016-01-04 12:17:55,810 ConnectionHandler.java:334 
- [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending Prepare (2 requests,  
0 files}
INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,810 
StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, 
ID#0] Beginning stream session with /nodeDC2
INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,810 
StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, 
ID#0] Beginning stream session with /nodeDC1
DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,821 ConnectionHandler.java:266 - 
[Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Received Prepare (0 requests,  1 
files}
INFO  [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,822 StreamResultFuture.java:168 
- [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4 ID#0] Prepare completed. 
Receiving 1 files(161 bytes), sending 0 files(0 bytes)
DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,828 
CompressedStreamReader.java:67 - reading file from /nodeDC2, repairedAt = 
1451483586917
DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,831 ConnectionHandler.java:266 - 
[Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Received File (Header (cfId: 
5bc52802-de25-35ed-aeab-188eecebb090, #0, version: la, format: BIG, estimated 
keys: 128, transfer size: 161, compressed?: true, repairedAt: 1451483586917, 
level: 0), file: 
/cassandra/data/system_auth/roles-5bc52802de2535edaeab188eecebb090/tmp-la-1-big-Data.db)
DEBUG [STREAM-OUT-/nodeDC2] 2016-01-04 12:17:55,831 ConnectionHandler.java:334 
- [Stream 

[jira] [Commented] (CASSANDRA-10880) Paging state between 2.2 and 3.0 are incompatible on protocol v4

2016-01-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081085#comment-15081085
 ] 

Sylvain Lebresne commented on CASSANDRA-10880:
--

After some offline discussions, there was some agreement on going with option 3 
above: simply document clearly that the protocol v3 should be used when 
migrating to 3.X. Unless someone has a strong objection to this or something 
better to suggest in the next day or so, I'll proceed by adding a clear mention 
in the {{NEWS}} file (including what you lose by sticking to the protocol v3, 
which as said above is not a whole lot) and sending a mail to the user list to 
grab attention on this issue.
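
For client code, pinning the version is a one-liner; a minimal sketch with the 
DataStax Java driver's standard builder API (the contact point is a 
placeholder):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

// pin the native protocol to v3 for the duration of a 2.2 -> 3.x rolling
// upgrade so paging states stay compatible between node versions
Cluster cluster = Cluster.builder()
                         .addContactPoint("127.0.0.1")
                         .withProtocolVersion(ProtocolVersion.V3)
                         .build();
{code}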

> Paging state between 2.2 and 3.0 are incompatible on protocol v4
> 
>
> Key: CASSANDRA-10880
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10880
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Priority: Critical
>  Labels: client-impacting
> Fix For: 3.x
>
>
> In CASSANDRA-10254, the paging states generated by 3.0 for the native 
> protocol v4 were made 3.0 specific. This was done because the paging state in 
> pre-3.0 versions contains a serialized cell name, but 3.0 doesn't talk in 
> term of cells internally (at least not the pre-3.0 ones) and so using an 
> old-format cell name when we only have 3.0 nodes is inefficient and inelegant.
> Unfortunately that change was made on the assumption that the protocol v4 was 
> 3.0-only, but it's not: it ended up being released with 2.2, and that 
> completely slipped my mind. So in practice, you can't properly have a mixed 
> 2.2/3.0 cluster if your driver is using the protocol v4.
> And unfortunately, I don't think there is an easy way to fix that without 
> breaking something. Concretely, I can see 3 choices:
> # we change 3.0 so that it generates old-format paging states on the v4 
> protocol. The 2 main downsides are that 1) this breaks 3.0 upgrades if the 
> driver is using the v4 protocol, and at least on the java side the only 
> driver versions that support 3.0 will use v4 by default and 2) we're signing 
> off on having sub-optimal paging state until the protocol v5 ships (probably 
> not too soon).
> # we remove the v4 protocol from 2.2. This means 2.2 will have to use v3 
> before upgrade at the risk of breaking upgrade. This is also bad, but I'm not 
> sure the driver versions using the v4 protocol are quite ready yet (at least 
> the java driver is not GA yet), so if we work with the drivers teams to make 
> sure the v3 protocol gets preferred by default on 2.2 in the GA versions of 
> these drivers, this might be somewhat transparent to users.
> # we don't change anything code-wise, but we document clearly that you can't 
> upgrade from 2.2 to 3.0 if your clients use protocol v4 (so we leave upgrade 
> broken if the v4 protocol is used as it is currently). This is not great, but 
> we can work with the drivers teams here again to make sure drivers prefer the 
> v3 version for 2.2 nodes so most people don't notice in practice.
> I think I'm leaning towards solution 3). It's not great but at least we break 
> no minor upgrades (neither on 2.2, nor on 3.0) which is probably the most 
> important. We'd basically be just adding a new condition on 2.2->3.0 
> upgrades.  We could additionally make 3.0 nodes completely refuse v4 
> connections if they know a 2.2 node is in the cluster, for extra safety.
> Ping [~omichallat], [~adutra] and [~aholmber] as you might want to be aware 
> of that ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10886) test_read_invalid_text dtest fails

2016-01-04 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081140#comment-15081140
 ] 

Stefania commented on CASSANDRA-10886:
--

It works fine locally on a dual boot laptop. I tried both trunk and the 
CASSANDRA-9303 branch (which will be committed soon). 

It seems fine on [CASSCI 3.0 
too|http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_invalid_text/history/].
 I noticed CASSCI for trunk on Win has not run since August.

[~mambocab] : can you confirm if you can still reproduce it and what the 
problem is?

> test_read_invalid_text dtest fails 
> ---
>
> Key: CASSANDRA-10886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10886
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Stefania
> Fix For: 3.0.x
>
>
> {{cqlsh_tests/cqlsh_copy_tests.py:CqlshCopyTest.test_read_invalid_text}} 
> seems to fail hard on Windows on trunk. These were only recently unskipped 
> when CASSANDRA-9302 was closed, so I don't know if they fail on other 
> branches, and I won't know until those jobs run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10817) DROP USER is not case-sensitive

2016-01-04 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081200#comment-15081200
 ] 

Sam Tunnicliffe commented on CASSANDRA-10817:
-

+1

> DROP USER is not case-sensitive
> ---
>
> Key: CASSANDRA-10817
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10817
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Mike Adamson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 2.2.x
>
>
> As per the summary {{DROP USER}} is not case sensitive, so:
> {noformat}
> CREATE USER 'Test';
> LIST USERS;
>  name  | super
> ---+---
>   Test | False
>  cassandra |  True
> DROP USER 'Test';
> InvalidRequest: code=2200 [Invalid query] message="test doesn't exist"
> {noformat}
> {{DROP ROLE}} is case-sensitive and will drop the above user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10967) bootstrap fail with “Streaming error occurred”

2016-01-04 Thread Terry Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Ma updated CASSANDRA-10967:
-
Attachment: system.log
debug.log

full debug logs and system logs

> bootstrap fail with “Streaming error occurred”
> --
>
> Key: CASSANDRA-10967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10967
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: os: ubuntu 14.04.2 LTS (ec2)
>Reporter: Terry Ma
> Attachments: debug.log, system.log
>
>
> I installed Cassandra 2.2.4 from the DataStax apt source.
> I have a Cassandra cluster in AWS EC2. The cluster has 7 nodes now, and I 
> want to add a new node to this cluster, but an error occurs during 
> bootstrap: the bootstrap cannot finish successfully.
> I got the following errors on the new node:
> {code}
> WARN  [STREAM-IN-/172.31.4.135] 2016-01-01 15:23:44,228 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80c
> 4-21d1c7c11a01] Retrying for following error
> java.lang.ArrayIndexOutOfBoundsException: null
> ERROR [Thread-571] 2016-01-01 15:23:44,228 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-571,5,ma
> in]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer
> .java:1220) ~[na:1.8.0_66]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStrea
> m.java:176) ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> {code}
> and I also got the following errors: 
> {code}
> WARN  [STREAM-IN-/172.31.20.223] 2016-01-01 15:37:30,941 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80
> c4-21d1c7c11a01] Retrying for following error
> java.lang.RuntimeException: Last written key 
> DecoratedKey(-8466153190082758358, 356a) >= current key
>  DecoratedKey(-9223372036854775808, ) writing into 
> /data_lvm/cassandra/data/feeds/inbox-eb873af0a19711e5ade0432b
> 31304f95/tmp-la-309-big-Data.db
> {code}
> I tried many things to get the bootstrap to work, like cleaning up all 
> nodes, scrubbing all nodes and repairing all nodes, but it still fails with 
> these errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10967) bootstrap fail with “Streaming error occurred”

2016-01-04 Thread Terry Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082536#comment-15082536
 ] 

Terry Ma edited comment on CASSANDRA-10967 at 1/5/16 7:01 AM:
--

These are the full debug logs and system logs.


was (Author: zjumty):
full debug logs and system logs

> bootstrap fail with “Streaming error occurred”
> --
>
> Key: CASSANDRA-10967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10967
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: os: ubuntu 14.04.2 LTS (ec2)
>Reporter: Terry Ma
> Attachments: debug.log, system.log
>
>
> I installed Cassandra 2.2.4 from the DataStax apt source.
> I have a Cassandra cluster in AWS EC2. The cluster has 7 nodes now, and I 
> want to add a new node to this cluster, but an error occurs during 
> bootstrap: the bootstrap cannot finish successfully.
> I got the following errors on the new node:
> {code}
> WARN  [STREAM-IN-/172.31.4.135] 2016-01-01 15:23:44,228 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80c
> 4-21d1c7c11a01] Retrying for following error
> java.lang.ArrayIndexOutOfBoundsException: null
> ERROR [Thread-571] 2016-01-01 15:23:44,228 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-571,5,ma
> in]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer
> .java:1220) ~[na:1.8.0_66]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStrea
> m.java:176) ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> {code}
> and I also got the following errors: 
> {code}
> WARN  [STREAM-IN-/172.31.20.223] 2016-01-01 15:37:30,941 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80
> c4-21d1c7c11a01] Retrying for following error
> java.lang.RuntimeException: Last written key 
> DecoratedKey(-8466153190082758358, 356a) >= current key
>  DecoratedKey(-9223372036854775808, ) writing into 
> /data_lvm/cassandra/data/feeds/inbox-eb873af0a19711e5ade0432b
> 31304f95/tmp-la-309-big-Data.db
> {code}
> I tried many things to get the bootstrap to work, like cleaning up all 
> nodes, scrubbing all nodes and repairing all nodes, but it still fails with 
> these errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10838) print 3.0 statistics in sstablemetadata command output

2016-01-04 Thread Shogo Hoshii (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shogo Hoshii updated CASSANDRA-10838:
-
Attachment: CASSANDRA-10838.txt

added an array length check

> print 3.0 statistics in sstablemetadata command output
> --
>
> Key: CASSANDRA-10838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10838
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Shogo Hoshii
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-10838.txt, CASSANDRA-10838.txt, 
> sample_result.txt
>
>
> In CASSANDRA-7159, some statistics were added in 2.1.x, and in version 3.0, 
> we can print additional statistics.
> So I would like to print them in sstablemetadata output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-04 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-10887:
--
Attachment: (was: CASSANDRA_10887_v3.diff)

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_v2.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is that node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10967) bootstrap fail with “Streaming error occurred”

2016-01-04 Thread Terry Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082542#comment-15082542
 ] 

Terry Ma commented on CASSANDRA-10967:
--

nodetool status feeds
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UN  172.31.20.223  665.85 GB  256  28.8% 
e6c800d7-15ab-4b4b-8c0f-43a10df02fee  rack4
UN  172.31.15.14   527.74 GB  256  28.8% 
06a65c50-7a37-40df-9e05-7c84bc751e20  rack5
UN  172.31.8.188   436.05 GB  256  26.4% 
b7dbfb00-6fe0-41b7-ae48-b38ba9fa95f9  rack1
UN  172.31.8.187   637.19 GB  256  28.5% 
1666ea2e-fef1-42ef-9a66-de4d3c6e2704  rack2
UN  172.31.16.232  497.47 GB  256  29.2% 
ad8ece0d-c2d9-4618-8c2f-e13e12ac110b  rack6
UN  172.31.4.135   496.75 GB  256  28.7% 
3c74e65f-93fd-4405-99b1-a5b9453d554c  rack7
UJ  172.31.4.134   39.98 GB   256  ? 
015f2a9a-a8d7-4c34-8936-184f35b56af6  rack8
UN  172.31.20.224  524.46 GB  256  29.6% 
44743768-fc48-45ac-a0cd-6c9027e33afc  rack3

nodetool netstats
Mode: JOINING
Bootstrap 366d1720-b376-11e5-9aac-bbecd825311e
/172.31.8.188
Receiving 50 files, 48498301504 bytes total. Already received 0 files, 
0 bytes total
/172.31.20.223
Receiving 56 files, 80283672656 bytes total. Already received 0 files, 
0 bytes total
/172.31.20.224
Receiving 49 files, 88426202890 bytes total. Already received 0 files, 
0 bytes total
/172.31.16.232
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool NameActive   Pending  Completed
Large messages  n/a 0  0
Small messages  n/a 0  138949147
Gossip messages n/a 0 296937

> bootstrap fail with “Streaming error occurred”
> --
>
> Key: CASSANDRA-10967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10967
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: os: ubuntu 14.04.2 LTS (ec2)
>Reporter: Terry Ma
> Attachments: debug.log, system.log
>
>
> I installed Cassandra 2.2.4 from the DataStax apt source.
> I have a Cassandra cluster in AWS EC2. The cluster has 7 nodes now, and I 
> want to add a new node to this cluster, but an error occurs during 
> bootstrap: the bootstrap cannot finish successfully.
> I got the following errors on the new node:
> {code}
> WARN  [STREAM-IN-/172.31.4.135] 2016-01-01 15:23:44,228 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80c
> 4-21d1c7c11a01] Retrying for following error
> java.lang.ArrayIndexOutOfBoundsException: null
> ERROR [Thread-571] 2016-01-01 15:23:44,228 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-571,5,ma
> in]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer
> .java:1220) ~[na:1.8.0_66]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStrea
> m.java:176) ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:
> 2.2.4]
> {code}
> and I also got the following errors: 
> {code}
> WARN  [STREAM-IN-/172.31.20.223] 2016-01-01 15:37:30,941 
> StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80
> c4-21d1c7c11a01] Retrying for following error
> java.lang.RuntimeException: Last written key 
> DecoratedKey(-8466153190082758358, 356a) >= current key
>  DecoratedKey(-9223372036854775808, ) writing into 
> /data_lvm/cassandra/data/feeds/inbox-eb873af0a19711e5ade0432b
> 31304f95/tmp-la-309-big-Data.db
> {code}
> I tried many things to get the bootstrap to work, like cleaning up all 
> nodes, scrubbing all nodes and repairing all nodes, but it still fails with 
> these errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10967) bootstrap fail with “Streaming error occurred”

2016-01-04 Thread Terry Ma (JIRA)
Terry Ma created CASSANDRA-10967:


 Summary: bootstrap fail with “Streaming error occurred”
 Key: CASSANDRA-10967
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10967
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: os: ubuntu 14.04.2 LTS (ec2)

Reporter: Terry Ma


I installed Cassandra 2.2.4 from the DataStax apt source.
I have a Cassandra cluster in AWS EC2. The cluster has 7 nodes now, and I want 
to add a new node to this cluster, but an error occurs during bootstrap: the 
bootstrap cannot finish successfully.

I got the following errors on the new node:
{code}
WARN  [STREAM-IN-/172.31.4.135] 2016-01-01 15:23:44,228 StreamSession.java:644 
- [Stream #b1cc3600-b054-11e5-80c
4-21d1c7c11a01] Retrying for following error
java.lang.ArrayIndexOutOfBoundsException: null
ERROR [Thread-571] 2016-01-01 15:23:44,228 CassandraDaemon.java:185 - Exception 
in thread Thread[Thread-571,5,ma
in]
java.lang.RuntimeException: java.lang.InterruptedException
at com.google.common.base.Throwables.propagate(Throwables.java:160) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
~[apache-cassandra-2.2.4.jar:
2.2.4]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
Caused by: java.lang.InterruptedException: null
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer
.java:1220) ~[na:1.8.0_66]
at 
java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
 ~[na:1.8.0_66]
at 
java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
~[na:1.8.0_66]
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStrea
m.java:176) ~[apache-cassandra-2.2.4.jar:2.2.4]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.2.4.jar:
2.2.4]
{code}

and I also got the following errors: 

{code}
WARN  [STREAM-IN-/172.31.20.223] 2016-01-01 15:37:30,941 StreamSession.java:644 
- [Stream #b1cc3600-b054-11e5-80
c4-21d1c7c11a01] Retrying for following error
java.lang.RuntimeException: Last written key DecoratedKey(-8466153190082758358, 
356a) >= current key
 DecoratedKey(-9223372036854775808, ) writing into 
/data_lvm/cassandra/data/feeds/inbox-eb873af0a19711e5ade0432b
31304f95/tmp-la-309-big-Data.db
{code}

I tried many things to get the bootstrap to work, like cleaning up all nodes, 
scrubbing all nodes and repairing all nodes, but it still fails with these 
errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10967) bootstrap fail with “Streaming error occurred”

2016-01-04 Thread Terry Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082531#comment-15082531
 ] 

Terry Ma commented on CASSANDRA-10967:
--

nodetool bootstrap resume got some errors.

> bootstrap fail with “Streaming error occurred”
> --
>
> Key: CASSANDRA-10967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10967
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: os: ubuntu 14.04.2 LTS (ec2)
>Reporter: Terry Ma
>
> I installed Cassandra 2.2.4 from the DataStax apt source.
> I have a Cassandra cluster in AWS EC2. The cluster has 7 nodes now, and I 
> want to add a new node to this cluster, but an error occurs during 
> bootstrap: the bootstrap cannot finish successfully.
> I got the following errors on the new node:
> {code}
> WARN  [STREAM-IN-/172.31.4.135] 2016-01-01 15:23:44,228 StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80c4-21d1c7c11a01] Retrying for following error
> java.lang.ArrayIndexOutOfBoundsException: null
> ERROR [Thread-571] 2016-01-01 15:23:44,228 CassandraDaemon.java:185 - Exception in thread Thread[Thread-571,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.jar:na]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
> Caused by: java.lang.InterruptedException: null
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) ~[na:1.8.0_66]
> at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) ~[na:1.8.0_66]
> at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) ~[na:1.8.0_66]
> at org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176) ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.4.jar:2.2.4]
> {code}
> and I also got the following errors: 
> {code}
> WARN  [STREAM-IN-/172.31.20.223] 2016-01-01 15:37:30,941 StreamSession.java:644 - [Stream #b1cc3600-b054-11e5-80c4-21d1c7c11a01] Retrying for following error
> java.lang.RuntimeException: Last written key DecoratedKey(-8466153190082758358, 356a) >= current key DecoratedKey(-9223372036854775808, ) writing into /data_lvm/cassandra/data/feeds/inbox-eb873af0a19711e5ade0432b31304f95/tmp-la-309-big-Data.db
> {code}
> I tried many things to get the bootstrap to work, like cleaning up, scrubbing, 
> and repairing all nodes, but it still fails with these errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10838) print 3.0 statistics in sstablemetadata command output

2016-01-04 Thread Shogo Hoshii (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082547#comment-15082547
 ] 

Shogo Hoshii commented on CASSANDRA-10838:
--

Hello Yuki,

Thank you for verifying the patch.
I added an array length check before displaying minClusteringValues.
Could you check the attachment?
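
For readers following along, a stand-alone sketch of the kind of length guard described (illustrative names only, not the actual patch):

{code}
// Minimal sketch (not the actual patch): guard the loop with the array's
// real length so a shorter minClusteringValues array cannot cause an
// ArrayIndexOutOfBoundsException when printed next to the clustering names.
public class MinClusteringValuesPrinter
{
    public static void print(String[] clusteringNames, Object[] minClusteringValues)
    {
        // Iterate only over indexes that exist in BOTH arrays.
        int n = Math.min(clusteringNames.length, minClusteringValues.length);
        for (int i = 0; i < n; i++)
            System.out.println(clusteringNames[i] + ": " + minClusteringValues[i]);
    }

    public static void main(String[] args)
    {
        print(new String[]{ "ck1", "ck2" }, new Object[]{ "a" }); // prints only ck1
    }
}
{code}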

> print 3.0 statistics in sstablemetadata command output
> --
>
> Key: CASSANDRA-10838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10838
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Shogo Hoshii
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-10838.txt, CASSANDRA-10838.txt, 
> sample_result.txt
>
>
> In CASSANDRA-7159, some statistics were added in 2.1.x, and in version 3.0, 
> we can print additional statistics.
> So I would like to print them in sstablemetadata output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-04 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-10887:
--
Attachment: CASSANDRA_10887_v3.diff

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_v2.diff, 
> CASSANDRA_10887_v3.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.
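
To see why this behaves like CL.ALL: the write handler blocks for quorum(RF) plus one ack per pending endpoint, so with RF=3 a single wrongly-pending replica raises blockFor from 2 to 3. A tiny sketch of that arithmetic (illustrative only, not the actual code):

{code}
// Illustrative arithmetic only: how a spurious pending endpoint inflates
// the number of acks a QUORUM write must wait for.
public class BlockForDemo
{
    static int quorum(int rf) { return rf / 2 + 1; }

    static int blockFor(int rf, int pendingEndpoints)
    {
        return quorum(rf) + pendingEndpoints;
    }

    public static void main(String[] args)
    {
        System.out.println(blockFor(3, 0)); // 2: normal QUORUM with RF=3
        System.out.println(blockFor(3, 1)); // 3: effectively CL.ALL, as described
    }
}
{code}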



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaost updated CASSANDRA-10961:
---
Attachment: netstats.1.log
debug.1.log

reproduced. [~pauloricardomg]

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: debug.1.log, debug.log, jstack.log, netstats.1.log, 
> netstats.log
>
>
> We run into the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> ... 1 common 

[jira] [Updated] (CASSANDRA-9472) Reintroduce off heap memtables

2016-01-04 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-9472:

Fix Version/s: (was: 3.2)
   3.x

> Reintroduce off heap memtables
> --
>
> Key: CASSANDRA-9472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9472
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Stefania
> Fix For: 3.x
>
>
> CASSANDRA-8099 removes off heap memtables. We should reintroduce them ASAP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9977) Support counter-columns for native aggregates (sum,avg,max,min)

2016-01-04 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081215#comment-15081215
 ] 

Benjamin Lerer commented on CASSANDRA-9977:
---

LGTM

> Support counter-columns for native aggregates (sum,avg,max,min)
> ---
>
> Key: CASSANDRA-9977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9977
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Noam Liran
>Assignee: Robert Stupp
> Fix For: 2.2.5, 3.0.3
>
>
> When trying to SUM a column of type COUNTER, this error is returned:
> {noformat}
> InvalidRequest: code=2200 [Invalid query] message="Invalid call to function 
> sum, none of its type signatures match (known type signatures: system.sum : 
> (tinyint) -> tinyint, system.sum : (smallint) -> smallint, system.sum : (int) 
> -> int, system.sum : (bigint) -> bigint, system.sum : (float) -> float, 
> system.sum : (double) -> double, system.sum : (decimal) -> decimal, 
> system.sum : (varint) -> varint)"
> {noformat}
> This might be relevant for other agg. functions.
> CQL for reproduction:
> {noformat}
> CREATE TABLE test (
> key INT,
> ctr COUNTER,
> PRIMARY KEY (
> key
> )
> );
> UPDATE test SET ctr = ctr + 1 WHERE key = 1;
> SELECT SUM(ctr) FROM test;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-01-04 Thread snazy
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d0e20364
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d0e20364
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d0e20364

Branch: refs/heads/trunk
Commit: d0e20364529b8c037bc14d95f477d8bf6caf2ff2
Parents: 6b7db8a e4eabd9
Author: Robert Stupp 
Authored: Mon Jan 4 16:34:43 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 16:34:43 2016 +0100

--
 .../cassandra/cql3/functions/AggregateFcts.java | 230 ++-
 .../cql3/validation/entities/UFTest.java|  26 +++
 .../validation/operations/AggregationTest.java  |  41 
 3 files changed, 239 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d0e20364/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d0e20364/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
--



cassandra git commit: Support counter-columns for native aggregates (sum, avg, max, min)

2016-01-04 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 4d0f1405c -> e4eabd901


Support counter-columns for native aggregates (sum,avg,max,min)

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-9977


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4eabd90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4eabd90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4eabd90

Branch: refs/heads/cassandra-3.0
Commit: e4eabd901522742550074d5c3c5f25b642037891
Parents: 4d0f140
Author: Robert Stupp 
Authored: Mon Jan 4 16:34:27 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 16:34:27 2016 +0100

--
 .../cassandra/cql3/functions/AggregateFcts.java | 230 ++-
 .../cql3/validation/entities/UFTest.java|  26 +++
 .../validation/operations/AggregationTest.java  |  41 
 3 files changed, 239 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4eabd90/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 7b5bdb8..a1b67e1 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -47,6 +47,7 @@ public abstract class AggregateFcts
 functions.add(sumFunctionForDouble);
 functions.add(sumFunctionForDecimal);
 functions.add(sumFunctionForVarint);
+functions.add(sumFunctionForCounter);
 
 // avg for primitives
 functions.add(avgFunctionForByte);
@@ -57,6 +58,7 @@ public abstract class AggregateFcts
 functions.add(avgFunctionForDouble);
 functions.add(avgFunctionForDecimal);
 functions.add(avgFunctionForVarint);
+functions.add(avgFunctionForCounter);
 
 // count, max, and min for all standard types
 for (CQL3Type type : CQL3Type.Native.values())
@@ -64,8 +66,16 @@ public abstract class AggregateFcts
 if (type != CQL3Type.Native.VARCHAR) // varchar and text both mapping to UTF8Type
 {
 functions.add(AggregateFcts.makeCountFunction(type.getType()));
-functions.add(AggregateFcts.makeMaxFunction(type.getType()));
-functions.add(AggregateFcts.makeMinFunction(type.getType()));
+if (type != CQL3Type.Native.COUNTER)
+{
+functions.add(AggregateFcts.makeMaxFunction(type.getType()));
+functions.add(AggregateFcts.makeMinFunction(type.getType()));
+}
+else
+{
+functions.add(AggregateFcts.maxFunctionForCounter);
+functions.add(AggregateFcts.minFunctionForCounter);
+}
 }
 }
 
@@ -515,31 +525,7 @@ public abstract class AggregateFcts
 {
 public Aggregate newAggregate()
 {
-return new Aggregate()
-{
-private long sum;
-
-public void reset()
-{
-sum = 0;
-}
-
-public ByteBuffer compute(int protocolVersion)
-{
-return ((LongType) returnType()).decompose(sum);
-}
-
-public void addInput(int protocolVersion, List<ByteBuffer> values)
-{
-ByteBuffer value = values.get(0);
-
-if (value == null)
-return;
-
-Number number = ((Number) argTypes().get(0).compose(value));
-sum += number.longValue();
-}
-};
+return new LongSumAggregate();
 }
 };
 
@@ -551,37 +537,7 @@ public abstract class AggregateFcts
 {
 public Aggregate newAggregate()
 {
-return new Aggregate()
-{
-private long sum;
-
-private int count;
-
-public void reset()
-{
-count = 0;
-sum = 0;
-}
-
-public ByteBuffer compute(int protocolVersion)
-{
-long avg = 

[1/2] cassandra git commit: Support counter-columns for native aggregates (sum, avg, max, min)

2016-01-04 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6b7db8a53 -> d0e203645


Support counter-columns for native aggregates (sum,avg,max,min)

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-9977


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4eabd90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4eabd90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4eabd90

Branch: refs/heads/trunk
Commit: e4eabd901522742550074d5c3c5f25b642037891
Parents: 4d0f140
Author: Robert Stupp 
Authored: Mon Jan 4 16:34:27 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 16:34:27 2016 +0100

--
 .../cassandra/cql3/functions/AggregateFcts.java | 230 ++-
 .../cql3/validation/entities/UFTest.java|  26 +++
 .../validation/operations/AggregationTest.java  |  41 
 3 files changed, 239 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4eabd90/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 7b5bdb8..a1b67e1 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -47,6 +47,7 @@ public abstract class AggregateFcts
 functions.add(sumFunctionForDouble);
 functions.add(sumFunctionForDecimal);
 functions.add(sumFunctionForVarint);
+functions.add(sumFunctionForCounter);
 
 // avg for primitives
 functions.add(avgFunctionForByte);
@@ -57,6 +58,7 @@ public abstract class AggregateFcts
 functions.add(avgFunctionForDouble);
 functions.add(avgFunctionForDecimal);
 functions.add(avgFunctionForVarint);
+functions.add(avgFunctionForCounter);
 
 // count, max, and min for all standard types
 for (CQL3Type type : CQL3Type.Native.values())
@@ -64,8 +66,16 @@ public abstract class AggregateFcts
 if (type != CQL3Type.Native.VARCHAR) // varchar and text both mapping to UTF8Type
 {
 functions.add(AggregateFcts.makeCountFunction(type.getType()));
-functions.add(AggregateFcts.makeMaxFunction(type.getType()));
-functions.add(AggregateFcts.makeMinFunction(type.getType()));
+if (type != CQL3Type.Native.COUNTER)
+{
+functions.add(AggregateFcts.makeMaxFunction(type.getType()));
+functions.add(AggregateFcts.makeMinFunction(type.getType()));
+}
+else
+{
+functions.add(AggregateFcts.maxFunctionForCounter);
+functions.add(AggregateFcts.minFunctionForCounter);
+}
 }
 }
 
@@ -515,31 +525,7 @@ public abstract class AggregateFcts
 {
 public Aggregate newAggregate()
 {
-return new Aggregate()
-{
-private long sum;
-
-public void reset()
-{
-sum = 0;
-}
-
-public ByteBuffer compute(int protocolVersion)
-{
-return ((LongType) returnType()).decompose(sum);
-}
-
-public void addInput(int protocolVersion, List<ByteBuffer> values)
-{
-ByteBuffer value = values.get(0);
-
-if (value == null)
-return;
-
-Number number = ((Number) argTypes().get(0).compose(value));
-sum += number.longValue();
-}
-};
+return new LongSumAggregate();
 }
 };
 
@@ -551,37 +537,7 @@ public abstract class AggregateFcts
 {
 public Aggregate newAggregate()
 {
-return new Aggregate()
-{
-private long sum;
-
-private int count;
-
-public void reset()
-{
-count = 0;
-sum = 0;
-}
-
-public ByteBuffer compute(int protocolVersion)
-{
-long avg = count == 0 ? 0 

[jira] [Commented] (CASSANDRA-9258) Range movement causes CPU & performance impact

2016-01-04 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081211#comment-15081211
 ] 

Branimir Lambov commented on CASSANDRA-9258:


Sorry for not spotting this in the previous round: the {{Blackhole}} is used to 
swallow results so that the compiler does not optimize away the thing you are 
trying to benchmark. {{setUp}} does not need it as an argument, but both test 
methods should take it and pass the result to {{bh.consume}}. I don't think 
this is affecting the results at the moment, but it could with future JVM 
versions, so it should be corrected.
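
A minimal JMH sketch of the suggested shape (class and method names are hypothetical, not the actual benchmark):

{code}
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

// Sketch only: JMH injects the Blackhole into @Benchmark methods that
// declare it as a parameter. Consuming the result prevents dead-code
// elimination of the computation being measured.
public class PendingRangesBench
{
    @Benchmark
    public void searchToken(Blackhole bh)
    {
        bh.consume(compute()); // the result must reach the Blackhole
    }

    private long compute()
    {
        long x = 0;
        for (int i = 0; i < 1000; i++)
            x += i;
        return x;
    }
}
{code}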

Otherwise, LGTM. 
[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-dikang85-9258-testall/]
 and 
[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-dikang85-9258-dtest/]
 look good as well.


> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: 0001-pending-ranges-map.patch, 
> 0001-pending-ranges-maps-for-2.2.patch, Screenshot 2015-12-16 16.11.36.png, 
> Screenshot 2015-12-16 16.11.51.png
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081315#comment-15081315
 ] 

xiaost edited comment on CASSANDRA-10961 at 1/4/16 4:54 PM:


I'm trying to fix the deadlock issue with this patch: 
[^0001-Fix-streaming-deadlock-when-interrupted.patch]
I don't know whether it works yet. 
It is running, and I will find out. :-)


was (Author: xiaost):
I'm trying to fix the deadlock issue with this patch. 
I don't know whether it works yet. 
It is running, and I will find out. :-)

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: 0001-Fix-streaming-deadlock-when-interrupted.patch, 
> debug.1.log, netstats.1.log
>
>
> We run into the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> 

[jira] [Updated] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaost updated CASSANDRA-10961:
---
Attachment: 0001-Fix-streaming-deadlock-when-interrupted.patch

I'm trying to fix the deadlock issue with this patch. 
I don't know whether it works yet. 
It is running, and I will find out. :-)
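
For context, the pattern behind the InterruptedException in the quoted stack trace below (an illustrative stand-alone sketch, not Cassandra code): a producer blocked in {{ArrayBlockingQueue.put}} stays blocked once its consumer stops draining, until an interrupt frees it.

{code}
import java.util.concurrent.ArrayBlockingQueue;

// Illustrative sketch of the blocked-producer pattern in the stack trace:
// with no consumer draining the bounded queue, put() blocks until the
// thread is interrupted, which surfaces as InterruptedException.
public class BlockedProducerDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        ArrayBlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1);
        queue.put(new byte[16]); // fill the queue; nobody is consuming

        Thread producer = new Thread(() -> {
            try
            {
                queue.put(new byte[16]); // blocks indefinitely...
            }
            catch (InterruptedException e)
            {
                System.out.println("producer interrupted while blocked in put()");
            }
        });
        producer.start();

        Thread.sleep(200);     // let the producer block
        producer.interrupt();  // ...until interrupted, as seen in the logs
        producer.join();
    }
}
{code}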

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: 0001-Fix-streaming-deadlock-when-interrupted.patch, 
> debug.1.log, debug.log, jstack.log, netstats.1.log, netstats.log
>
>
> We run into the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  

[jira] [Comment Edited] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081243#comment-15081243
 ] 

xiaost edited comment on CASSANDRA-10961 at 1/4/16 4:55 PM:


reproduced. [~pauloricardomg] [^debug.1.log] [^netstats.1.log]


was (Author: xiaost):
reproduced. [~pauloricardomg]

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: 0001-Fix-streaming-deadlock-when-interrupted.patch, 
> debug.1.log, netstats.1.log
>
>
> We run into the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> 

[jira] [Updated] (CASSANDRA-10964) Startup errors in Docker containers depending on memtable allocation type

2016-01-04 Thread Jacek Furmankiewicz (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Furmankiewicz updated CASSANDRA-10964:

Summary: Startup errors in Docker containers depending on memtable 
allocation type  (was: Starup errors in Docker containers depending on memtable 
allocation type)

> Startup errors in Docker containers depending on memtable allocation type
> -
>
> Key: CASSANDRA-10964
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10964
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Docker, Debian Testing, 3.0.1
>Reporter: Jacek Furmankiewicz
>
> We are creating Docker containers for various versions of Cassandra. All are 
> based on Debian, Oracle JDK 1.8 and the Cassandra versions are installed 
> directly from the DataStax Debian repos via apt-get.
> We noticed that with 3.0.1 (only that version; 2.1.11 and 2.2.4 always work 
> fine) the Cassandra process fails to start up randomly (about 50% of the 
> time) with the following error:
> {noformat}
> Caused by: java.lang.RuntimeException: 
> system_distributed:parent_repair_history not found in the schema definitions 
> keyspace.
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:940)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:931)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:894)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesOnly(SchemaKeyspace.java:886)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1276)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1255)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.service.MigrationManager$1.runMayThrow(MigrationManager.java:531)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {noformat}
> We started playing with different configuration parameters and, by trial and 
> error, figured out that it seems to be related to this configuration parameter:
> {noformat}
> memtable_allocation_type: offheap_buffers
> {noformat}
> If we set it to offheap_buffers, this error occurs about 50% of the time 
> (when starting on a new clean filesystem).
> If we set it to heap_buffers, it always works, 100% of the time, never an 
> issue. 
> Attaching full stack output to help debug:
> {noformat}
> INFO  16:11:44 Configuration location: 
> file:/etc/cassandra/cassandra.yaml
> INFO  16:11:44 Node 
> configuration:[authenticator=PasswordAuthenticator; 
> authorizer=CassandraAuthorizer; auto_snapshot=true; 
> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
> batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; 
> client_encryption_options=; cluster_name=TEST_CLUSTER; 
> column_index_size_in_kb=64; commit_failure_policy=stop; 
> commitlog_directory=/var/lib/cassandra/commitlog2; 
> commitlog_segment_size_in_mb=32; commitlog_sync=periodic; 
> commitlog_sync_period_in_ms=1; 
> compaction_large_partition_warning_threshold_mb=100; 
> compaction_throughput_mb_per_sec=16; concurrent_counter_writes=12; 
> concurrent_materialized_view_writes=7; concurrent_reads=64; 
> concurrent_writes=10; counter_cache_save_period=17200; 
> counter_cache_size_in_mb=1027; counter_write_request_timeout_in_ms=5000; 
> cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data2]; 
> disk_failure_policy=stop; disk_optimization_strategy=spinning; 
> dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; 
> enable_scripted_user_defined_functions=false; 
> enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; 
> gc_warn_threshold_in_ms=1000; 

[jira] [Updated] (CASSANDRA-10957) Verify disk is readable on FileNotFound Exceptions

2016-01-04 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10957:

Priority: Minor  (was: Major)

> Verify disk is readable on FileNotFound Exceptions
> --
>
> Key: CASSANDRA-10957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10957
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Priority: Minor
>
> In JVMStabilityInspector we only mark ourselves unstable when we get some 
> special messages in file not found exceptions.
> {code}
> // Check for file handle exhaustion
> if (t instanceof FileNotFoundException || t instanceof 
> SocketException)
> if (t.getMessage().contains("Too many open files"))
> isUnstable = true;
> {code}
> It seems like the OS might also have the same issue of too many open files 
> but will instead return "No such file or directory".
> It might make more sense, when we check this exception type, to try to read a 
> known-to-exist file to verify the disk is readable.
> This would mean creating a hidden file on startup on each data disk? Other 
> ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10928) SSTableExportTest.testExportColumnsWithMetadata randomly fails

2016-01-04 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081418#comment-15081418
 ] 

Brandon Williams commented on CASSANDRA-10928:
--

[~philipthompson] is cassci seeing this?

> SSTableExportTest.testExportColumnsWithMetadata randomly fails
> --
>
> Key: CASSANDRA-10928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10928
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>
> The SSTableExportTest.testExportColumnsWithMetadata test will randomly fail 
> (bogusly). Currently, the string check used won’t work if the JSON generated 
> happened to order the elements in the array differently.
> {code}
> assertEquals(
> "unexpected serialization format for topLevelDeletion",
> "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
> serializedDeletionInfo.toJSONString());
> {code}
> {noformat}
> [junit] Testcase: 
> testExportColumnsWithMetadata(org.apache.cassandra.tools.SSTableExportTest):  
>   FAILED
> [junit] unexpected serialization format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit] junit.framework.AssertionFailedError: unexpected serialization 
> format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit]   at 
> org.apache.cassandra.tools.SSTableExportTest.testExportColumnsWithMetadata(SSTableExportTest.java:299)
> [junit]
> {noformat}
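
One way to make the assertion order-insensitive is to compare parsed JSON rather than raw strings. A stand-alone sketch (assuming json-simple, which the {{toJSONString()}} call suggests is already on the test classpath; names are illustrative):

{code}
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

// Sketch: compare parsed JSON objects instead of raw strings, so key
// order no longer matters. Assumes json-simple; not the actual test code.
public class JsonEquality
{
    public static boolean sameJson(String expected, String actual) throws ParseException
    {
        JSONParser parser = new JSONParser();
        // json-simple's JSONObject extends HashMap, so equals() ignores key order.
        return parser.parse(expected).equals(parser.parse(actual));
    }

    public static void main(String[] args) throws ParseException
    {
        System.out.println(sameJson(
            "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
            "{\"localDeletionTime\":0,\"markedForDeleteAt\":0}")); // true
    }
}
{code}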



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2016-01-04 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081391#comment-15081391
 ] 

Yuki Morishita commented on CASSANDRA-6696:
---

[~carlyeks] that test should have been fixed in CASSANDRA-10950.

> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, 
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. This is also true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is older than gc grace, it got compacted away in nodes A 
> and B together with the actual data. So there is no trace of this row column 
> in nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10957) Verify disk is readable on FileNotFound Exceptions

2016-01-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081406#comment-15081406
 ] 

Joshua McKenzie commented on CASSANDRA-10957:
-

An alternative (to make it more cross-platform friendly) might be to attempt to 
write a file to temp and read it to confirm the disk is working when we hit 
this path in the stability inspector.
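
One possible shape for such a probe (a stand-alone sketch under the assumptions above, not committed code):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a cross-platform disk health probe: write a small temp file,
// read it back, and treat any IOException as a sign the disk is unusable.
public final class DiskProbe
{
    public static boolean diskIsHealthy(Path directory)
    {
        try
        {
            Path probe = Files.createTempFile(directory, "disk-probe", ".tmp");
            Files.write(probe, "ok".getBytes(StandardCharsets.UTF_8));
            byte[] back = Files.readAllBytes(probe);
            Files.deleteIfExists(probe);
            return back.length == 2;
        }
        catch (IOException e)
        {
            return false; // disk (or its file table) is not usable
        }
    }
}
{code}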

> Verify disk is readable on FileNotFound Exceptions
> --
>
> Key: CASSANDRA-10957
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10957
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Priority: Minor
>
> In JVMStabilityInspector we only mark ourselves unstable when we get some 
> special messages in file not found exceptions.
> {code}
> // Check for file handle exhaustion
> if (t instanceof FileNotFoundException || t instanceof 
> SocketException)
> if (t.getMessage().contains("Too many open files"))
> isUnstable = true;
> {code}
> It seems like the OS might also have the same issue of too many open files 
> but will instead return "No such file or directory".
> It might make more sense, when we check this exception type, to try to read a 
> known-to-exist file to verify the disk is readable.
> This would mean creating a hidden file on startup on each data disk? Other 
> ideas?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2016-01-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081411#comment-15081411
 ] 

Carl Yeksigian commented on CASSANDRA-6696:
---

[~yukim] Ah, thanks; I'm still catching up on the changes :)

> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, 
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. This is also true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is older than gc grace, it got compacted away in nodes A 
> and B together with the actual data. So there is no trace of this row column 
> in nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and was replaced with new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Replace trivial uses of String.replace/replaceAll/split with StringUtils methods

2016-01-04 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 869bdabf4 -> 01d26dd3f


Replace trivial uses of String.replace/replaceAll/split with StringUtils methods

patch by Alexander Shopov; reviewed by Robert Stupp for CASSANDRA-8755


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/01d26dd3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/01d26dd3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/01d26dd3

Branch: refs/heads/trunk
Commit: 01d26dd3fc35a6b22a538f75545b0d9b739ee48d
Parents: 869bdab
Author: Alexander Shopov 
Authored: Mon Jan 4 22:33:44 2016 +0100
Committer: Robert Stupp 
Committed: Mon Jan 4 22:33:44 2016 +0100

--
 .../org/apache/cassandra/config/CFMetaData.java |  9 ++-
 .../cassandra/config/DatabaseDescriptor.java|  6 +-
 .../apache/cassandra/cql3/ColumnIdentifier.java |  5 +-
 .../statements/CreateKeyspaceStatement.java |  5 +-
 .../cql3/statements/CreateTableStatement.java   |  6 +-
 .../cql3/statements/PropertyDefinitions.java|  5 +-
 .../db/commitlog/CommitLogArchiver.java | 18 --
 .../db/commitlog/CommitLogReplayer.java |  2 +-
 .../db/marshal/AbstractCompositeType.java   |  9 ++-
 .../apache/cassandra/db/marshal/TupleType.java  | 18 +-
 .../index/internal/CassandraIndex.java  |  7 ++-
 .../metrics/CassandraMetricsRegistry.java   | 16 -
 .../apache/cassandra/schema/IndexMetadata.java  | 11 +++-
 .../cassandra/service/StorageService.java   |  4 +-
 .../cassandra/utils/CassandraVersion.java   |  6 +-
 .../apache/cassandra/config/CFMetaDataTest.java | 20 +++
 .../config/DatabaseDescriptorTest.java  | 15 +
 .../cassandra/cql3/ColumnIdentifierTest.java| 19 ++
 .../statements/PropertyDefinitionsTest.java | 61 
 .../db/marshal/AbstractCompositeTypeTest.java   | 35 +++
 .../metrics/CassandraMetricsRegistryTest.java   | 34 +++
 .../cassandra/schema/IndexMetadataTest.java | 36 
 .../cassandra/utils/CassandraVersionTest.java   | 51 +++-
 23 files changed, 363 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/01d26dd3/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 128255c..ffa55c3 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -25,6 +25,7 @@ import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
 import java.util.stream.Collectors;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -69,6 +70,8 @@ public final class CFMetaData
 SUPER, COUNTER, DENSE, COMPOUND
 }
 
+private static final Pattern PATTERN_WORD_CHARS = Pattern.compile("\\w+");
+
 private static final Logger logger = LoggerFactory.getLogger(CFMetaData.class);
 
 public static final Serializer serializer = new Serializer();
@@ -830,9 +833,9 @@ public final class CFMetaData
 return columnMetadata.get(name);
 }
 
-public static boolean isNameValid(String name)
-{
-return name != null && !name.isEmpty() && name.length() <= Schema.NAME_LENGTH && name.matches("\\w+");
+public static boolean isNameValid(String name) {
+return name != null && !name.isEmpty()
+&& name.length() <= Schema.NAME_LENGTH && PATTERN_WORD_CHARS.matcher(name).matches();
 }
 
 public CFMetaData validate() throws ConfigurationException
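
For context on the hunk above: {{String.matches}} compiles its regex on every call, while the new static {{Pattern}} is compiled once and reused. An illustrative stand-alone sketch of that difference (not part of the commit):

{code}
import java.util.regex.Pattern;

// Illustrative only: contrast per-call regex compilation with a
// precompiled static Pattern, as done in the hunk above.
public class PatternPrecompile
{
    private static final Pattern WORD_CHARS = Pattern.compile("\\w+");

    static boolean slow(String name)
    {
        return name.matches("\\w+"); // compiles the regex on every call
    }

    static boolean fast(String name)
    {
        return WORD_CHARS.matcher(name).matches(); // reuses the compiled Pattern
    }

    public static void main(String[] args)
    {
        System.out.println(slow("abc") && fast("abc")); // true
    }
}
{code}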

http://git-wip-us.apache.org/repos/asf/cassandra/blob/01d26dd3/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index edcbcf5..3fc0b31 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -54,9 +54,9 @@ import org.apache.cassandra.scheduler.NoScheduler;
 import org.apache.cassandra.security.EncryptionContext;
 import org.apache.cassandra.service.CacheService;
 import org.apache.cassandra.thrift.ThriftServer;
-import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.memory.*;
+import org.apache.commons.lang3.StringUtils;
 
 public class DatabaseDescriptor
 {
@@ -927,8 +927,8 @@ public 

[jira] [Updated] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaost updated CASSANDRA-10961:
---
Attachment: (was: 0001-Fix-streaming-deadlock-when-interrupted.patch)

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: debug.1.log, netstats.1.log
>
>
> We get the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> ... 1 common frames omitted
> ERROR [STREAM-IN-/HostA] 2016-01-02 

[jira] [Issue Comment Deleted] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-04 Thread xiaost (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaost updated CASSANDRA-10961:
---
Comment: was deleted

(was: I'm trying to fix the deadlock issue with this patch: 
[^0001-Fix-streaming-deadlock-when-interrupted.patch]
I don't know whether it works yet. 
It is running, and I will figure it out. :-))
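The deadlock that patch targeted follows the classic producer/consumer failure
mode visible in the trace above: the decompression thread dies in
ArrayBlockingQueue.put() when interrupted, and a reader blocked in take() then
waits forever. A generic sketch of one way to unblock the consumer, using a
poison-pill sentinel (plain java.util.concurrent, not the actual Cassandra fix):

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoisonPillDemo
{
    // Sentinel telling the consumer that the producer has terminated.
    private static final byte[] POISON = new byte[0];

    public static void main(String[] args) throws Exception
    {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try
            {
                while (true)
                    queue.put(new byte[16]); // blocks while the queue is full
            }
            catch (InterruptedException e)
            {
                // Without this, a consumer blocked in take() would wait forever.
                queue.clear();
                queue.offer(POISON);
            }
        });
        producer.start();

        Thread.sleep(100);
        producer.interrupt(); // simulate the stream being torn down

        byte[] chunk;
        while ((chunk = queue.take()) != POISON)
            System.out.println("consumed " + chunk.length + " bytes");
        System.out.println("producer gone, consumer exits cleanly");
    }
}
{code}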

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: 0001-Fix-streaming-deadlock-when-interrupted.patch, 
> debug.1.log, netstats.1.log
>
>
> We get the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> 

[jira] [Updated] (CASSANDRA-10910) Materialized view remained rows

2016-01-04 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10910:

Reviewer: T Jake Luciani

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expect...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time

2016-01-04 Thread Peter Kovgan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081469#comment-15081469
 ] 

Peter Kovgan commented on CASSANDRA-10937:
--

1) I've read the recommendations and there was no word about VMWare (or any 
virtualization) being a bad choice; it is not so clear from the recommendations. 
Shared storage (NAS drives, etc.), for example, is listed as a bad thing, so we 
avoided it. But virtualization per se is simply "not mentioned". You have some 
recommendations for the Amazon cloud, that's all, and nothing negative regarding 
virtualization.
If you tested on VMWare and got bad results, I'd like to see that in the 
instructions.

2) Even with bad IO for whatever reason (and this is probably the case), I 
would rather expect the accepting threads to stop accepting new messages than 
allow memory to overpopulate and explode. So I would treat it as an important 
feature request in any case. Maybe not a bug, but a vulnerability that should 
be answered with some mechanism. 

Thanks for your answer.

BTW, we will install multiple nodes on a single physical machine (because the 
machine is quite strong for one node); is that also problematic? 
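The mechanism point 2 asks for is essentially server-side backpressure: bound
the intake queue and shed or block new work when it fills, instead of buffering
mutations on the heap until it explodes. A generic sketch of the idea (plain
java.util.concurrent, not Cassandra's internals):

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureDemo
{
    public static void main(String[] args)
    {
        // Bounded intake queue: when full, offer() fails fast instead of
        // letting unprocessed writes accumulate on the heap.
        BlockingQueue<String> intake = new ArrayBlockingQueue<>(1024);

        for (int i = 0; i < 2000; i++)
        {
            if (!intake.offer("mutation-" + i))
            {
                System.out.println("overloaded, shedding request " + i);
                break;
            }
        }
        System.out.println("queued " + intake.size() + " mutations");
    }
}
{code}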



> OOM on multiple nodes on write load (v. 3.0.0), problem also present on 
> DSE-4.8.3, but there it survives more time
> --
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64 
> (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red 
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each 
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host -  24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
>Reporter: Peter Kovgan
>Priority: Critical
> Attachments: gc-stat.txt, more-logs.rar, some-heap-stats.rar, 
> test2.rar, test3.rar, test4.rar, test5.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients (different and unequal machines), each 
> running 1000 threads.
> Each thread is assigned in a round-robin way to run one of 4 different 
> inserts. Consistency->ONE.
> I attach the full CQL schema of the tables and the insert query.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication = 
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54MB/sec, considering a single insert is a bit larger than 256 bytes.
> Data:
> all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000 
> to 5000ms, because some requests failed due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happens on multiple nodes.
> (all failed nodes' logs attached)
> I also attach the java load client and instructions on how to set it up and 
> use it (test2.rar).
> Update:
> Later on, the test was repeated with a lesser load (10 mes/sec) and a more 
> relaxed CPU (25% idle), with only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 nodes out of 8), but there it survived 48 
> hours, not 10-12.
> Attachments:
> test2.rar - contains most of the material
> more-logs.rar - contains additional node logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time

2016-01-04 Thread Peter Kovgan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081483#comment-15081483
 ] 

Peter Kovgan edited comment on CASSANDRA-10937 at 1/4/16 6:00 PM:
--

In addition, the OOM happens only after 10-48 hours of testing, so it really 
looks like a bug, not just a load problem. The system accepts that load for a 
very long time, not failing quickly; that is the sort of "accumulated" problem, 
and so it looks like a pure bug. 


was (Author: tierhetze):
In additional, OOM happens only after 10-48 hours of test. So it is really 
looks like a bug, not just load problem. System accepts that load for very long 
time - not failing in short time - that is the sort of "accumulated" problem 
and so it looks like a pure bug. 

> OOM on multiple nodes on write load (v. 3.0.0), problem also present on 
> DSE-4.8.3, but there it survives more time
> --
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64 
> (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red 
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each 
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host -  24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
>Reporter: Peter Kovgan
>Priority: Critical
> Attachments: gc-stat.txt, more-logs.rar, some-heap-stats.rar, 
> test2.rar, test3.rar, test4.rar, test5.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients (different and unequal machines), each 
> running 1000 threads.
> Each thread is assigned in a round-robin way to run one of 4 different 
> inserts. Consistency->ONE.
> I attach the full CQL schema of the tables and the insert query.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication = 
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54MB/sec, considering a single insert is a bit larger than 256 bytes.
> Data:
> all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000 
> to 5000ms, because some requests failed due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happens on multiple nodes.
> (all failed nodes' logs attached)
> I also attach the java load client and instructions on how to set it up and 
> use it (test2.rar).
> Update:
> Later on, the test was repeated with a lesser load (10 mes/sec) and a more 
> relaxed CPU (25% idle), with only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 nodes out of 8), but there it survived 48 
> hours, not 10-12.
> Attachments:
> test2.rar - contains most of the material
> more-logs.rar - contains additional node logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10910) Materialized view remained rows

2016-01-04 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081484#comment-15081484
 ] 

T Jake Luciani commented on CASSANDRA-10910:


I'm wondering how this will affect the shadowable tombstone logic.  Can you 
modify that dtest to include this update?

> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expect...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> {code}
> ...I've updated the value of the row:
> {code}
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 3
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 3
> (2 rows)
> {code}
> ...I've deleted the row by the id key:
> {code}
> > delete from test where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
> (0 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> Is it a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time

2016-01-04 Thread Peter Kovgan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081483#comment-15081483
 ] 

Peter Kovgan commented on CASSANDRA-10937:
--

In addition, the OOM happens only after 10-48 hours of testing, so it really 
looks like a bug, not just a load problem. The system accepts that load for a 
very long time, not failing quickly; that is the sort of "accumulated" problem, 
and so it looks like a pure bug. 

> OOM on multiple nodes on write load (v. 3.0.0), problem also present on 
> DSE-4.8.3, but there it survives more time
> --
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64 
> (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red 
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each 
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host -  24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
>Reporter: Peter Kovgan
>Priority: Critical
> Attachments: gc-stat.txt, more-logs.rar, some-heap-stats.rar, 
> test2.rar, test3.rar, test4.rar, test5.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients (different and unequal machines), each 
> running 1000 threads.
> Each thread is assigned in a round-robin way to run one of 4 different 
> inserts. Consistency->ONE.
> I attach the full CQL schema of the tables and the insert query.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication = 
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54MB/sec, considering a single insert is a bit larger than 256 bytes.
> Data:
> all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000 
> to 5000ms, because some requests failed due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happens on multiple nodes.
> (all failed nodes' logs attached)
> I also attach the java load client and instructions on how to set it up and 
> use it (test2.rar).
> Update:
> Later on, the test was repeated with a lesser load (10 mes/sec) and a more 
> relaxed CPU (25% idle), with only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 nodes out of 8), but there it survived 48 
> hours, not 10-12.
> Attachments:
> test2.rar - contains most of the material
> more-logs.rar - contains additional node logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add requireAuthorization method to IAuthorizer

2016-01-04 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/trunk d0e203645 -> f54eab71d


Add requireAuthorization method to IAuthorizer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f54eab71
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f54eab71
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f54eab71

Branch: refs/heads/trunk
Commit: f54eab71d299429e17f315734484fb176f542167
Parents: d0e2036
Author: Mike Adamson 
Authored: Sat Dec 12 15:37:40 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Jan 4 17:57:07 2016 +

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java  | 6 ++
 src/java/org/apache/cassandra/auth/IAuthorizer.java | 9 +
 src/java/org/apache/cassandra/auth/PermissionsCache.java| 2 +-
 .../org/apache/cassandra/config/DatabaseDescriptor.java | 4 ++--
 src/java/org/apache/cassandra/service/ClientState.java  | 4 ++--
 6 files changed, 21 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54eab71/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cbd109e..e6b22b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.2
+ * Add requireAuthorization method to IAuthorizer (CASSANDRA-10852)
  * Fix CassandraVersion to accept x.y version string (CASSANDRA-10931)
  * Add forceUserDefinedCleanup to allow more flexible cleanup (CASSANDRA-10708)
  * (cqlsh) allow setting TTL with COPY (CASSANDRA-9494)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54eab71/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
--
diff --git a/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java 
b/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
index bc6fee4..3b40979 100644
--- a/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
@@ -22,6 +22,12 @@ import java.util.Set;
 
 public class AllowAllAuthorizer implements IAuthorizer
 {
+    @Override
+    public boolean requireAuthorization()
+    {
+        return false;
+    }
+
     public Set<Permission> authorize(AuthenticatedUser user, IResource resource)
 {
 return resource.applicablePermissions();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54eab71/src/java/org/apache/cassandra/auth/IAuthorizer.java
--
diff --git a/src/java/org/apache/cassandra/auth/IAuthorizer.java 
b/src/java/org/apache/cassandra/auth/IAuthorizer.java
index 01c05af..a023e3e 100644
--- a/src/java/org/apache/cassandra/auth/IAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/IAuthorizer.java
@@ -29,6 +29,15 @@ import org.apache.cassandra.exceptions.RequestValidationException;
 public interface IAuthorizer
 {
 /**
+     * Whether or not the authorizer will attempt authorization.
+     * If false the authorizer will not be called for authorization of resources.
+     */
+    default boolean requireAuthorization()
+    {
+        return true;
+    }
+
+    /**
      * Returns a set of permissions of a user on a resource.
      * Since Roles were introduced in version 2.2, Cassandra does not distinguish in any
      * meaningful way between users and roles. A role may or may not have login privileges
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54eab71/src/java/org/apache/cassandra/auth/PermissionsCache.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionsCache.java 
b/src/java/org/apache/cassandra/auth/PermissionsCache.java
index 8746b36..95aa398 100644
--- a/src/java/org/apache/cassandra/auth/PermissionsCache.java
+++ b/src/java/org/apache/cassandra/auth/PermissionsCache.java
@@ -107,7 +107,7 @@ public class PermissionsCache implements PermissionsCacheMBean
     private LoadingCache<Pair<AuthenticatedUser, IResource>, Set<Permission>> initCache(
              LoadingCache<Pair<AuthenticatedUser, IResource>, Set<Permission>> existing)
 {
-        if (authorizer instanceof AllowAllAuthorizer)
+        if (!authorizer.requireAuthorization())
             return null;
 
         if (DatabaseDescriptor.getPermissionsValidity() <= 0)
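The pattern the commit introduces is a behaviour flag on the interface with a
safe default, so call sites such as the PermissionsCache hunk above no longer
depend on a concrete class. A self-contained mock of the shape (a toy
Authorizer interface for illustration, not Cassandra's real IAuthorizer):

{code}
import java.util.Collections;
import java.util.Set;

public class AuthorizerDemo
{
    // Toy stand-in for IAuthorizer: the default method keeps existing
    // implementations source-compatible while letting new ones opt out.
    interface Authorizer
    {
        default boolean requireAuthorization()
        {
            return true;
        }

        Set<String> authorize(String user, String resource);
    }

    // Toy stand-in for AllowAllAuthorizer.
    static class GrantAll implements Authorizer
    {
        @Override
        public boolean requireAuthorization()
        {
            return false; // callers may skip permission lookups and caching
        }

        public Set<String> authorize(String user, String resource)
        {
            return Collections.singleton("ALL");
        }
    }

    public static void main(String[] args)
    {
        Authorizer authorizer = new GrantAll();
        // Mirrors the PermissionsCache change: a behaviour-based check
        // instead of "authorizer instanceof AllowAllAuthorizer".
        if (!authorizer.requireAuthorization())
            System.out.println("skipping the permissions cache");
    }
}
{code}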

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54eab71/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 

[jira] [Commented] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time

2016-01-04 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081424#comment-15081424
 ] 

Michael Shuler commented on CASSANDRA-10937:


I appreciate the log of load testing on your infrastructure. The hardware you 
indicate you are testing on is not exactly what I would classify as following 
the recommendations. Yes, "bad things" can happen with overloaded Cassandra 
nodes: if the nodes are too small and have poor I/O, as your environment 
details suggest, server overload happens and "bad things" occur.

This is purely an observation on what you've posted so far, without digging 
through any of your logs.

Your last post suggests that you'll continue testing on real servers - this is 
a good start on getting closer to recommended hardware.

Is there a hint on what this "bug" is actually suggesting, other than load 
testing small non-recommended virtual servers?

> OOM on multiple nodes on write load (v. 3.0.0), problem also present on 
> DSE-4.8.3, but there it survives more time
> --
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64 
> (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red 
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each 
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host -  24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
>Reporter: Peter Kovgan
>Priority: Critical
> Attachments: gc-stat.txt, more-logs.rar, some-heap-stats.rar, 
> test2.rar, test3.rar, test4.rar, test5.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients (different and unequal machines), each 
> running 1000 threads.
> Each thread is assigned in a round-robin way to run one of 4 different 
> inserts. Consistency->ONE.
> I attach the full CQL schema of the tables and the insert query.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication = 
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54MB/sec, considering a single insert is a bit larger than 256 bytes.
> Data:
> all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000 
> to 5000ms, because some requests failed due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happens on multiple nodes.
> (all failed nodes' logs attached)
> I also attach the java load client and instructions on how to set it up and 
> use it (test2.rar).
> Update:
> Later on, the test was repeated with a lesser load (10 mes/sec) and a more 
> relaxed CPU (25% idle), with only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 nodes out of 8), but there it survived 48 
> hours, not 10-12.
> Attachments:
> test2.rar - contains most of the material
> more-logs.rar - contains additional node logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10928) SSTableExportTest.testExportColumnsWithMetadata randomly fails

2016-01-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081427#comment-15081427
 ] 

Philip Thompson commented on CASSANDRA-10928:
-

Not anytime in the last 50 builds on 2.1.

> SSTableExportTest.testExportColumnsWithMetadata randomly fails
> --
>
> Key: CASSANDRA-10928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10928
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>
> The SSTableExportTest.testExportColumnsWithMetadata test will randomly fail 
> (bogusly). Currently, the string check used won't work if the generated JSON 
> happens to order the object's fields differently.
> {code}
> assertEquals(
> "unexpected serialization format for topLevelDeletion",
> "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
> serializedDeletionInfo.toJSONString());
> {code}
> {noformat}
> [junit] Testcase: 
> testExportColumnsWithMetadata(org.apache.cassandra.tools.SSTableExportTest):  
>   FAILED
> [junit] unexpected serialization format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit] junit.framework.AssertionFailedError: unexpected serialization 
> format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit]   at 
> org.apache.cassandra.tools.SSTableExportTest.testExportColumnsWithMetadata(SSTableExportTest.java:299)
> [junit]
> {noformat}
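A conventional fix for this kind of flake is to compare parsed JSON trees
rather than raw strings, since the order of an object's fields is not
significant in JSON. A minimal sketch using Jackson (assuming it, or an
equivalent JSON library, is available to the test; the actual fix may take a
different route):

{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonCompareDemo
{
    public static void main(String[] args) throws Exception
    {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode expected = mapper.readTree("{\"markedForDeleteAt\":0,\"localDeletionTime\":0}");
        JsonNode actual   = mapper.readTree("{\"localDeletionTime\":0,\"markedForDeleteAt\":0}");

        // JsonNode.equals() compares object fields as an unordered map,
        // so both orderings above are considered equal.
        System.out.println(expected.equals(actual)); // true
    }
}
{code}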



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

