[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-22 Thread slebresne
Merge branch 'cassandra-2.0' into trunk

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac926233
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac926233
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac926233

Branch: refs/heads/trunk
Commit: ac926233ffd69338818c606f4e703c8fc3331792
Parents: fdbddc1 3c9760b
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Nov 22 09:56:33 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Nov 22 09:56:33 2013 +0100

--
 CHANGES.txt |  8 +-
 .../service/pager/AbstractQueryPager.java   | 80 +---
 2 files changed, 56 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac926233/CHANGES.txt
--
diff --cc CHANGES.txt
index 06628a0,8163c94..79e5880
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,23 -1,3 +1,18 @@@
 +2.1
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 +
 +
- 2.0.4
-  * remove RF from nodetool ring output (CASSANDRA-6289)
-  * fix attempting to flush empty rows (CASSANDRA-6374)
- 
- 
  2.0.3
   * Fix FD leak on slice read path (CASSANDRA-6275)
   * Cancel read meter task when closing SSTR (CASSANDRA-6358)



[1/2] git commit: Fix potential out of bounds exception during paging

2013-11-22 Thread slebresne
Updated Branches:
  refs/heads/trunk fdbddc132 -> ac926233f


Fix potential out of bounds exception during paging

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6333


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c9760bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c9760bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c9760bd

Branch: refs/heads/trunk
Commit: 3c9760bdb986f6c2430adfc13c86ecb75c3246ac
Parents: 1ca459d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Nov 22 08:45:22 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Nov 22 08:45:22 2013 +0100

--
 CHANGES.txt |  8 +-
 .../service/pager/AbstractQueryPager.java   | 80 +---
 2 files changed, 56 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c9760bd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 24d14ee..8163c94 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,8 +1,3 @@
-2.0.4
- * remove RF from nodetool ring output (CASSANDRA-6289)
- * fix attempting to flush empty rows (CASSANDRA-6374)
-
-
 2.0.3
  * Fix FD leak on slice read path (CASSANDRA-6275)
  * Cancel read meter task when closing SSTR (CASSANDRA-6358)
@@ -32,6 +27,9 @@
  * Fix paging with reversed slices (CASSANDRA-6343)
  * Set minTimestamp correctly to be able to drop expired sstables 
(CASSANDRA-6337)
  * Support NaN and Infinity as float literals (CASSANDRA-6003)
+ * Remove RF from nodetool ring output (CASSANDRA-6289)
+ * Fix attempting to flush empty rows (CASSANDRA-6374)
+ * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 Merged from 1.2:
  * Optimize FD phi calculation (CASSANDRA-6386)
  * Improve initial FD phi estimate when starting up (CASSANDRA-6385)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c9760bd/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
--
diff --git 
a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java 
b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
index d040203..9372665 100644
--- a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
+++ b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
@@ -77,6 +77,16 @@ abstract class AbstractQueryPager implements QueryPager
 }
 
         int liveCount = getPageLiveCount(rows);
 
+        // Because SP.getRangeSlice doesn't trim the result (see SP.trim()), liveCount may be greater than what asked
+        // (currentPageSize). This would throw off the paging logic so we trim the excess. It's not extremely efficient
+        // but most of the time there should be nothing or very little to trim.
+        if (liveCount > currentPageSize)
+        {
+            rows = discardLast(rows, liveCount - currentPageSize);
+            liveCount = currentPageSize;
+        }
+
         remaining -= liveCount;
 
         // If we've got less than requested, there is no more query to do (but
@@ -166,9 +176,11 @@ abstract class AbstractQueryPager implements QueryPager
     private List<Row> discardFirst(List<Row> rows)
     {
         Row first = rows.get(0);
-        ColumnFamily newCf = isReversed()
-                           ? discardLast(first.cf)
-                           : discardFirst(first.cf);
+        ColumnFamily newCf = first.cf.cloneMeShallow();
+        int discarded = isReversed()
+                      ? discardLast(first.cf, 1, newCf)
+                      : discardFirst(first.cf, 1, newCf);
+        assert discarded == 1;
 
         int count = newCf.getColumnCount();
         List<Row> newRows = new ArrayList<Row>(count == 0 ? rows.size() - 1 : rows.size());
@@ -181,16 +193,32 @@ abstract class AbstractQueryPager implements QueryPager
 
     private List<Row> discardLast(List<Row> rows)
     {
-        Row last = rows.get(rows.size() - 1);
-        ColumnFamily newCf = isReversed()
-                           ? discardFirst(last.cf)
-                           : discardLast(last.cf);
+        return discardLast(rows, 1);
+    }
 
-        int count = newCf.getColumnCount();
-        List<Row> newRows = new ArrayList<Row>(count == 0 ? rows.size() - 1 : rows.size());
-        newRows.addAll(rows.subList(0, rows.size() - 1));
+    private List<Row> discardLast(List<Row> rows, int toDiscard)
+    {
+        if (toDiscard == 0)
+            return rows;
+
+        int size = rows.size();
+        DecoratedKey lastKey = null;
+        ColumnFamily lastCf = null;
+        while (toDiscard > 0)
+        {
+            Row last = rows.get(--size);
+            lastKey = 
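The diff is truncated here in the archive. The trimming step it introduces can be sketched as standalone Java; this is a toy stand-in only (the real code discards live cells inside ColumnFamily objects, not plain list elements):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class PageTrim {
    // Toy stand-in for AbstractQueryPager.discardLast(rows, toDiscard):
    // drop the trailing toDiscard elements that exceed the page size.
    static <T> List<T> discardLast(List<T> rows, int toDiscard) {
        if (toDiscard == 0)
            return rows;
        return new ArrayList<>(rows.subList(0, rows.size() - toDiscard));
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3", "r4", "r5");
        int currentPageSize = 3;
        int liveCount = rows.size(); // getRangeSlice returned more than asked for
        if (liveCount > currentPageSize) {
            rows = discardLast(rows, liveCount - currentPageSize);
            liveCount = currentPageSize;
        }
        System.out.println(rows); // [r1, r2, r3]
    }
}
```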

[jira] [Updated] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4511:


Attachment: 4511.txt

 Secondary index support for CQL3 collections 
 -

 Key: CASSANDRA-4511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 4511.txt


 We should allow 2ndary indexing on collections. A typical use case would be 
 to add a 'tags set<String>' to, say, a user profile and to query users based on 
 what tags they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-4511) Secondary index support for CQL3 collections

2013-11-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4511:


Attachment: (was: 4511.txt)

 Secondary index support for CQL3 collections 
 -

 Key: CASSANDRA-4511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 4511.txt


 We should allow 2ndary indexing on collections. A typical use case would be 
 to add a 'tags set<String>' to, say, a user profile and to query users based on 
 what tags they have.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6393) Invalid metadata for IN ? queries

2013-11-22 Thread Mikhail Mazursky (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Mazursky updated CASSANDRA-6393:


Attachment: column_def_debug.png

Attaching screenshot from debugger.

 Invalid metadata for IN ? queries
 -

 Key: CASSANDRA-6393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6393
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java Driver 2.0.0-rc1
Reporter: Mikhail Mazursky
Priority: Minor
 Attachments: column_def_debug.png


 I tried to use the following CQL query:
 DELETE FROM table WHERE id IN ?
 using Java driver like this:
 prepStatement.setList("id", idsAsList);
 but got the following exception:
 {noformat}
  java.lang.IllegalArgumentException: id is not a column defined in this 
 metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:840)
 {noformat}
 Debugger shows that Cassandra sends in(id) in metadata. Is this correct?
 See mail thread for more details: 
 https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/U7mlKcoDL5o



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6393) Invalid metadata for IN ? queries

2013-11-22 Thread Mikhail Mazursky (JIRA)
Mikhail Mazursky created CASSANDRA-6393:
---

 Summary: Invalid metadata for IN ? queries
 Key: CASSANDRA-6393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6393
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java Driver 2.0.0-rc1
Reporter: Mikhail Mazursky
Priority: Minor
 Attachments: column_def_debug.png

I tried to use the following CQL query:
DELETE FROM table WHERE id IN ?

using Java driver like this:
prepStatement.setList("id", idsAsList);

but got the following exception:
{noformat}
 java.lang.IllegalArgumentException: id is not a column defined in this metadata
at 
com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
at com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:840)
{noformat}

Debugger shows that Cassandra sends in(id) in metadata. Is this correct?

See mail thread for more details: 
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/U7mlKcoDL5o
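The failure mode is a name-lookup mismatch: the prepared-statement metadata names the bind variable in(id), so looking it up as id fails. A toy model of that lookup (not the driver's actual ColumnDefinitions code):

```java
import java.util.Arrays;
import java.util.List;

class InMetadataLookup {
    // Toy model of the driver's column-name lookup: metadata for
    // "DELETE FROM table WHERE id IN ?" carries the name "in(id)".
    static int indexOf(List<String> metadataNames, String name) {
        return metadataNames.indexOf(name);
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("in(id)");
        System.out.println(indexOf(names, "id"));     // -1: lookup by "id" fails
        System.out.println(indexOf(names, "in(id)")); // 0: the actual metadata name
    }
}
```

If that is indeed the cause, binding by position instead of by name (e.g. setList(0, idsAsList)) may sidestep the lookup, though that is an untested workaround.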



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2013-11-22 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829911#comment-13829911
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

During my importing I got the warnings:

 WARN [CompactionExecutor:13] 2013-11-22 03:10:49,864 AutoSavingCache.java 
(line 277) Failed to delete 
D:\Programme\cassandra\saved_caches\nieste-nfiles-KeyCache-b.db

and other cache files.
Later the same:

 WARN [CompactionExecutor:22] 2013-11-22 07:10:49,896 AutoSavingCache.java 
(line 277) Failed to delete 
D:\Programme\cassandra\saved_caches\nieste-nfiles-KeyCache-b.db


 Windows 7 data files kept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deletion problem) 
 doesn't help on Win 7 with Cassandra 2.0.2, and even the 2.1 snapshot doesn't 
 run. The cause: opened file handles seem to be lost and never closed properly. 
 Win 7 complains that another process is still using the file (but it's 
 obviously Cassandra). Only a restart of the server lets the files be deleted. 
 After heavy use (changes) of tables, there are about 24K files in the data 
 folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found that a finalizer fixes the problem, so after GC the 
 files are deleted (not optimal, but working fine). It has now run 2 days 
 continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
   @Override
   protected void finalize() throws Throwable {
       deallocate();
       super.finalize();
   }
 Can somebody test / develop / patch it? Thx.
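A more deterministic variant of the proposed fix, sketched here with hypothetical names (this is not Cassandra's actual RandomAccessReader), releases the handle on explicit close() and keeps the finalizer only as a GC-time backstop:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: a reader whose handle is released exactly once, either by
// explicit close() (try-with-resources) or, as a last resort, by finalize().
class SafeReader implements AutoCloseable {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    void deallocate() {
        if (closed.compareAndSet(false, true)) {
            // release the underlying OS file handle here (idempotent)
        }
    }

    @Override
    public void close() { deallocate(); }

    @Override
    protected void finalize() throws Throwable {
        try { deallocate(); } finally { super.finalize(); }
    }

    boolean isClosed() { return closed.get(); }

    public static void main(String[] args) {
        try (SafeReader r = new SafeReader()) {
            System.out.println(r.isClosed()); // false while in use
        }
        // close() ran at the end of the try block; the finalizer then has nothing to do
    }
}
```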



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6394) Accessing and setting expiration time of a column (instead of TTL)

2013-11-22 Thread Patrick Varilly (JIRA)
Patrick Varilly created CASSANDRA-6394:
--

 Summary: Accessing and setting expiration time of a column 
(instead of TTL)
 Key: CASSANDRA-6394
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6394
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Patrick Varilly
Priority: Minor


When selecting and inserting/updating columns, one can get/set the TIMESTAMP / 
WRITETIME and TTL.  However, this is not enough information to recreate the 
column's expiration time (clock sync and network latency between the client and 
server get in the way).  This makes updating columns with a set TTL fragile: 
there is no way to make the new value of a column *expire* at the same time as 
the old value.

Ideally, you'd be able to say something like:

SELECT x, y EXPIRATIONTIME(y) FROM my_cf

and

UPDATE my_cf USING EXPIRATIONTIME sameAsFromSelect SET y = newy WHERE x = oldx.

The use case I'm facing is that I write an entire row with a given TTL, and 
might later want to update a few of its columns.  Currently, that makes the 
updated columns live longer than the non-updated columns.  Of course, you can 
come up with a good approximation for the appropriate TTL in the update to make 
the updated columns expire at *around* the same time, but not at *exactly* the 
same time.  Since Cassandra stores an expiration time internally, making the 
expiration *exactly* simultaneous should be possible, but CQL3 does not expose 
this ability.
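The "around but not exactly" approximation can be made concrete. This sketch (all values hypothetical) derives a TTL for the UPDATE from the remaining TTL read back with TTL(y); anything finer than a second is lost, which is exactly the gap the ticket describes:

```java
class TtlApprox {
    // Approximate a new TTL so the updated column expires near (not exactly
    // at) the original expiry: remaining TTL minus elapsed whole seconds.
    static int approximateTtl(int remainingTtlSeconds, long elapsedMillis) {
        return remainingTtlSeconds - (int) (elapsedMillis / 1000);
    }

    public static void main(String[] args) {
        // TTL(y) returned 3600 s; the UPDATE runs a bit later.
        System.out.println(approximateTtl(3600, 250));  // 3600: the 250 ms drift is lost
        System.out.println(approximateTtl(3600, 2500)); // 3598, vs. an exact 3597.5 s
    }
}
```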



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6394) Accessing and setting expiration time of a column (instead of TTL)

2013-11-22 Thread Patrick Varilly (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Varilly updated CASSANDRA-6394:
---

Description: 
When selecting and inserting/updating columns, one can get/set the TIMESTAMP / 
WRITETIME and TTL.  However, this is not enough information to recreate the 
column's expiration time (clock sync and network latency between the client and 
server get in the way).  This makes updating columns with a set TTL fragile: 
there is no way to make the new value of a column *expire* at the same time as 
the old value.

Ideally, you'd be able to say something like:

SELECT x, y EXPIRATIONTIME( y ) FROM my_cf

and

UPDATE my_cf USING EXPIRATIONTIME sameAsFromSelect SET y = newy WHERE x = oldx.

The use case I'm facing is that I write an entire row with a given TTL, and 
might later want to update a few of its columns.  Currently, that makes the 
updated columns live longer than the non-updated columns.  Of course, you can 
come up with a good approximation for the appropriate TTL in the update to make 
the updated columns expire at *around* the same time, but not at *exactly* the 
same time.  Since Cassandra stores an expiration time internally, making the 
expiration *exactly* simultaneous should be possible, but CQL3 does not expose 
this ability.

  was:
When selecting and inserting/updating columns, one can get/set the TIMESTAMP / 
WRITETIME and TTL.  However, this is not enough information to recreate the 
column's expiration time (clock sync and network latency between the client and 
server get in the way).  This makes updating columns with a set TTL fragile: 
there is no way to make the new value of a column *expire* at the same time as 
the old value.

Ideally, you'd be able to say something like:

SELECT x, y EXPIRATIONTIME(y) FROM my_cf

and

UPDATE my_cf USING EXPIRATIONTIME sameAsFromSelect SET y = newy WHERE x = oldx.

The use case I'm facing is that I write an entire row with a given TTL, and 
might later want to update a few of its columns.  Currently, that makes the 
updated columns live longer than the non-updated columns.  Of course, you can 
come up with a good approximation for the appropriate TTL in the update to make 
the updated columns expire at *around* the same time, but not at *exactly* the 
same time.  Since Cassandra stores an expiration time internally, making the 
expiration *exactly* simultaneous should be possible, but CQL3 does not expose 
this ability.


 Accessing and setting expiration time of a column (instead of TTL)
 --

 Key: CASSANDRA-6394
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6394
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Patrick Varilly
Priority: Minor
  Labels: cql3, features, timestamp, ttl
   Original Estimate: 24h
  Remaining Estimate: 24h

 When selecting and inserting/updating columns, one can get/set the TIMESTAMP 
 / WRITETIME and TTL.  However, this is not enough information to recreate the 
 column's expiration time (clock sync and network latency between the client 
 and server get in the way).  This makes updating columns with a set TTL 
 fragile: there is no way to make the new value of a column *expire* at the 
 same time as the old value.
 Ideally, you'd be able to say something like:
 SELECT x, y EXPIRATIONTIME( y ) FROM my_cf
 and
 UPDATE my_cf USING EXPIRATIONTIME sameAsFromSelect SET y = newy WHERE x = 
 oldx.
 The use case I'm facing is that I write an entire row with a given TTL, and 
 might later want to update a few of its columns.  Currently, that makes the 
 updated columns live longer than the non-updated columns.  Of course, you can 
 come up with a good approximation for the appropriate TTL in the update to 
 make the updated columns expire at *around* the same time, but not at 
 *exactly* the same time.  Since Cassandra stores an expiration time 
 internally, making the expiration *exactly* simultaneous should be possible, 
 but CQL3 does not expose this ability.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-11-22 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829938#comment-13829938
 ] 

Marcus Eriksson commented on CASSANDRA-6356:


Agreed that it needs refactoring, and this looks nice. Adding extra real 
components might be cleaner, but at times I hear people complain about the 
number of files in the Cassandra datadir with LCS. It is of course most often 
not a real problem, though.

I'm unsure how using HLL would affect this, though; we would still need to keep 
the tombstone histogram etc. for other uses, right?

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1


 We started to distinguish what's loaded onto the heap and what's not from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose 
 to change the serialization format so that a group of stats can be loaded as 
 needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6395) Add the ability to query by TimeUUIDs with millisecond granularity

2013-11-22 Thread Lorcan Coyle (JIRA)
Lorcan Coyle created CASSANDRA-6395:
---

 Summary: Add the ability to query by TimeUUIDs with millisecond 
granularity
 Key: CASSANDRA-6395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6395
 Project: Cassandra
  Issue Type: New Feature
Reporter: Lorcan Coyle
Priority: Minor


Currently it is impossible to query for dates with sub-second accuracy from 
cqlsh because the parser doesn't recognise dates formatted at that granularity 
(e.g., 2013-09-30 22:19:06.591). By adding the following ISO8601 patterns to 
TimestampSerializer this functionality is unlocked:

yyyy-MM-dd HH:mm:ss.SSS,
yyyy-MM-dd HH:mm:ss.SSSZ,
yyyy-MM-dd'T'HH:mm:ss.SSS,
yyyy-MM-dd'T'HH:mm:ss.SSSZ.

I submitted this as a pull-request on the github mirror 
(https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
submit a patch to address this here.
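For illustration, the first of the proposed patterns parses the example timestamp with standard java.text.SimpleDateFormat (a standalone sketch, not the TimestampSerializer change itself):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

class SubSecondParse {
    public static void main(String[] args) throws ParseException {
        // One of the ISO 8601 patterns proposed for TimestampSerializer
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        Date d = fmt.parse("2013-09-30 22:19:06.591");
        // Zone offsets are whole minutes, so the epoch-millis remainder is
        // exactly the sub-second part that cqlsh currently cannot express.
        System.out.println(d.getTime() % 1000); // 591
    }
}
```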






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6395) Add the ability to query by TimeUUIDs with millisecond granularity

2013-11-22 Thread Lorcan Coyle (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorcan Coyle updated CASSANDRA-6395:


Description: 
Currently it is impossible to query for dates with the minTimeuuid and 
maxTimeuuid functions with sub-second accuracy from cqlsh because the parser 
doesn't recognise dates formatted at that granularity (e.g., 2013-09-30 
22:19:06.591). By adding the following ISO8601 patterns to TimestampSerializer 
this functionality is unlocked:

yyyy-MM-dd HH:mm:ss.SSS,
yyyy-MM-dd HH:mm:ss.SSSZ,
yyyy-MM-dd'T'HH:mm:ss.SSS,
yyyy-MM-dd'T'HH:mm:ss.SSSZ.

I submitted this as a pull-request on the github mirror 
(https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
submit a patch to address this here.




  was:
Currently it is impossible to query for dates with sub-second accuracy from 
cqlsh because the parser doesn't recognise dates formatted at that granularity 
(e.g., 2013-09-30 22:19:06.591). By adding the following ISO8601 patterns to 
TimestampSerializer this functionality is unlocked:

yyyy-MM-dd HH:mm:ss.SSS,
yyyy-MM-dd HH:mm:ss.SSSZ,
yyyy-MM-dd'T'HH:mm:ss.SSS,
yyyy-MM-dd'T'HH:mm:ss.SSSZ.

I submitted this as a pull-request on the github mirror 
(https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
submit a patch to address this here.





 Add the ability to query by TimeUUIDs with millisecond granularity
 -

 Key: CASSANDRA-6395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6395
 Project: Cassandra
  Issue Type: New Feature
Reporter: Lorcan Coyle
Priority: Minor
  Labels: easyfix, patch
 Fix For: 2.0.1


 Currently it is impossible to query for dates with the minTimeuuid and 
 maxTimeuuid functions with sub-second accuracy from cqlsh because the parser 
 doesn't recognise dates formatted at that granularity (e.g., 2013-09-30 
 22:19:06.591). By adding the following ISO8601 patterns to 
 TimestampSerializer this functionality is unlocked:
 yyyy-MM-dd HH:mm:ss.SSS,
 yyyy-MM-dd HH:mm:ss.SSSZ,
 yyyy-MM-dd'T'HH:mm:ss.SSS,
 yyyy-MM-dd'T'HH:mm:ss.SSSZ.
 I submitted this as a pull-request on the github mirror 
 (https://github.com/apache/cassandra/pull/23), which I'll close now. I'll 
 submit a patch to address this here.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CASSANDRA-6393) Invalid metadata for IN ? queries

2013-11-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-6393:
---

Assignee: Sylvain Lebresne

 Invalid metadata for IN ? queries
 -

 Key: CASSANDRA-6393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6393
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java Driver 2.0.0-rc1
Reporter: Mikhail Mazursky
Assignee: Sylvain Lebresne
Priority: Minor
 Attachments: column_def_debug.png


 I tried to use the following CQL query:
 DELETE FROM table WHERE id IN ?
 using Java driver like this:
 prepStatement.setList("id", idsAsList);
 but got the following exception:
 {noformat}
  java.lang.IllegalArgumentException: id is not a column defined in this 
 metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:840)
 {noformat}
 Debugger shows that Cassandra sends in(id) in metadata. Is this correct?
 See mail thread for more details: 
 https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/U7mlKcoDL5o



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6345) Endpoint cache invalidation causes CPU spike (on vnode rings?)

2013-11-22 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6345:
--

Attachment: 6345-v5.txt

bq. It seems that unless I'm missing something either is possible with the 
current release code, and thus these patches as well

Technically correct, but in practice we're in pretty good shape.  The sequence 
is:

# Add the changing node to pending ranges
# Sleep for RING_DELAY so everyone else starts including the new target in 
their writes
# Flush data to be transferred
# Send over data for writes that happened before (1)

Step 1 happens on every coordinator.  2-4 only happen on the node that is 
giving up a token range.

The guarantee we need is that any write that happens before the pending range 
change, completes before the subsequent flush.

Even if we used TM.lock to protect the entire ARS sequence (guaranteeing that 
no local write is in progress once the PRC happens) we could still receive 
writes from other nodes that began their PRC change later.  

So we rely on the RING_DELAY (30s) sleep.  I suppose a GC pause for instance at 
just the wrong time could theoretically mean a mutation against the old state 
gets sent out late, but I don't see how we can improve it.

bq. IMHO to be defensive, any time the write lock is acquired in TokenMetadata, 
the version should be bumped in the finally block before the lock is released

Haven't thought this through as much.  What are you saying we should bump that 
we weren't calling invalidate on before?

bq. Is the idea with the striped lock on the endpoint cache in 
AbstractReplicationStrategy to help smooth out the stampede effect when the 
global lock on the cached TM gets released after the fill?

I'm trying to avoid a minor stampede on calculateNaturalEndpoints 
(CASSANDRA-3881) but it's probably premature optimization.  v5 attached w/o 
that.

 Endpoint cache invalidation causes CPU spike (on vnode rings?)
 --

 Key: CASSANDRA-6345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6345
 Project: Cassandra
  Issue Type: Bug
 Environment: 30 nodes total, 2 DCs
 Cassandra 1.2.11
 vnodes enabled (256 per node)
Reporter: Rick Branson
Assignee: Jonathan Ellis
 Fix For: 1.2.13

 Attachments: 6345-rbranson-v2.txt, 6345-rbranson.txt, 6345-v2.txt, 
 6345-v3.txt, 6345-v4.txt, 6345-v5.txt, 6345.txt, 
 half-way-thru-6345-rbranson-patch-applied.png


 We've observed that events which cause invalidation of the endpoint cache 
 (update keyspace, add/remove nodes, etc) in AbstractReplicationStrategy 
 result in several seconds of thundering herd behavior on the entire cluster. 
 A thread dump shows over a hundred threads (I stopped counting at that point) 
 with a backtrace like this:
 at java.net.Inet4Address.getAddress(Inet4Address.java:288)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:106)
 at 
 org.apache.cassandra.locator.TokenMetadata$1.compare(TokenMetadata.java:103)
 at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:351)
 at java.util.TreeMap.getEntry(TreeMap.java:322)
 at java.util.TreeMap.get(TreeMap.java:255)
 at 
 com.google.common.collect.AbstractMultimap.put(AbstractMultimap.java:200)
 at 
 com.google.common.collect.AbstractSetMultimap.put(AbstractSetMultimap.java:117)
 at com.google.common.collect.TreeMultimap.put(TreeMultimap.java:74)
 at 
 com.google.common.collect.AbstractMultimap.putAll(AbstractMultimap.java:273)
 at com.google.common.collect.TreeMultimap.putAll(TreeMultimap.java:74)
 at 
 org.apache.cassandra.utils.SortedBiMultiValMap.create(SortedBiMultiValMap.java:60)
 at 
 org.apache.cassandra.locator.TokenMetadata.cloneOnlyTokenMap(TokenMetadata.java:598)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:104)
 at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2671)
 at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:375)
 It looks like there's a large amount of cost in the 
 TokenMetadata.cloneOnlyTokenMap that 
 AbstractReplicationStrategy.getNaturalEndpoints is calling each time there is 
 a cache miss for an endpoint. It seems as if this would only impact clusters 
 with large numbers of tokens, so it's probably a vnodes-only issue.
 Proposal: In AbstractReplicationStrategy.getNaturalEndpoints(), cache the 
 cloned TokenMetadata instance returned by TokenMetadata.cloneOnlyTokenMap(), 
 wrapping it with a lock to prevent stampedes, and clearing it in 
 clearEndpointCache(). Thoughts?
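The proposal in that last paragraph (cache the cloned snapshot, guard the fill with a lock so misses don't stampede, clear on invalidation) can be sketched generically; the names below are hypothetical, not the actual AbstractReplicationStrategy code:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: cache an expensive-to-build immutable snapshot, rebuilding it under
// a lock so concurrent cache misses don't all pay the clone cost.
class SnapshotCache<T> {
    interface Source<T> { T cloneSnapshot(); }

    private final Source<T> source;
    private final ReentrantLock lock = new ReentrantLock();
    private volatile T cached;

    SnapshotCache(Source<T> source) { this.source = source; }

    T get() {
        T snap = cached;
        if (snap != null)
            return snap;               // fast path: no locking on a hit
        lock.lock();
        try {
            if (cached == null)        // only the first waiter clones
                cached = source.cloneSnapshot();
            return cached;
        } finally {
            lock.unlock();
        }
    }

    void invalidate() { cached = null; } // e.g. from clearEndpointCache()

    public static void main(String[] args) {
        int[] clones = {0};
        SnapshotCache<String> cache =
            new SnapshotCache<>(() -> { clones[0]++; return "tokenMapClone"; });
        cache.get();
        cache.get();            // served from cache, no second clone
        cache.invalidate();     // e.g. ring change
        cache.get();            // rebuilt once
        System.out.println(clones[0]); // 2
    }
}
```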



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6374) AssertionError for rows with zero columns

2013-11-22 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830055#comment-13830055
 ] 

Aleksey Yeschenko commented on CASSANDRA-6374:
--

bq. Can this be fixed for 2.0.3 please? We need to upgrade our testing cluster 
to 2.0.x but cannot because of this issue.

2.0.3 vote is being restarted, so yes, it will be in 2.0.3.

 AssertionError for rows with zero columns
 -

 Key: CASSANDRA-6374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6374
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Anton Gorbunov
Assignee: Jonathan Ellis
 Fix For: 2.0.4

 Attachments: 6374.txt


 After upgrading from 1.2.5 to 1.2.9 and then to 2.0.2, we got these 
 exceptions:
 {code}
 ERROR [FlushWriter:1] 2013-11-18 16:14:36,305 CassandraDaemon.java (line 187) 
 Exception in thread Thread[FlushWriter:1,5,main]
 java.lang.AssertionError
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:186)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:360)
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:315)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 {code}
 Also found similar issue in this thread:
 http://www.mail-archive.com/user@cassandra.apache.org/msg32875.html
 There Aaron Morton said that it's caused by leaving rows with zero columns; 
 that's exactly what we do in some CFs (using Thrift & Astyanax).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6393) Invalid metadata for IN ? queries

2013-11-22 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830067#comment-13830067
 ] 

Aleksey Yeschenko commented on CASSANDRA-6393:
--

I wouldn't say it's a bug. {{id}} wouldn't be a list but a single value, so 
{{in(id)}} for variadic INs is there for a reason: to distinguish between the 
two. The java-driver might need some modifications, though.

 Invalid metadata for IN ? queries
 -

 Key: CASSANDRA-6393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6393
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java Driver 2.0.0-rc1
Reporter: Mikhail Mazursky
Assignee: Sylvain Lebresne
Priority: Minor
 Attachments: column_def_debug.png


 I tried to use the following CQL query:
 DELETE FROM table WHERE id IN ?
 using Java driver like this:
 prepStatement.setList("id", idsAsList);
 but got the following exception:
 {noformat}
  java.lang.IllegalArgumentException: id is not a column defined in this 
 metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:840)
 {noformat}
 Debugger shows that Cassandra sends in(id) in metadata. Is this correct?
 See mail thread for more details: 
 https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/U7mlKcoDL5o



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Remove the vestiges of PBSPredictor from nodetool help

2013-11-22 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 3c9760bdb - dbc79787b


Remove the vestiges of PBSPredictor from nodetool help


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc79787
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc79787
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc79787

Branch: refs/heads/cassandra-2.0
Commit: dbc79787bc32378925036cc785771b97c14d24f6
Parents: 3c9760b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Nov 22 19:12:11 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Nov 22 19:12:11 2013 +0300

--
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc79787/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
--
diff --git a/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml 
b/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
index d0c3d0d..632d7e1 100644
--- a/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
+++ b/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
@@ -205,9 +205,6 @@ commands:
   - name: getsstables keyspace cf key
 help: |
   Print the sstable filenames that own the key
-  - name: predictconsistency replication_factor time [versions] 
[latency_percentile]
-help: |
-  Predict latency and consistency t ms after writes
   - name: reloadtriggers
 help: |
   reload trigger classes



[1/2] git commit: Remove the vestiges of PBSPredictor from nodetool help

2013-11-22 Thread aleksey
Updated Branches:
  refs/heads/trunk ac926233f - 23e774459


Remove the vestiges of PBSPredictor from nodetool help


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dbc79787
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dbc79787
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dbc79787

Branch: refs/heads/trunk
Commit: dbc79787bc32378925036cc785771b97c14d24f6
Parents: 3c9760b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Nov 22 19:12:11 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Nov 22 19:12:11 2013 +0300

--
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbc79787/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
--
diff --git a/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml 
b/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
index d0c3d0d..632d7e1 100644
--- a/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
+++ b/src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml
@@ -205,9 +205,6 @@ commands:
   - name: getsstables keyspace cf key
 help: |
   Print the sstable filenames that own the key
-  - name: predictconsistency replication_factor time [versions] 
[latency_percentile]
-help: |
-  Predict latency and consistency t ms after writes
   - name: reloadtriggers
 help: |
   reload trigger classes



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-22 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23e77445
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23e77445
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23e77445

Branch: refs/heads/trunk
Commit: 23e7744598a00b4650b8d34575a3216e8bfd0d7a
Parents: ac92623 dbc7978
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Nov 22 19:12:56 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Nov 22 19:12:56 2013 +0300

--
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml | 3 ---
 1 file changed, 3 deletions(-)
--




[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-11-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830081#comment-13830081
 ] 

Yuki Morishita commented on CASSANDRA-6356:
---

Thanks for the comments, [~jbellis], [~krummas].

I first thought about adding more components, but as Marcus pointed out, I was 
worried about too many files.

The patch may not seem worthwhile for the properties we have right now, but if 
we want to add more, I think it is going to get messy if we keep the current 
implementation.

For just adding HLL, this refactoring may not be needed. I did some experiments 
and I think I can reduce its size. Will update about this on CASSANDRA-5906.

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1


 We started to distinguish what's loaded to heap, and what's not, from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose to 
 change the serialization format so that a group of stats can be loaded as 
 needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5519) Reduce index summary memory use for cold sstables

2013-11-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830123#comment-13830123
 ] 

Tyler Hobbs commented on CASSANDRA-5519:


bq. releaseReference, which can be reverted back to trunk form since isReplaced 
== !isCompacted

True

bq. close, which is only called by snapshot repair (and releaseReference) which 
will never do any index summary replacements

We still need to have different behavior for the {{close()}} call by snapshot 
repair, as it needs to perform the full close even though {{isCompacted}} will 
be false.  While we could add a parameter to close() or define a 
{{closeReplacedReader()}} method, it seems clearer and more future-proof to 
keep the isReplaced flag.
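A compact sketch of the design trade-off being discussed: an {{isReplaced}} 
flag lets {{close()}} skip the full teardown for a reader that was swapped out 
during index-summary resampling, while other callers (e.g. snapshot repair) 
still get the full close. Hypothetical structure, not the actual SSTableReader 
code:

```java
// Hypothetical sketch: a reader whose close() behavior depends on whether
// it was replaced (index summary swapped) or is being closed for real.
class Reader {
    boolean isReplaced;   // set when a resampled reader supersedes this one
    boolean fullyClosed;  // records that shared resources were torn down

    void close() {
        if (isReplaced)
            return;         // resources live on in the replacement reader
        fullyClosed = true; // full close, e.g. for snapshot repair
    }
}

public class CloseFlagSketch {
    public static void main(String[] args) {
        Reader replaced = new Reader();
        replaced.isReplaced = true;
        replaced.close();   // no-op teardown

        Reader snapshotRepair = new Reader();
        snapshotRepair.close(); // full close

        System.out.println(replaced.fullyClosed + " " + snapshotRepair.fullyClosed);
    }
}
```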

 Reduce index summary memory use for cold sstables
 -

 Key: CASSANDRA-5519
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5519
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.1

 Attachments: 5519-v1.txt, downsample.py






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2013-11-22 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830136#comment-13830136
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

I have now deployed 2.0.3-tentative on the whole cluster. Result: with nodetool 
repair, the patch for CASSANDRA-6275 doesn't work. Neighbour nodes crash again 
with disk_failure_policy=stop.

ERROR [ValidationExecutor:3] 2013-11-22 18:21:49,591 FileUtils.java (line 417) 
Stopping gossiper
 WARN [ValidationExecutor:3] 2013-11-22 18:21:49,591 StorageService.java (line 
279) Stopping gossip by operator request
ERROR [ValidationExecutor:4] 2013-11-22 18:21:50,361 Validator.java (line 242) 
Failed creating a merkle tree for [repair #923a7360-539a-11e3-8fde-eb1c24a59bb8 
on nieste/evrangesdevice, (-787066926799647148,-773294852829911898]], 
/10.9.9.240 (see log for details)
ERROR [ValidationExecutor:4] 2013-11-22 18:21:50,371 CassandraDaemon.java (line 
187) Exception in thread Thread[ValidationExecutor:4,1,main]
FSWriteError in 
D:\Programme\cassandra\data\nieste\evrangesdevice\snapshots\923a7360-539a-11e3-8fde-eb1c24a59bb8\nieste-evrangesdevice-jb-9-Index.db
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:382)
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:378)
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:416)
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:1801)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:810)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:62)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:397)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.file.FileSystemException: 
D:\Programme\cassandra\data\nieste\evrangesdevice\snapshots\923a7360-539a-11e3-8fde-eb1c24a59bb8\nieste-evrangesdevice-jb-9-Index.db:
 The process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
at java.nio.file.Files.delete(Unknown Source)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
... 11 more
ERROR [ValidationExecutor:4] 2013-11-22 18:21:50,371 FileUtils.java (line 417) 
Stopping gossiper
 WARN [ValidationExecutor:4] 2013-11-22 18:21:50,371 StorageService.java (line 
279) Stopping gossip by operator request
ERROR [ValidationExecutor:2] 2013-11-22 18:21:51,221 FileUtils.java (line 423) 
Stopping RPC server
ERROR [ValidationExecutor:2] 2013-11-22 18:21:51,221 FileUtils.java (line 429) 
Stopping native transport


 Windows 7 data files keept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deletion problem) 
 doesn't help on Windows 7 with Cassandra 2.0.2. Even the 2.1 snapshot is not 
 working. The cause: open file handles seem to be leaked and not closed 
 properly. Windows 7 complains that another process is still using the file 
 (but it's obviously Cassandra). Only a restart of the server lets the files be 
 deleted. After heavy use of (changes to) tables, there are about 24K files in 
 the data folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found that a finalizer fixes the problem, so after GC the 
 files are deleted (not optimal, but working fine). It has now run for 2 days 
 continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
   @Override
   protected void 

[jira] [Created] (CASSANDRA-6396) debian init searches for jdk6 explicitly

2013-11-22 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-6396:
---

 Summary: debian init searches for jdk6 explicitly
 Key: CASSANDRA-6396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6396
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.4


When JAVA_HOME isn't set, the init looks for jdk6 explicitly.  Obviously for 
2.0+ this can cause problems.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6396) debian init searches for jdk6 explicitly

2013-11-22 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6396:


Attachment: 6396.txt

I think the best thing to do is just search the default-jvm path.  If the user 
hasn't set up alternatives correctly, they can fix that.  If they want to use a 
specific non-default jvm, they can override JAVA_HOME in /etc/default/cassandra.

 debian init searches for jdk6 explicitly
 

 Key: CASSANDRA-6396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6396
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.4

 Attachments: 6396.txt


 When JAVA_HOME isn't set, the init looks for jdk6 explicitly.  Obviously for 
 2.0+ this can cause problems.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6396) debian init searches for jdk6 explicitly

2013-11-22 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6396:


 Reviewer: Sylvain Lebresne
Fix Version/s: 1.2.13

 debian init searches for jdk6 explicitly
 

 Key: CASSANDRA-6396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6396
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6396.txt


 When JAVA_HOME isn't set, the init looks for jdk6 explicitly.  Obviously for 
 2.0+ this can cause problems.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2013-11-22 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830161#comment-13830161
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

Currently I have only 4 nodes, but I want to enlarge my cluster with different 
racks (rooms with regular desktop PCs in different buildings in the same area). 
For that, I need to switch to NetworkTopologySnitch as well. Is repair with 
disk_failure_policy=ignore effective in that case? What can I do in my 
production cluster? Avoid repairing, or repair and just ignore the delete 
errors?

 Windows 7 data files keept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deletion problem) 
 doesn't help on Windows 7 with Cassandra 2.0.2. Even the 2.1 snapshot is not 
 working. The cause: open file handles seem to be leaked and not closed 
 properly. Windows 7 complains that another process is still using the file 
 (but it's obviously Cassandra). Only a restart of the server lets the files be 
 deleted. After heavy use of (changes to) tables, there are about 24K files in 
 the data folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found that a finalizer fixes the problem, so after GC the 
 files are deleted (not optimal, but working fine). It has now run for 2 days 
 continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
   @Override
   protected void finalize() throws Throwable {
       deallocate();
       super.finalize();
   }
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Description: 
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}* and see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

Two issues I see with this traceback:

* "No tokens to force removal on" tells me the same thing as the message 
before it: "RemovalStatus: No token removals in process." - so the entire 
traceback is unnecessary output.
* "call 'removetoken' first" - removetoken has been deprecated according to the 
message output by removenode, so the directions are inconsistent.

  was:
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}* and see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
xss =  -ea 
-javaagent:/home/ryan/.ccm/repository/git_cassandra-1.2/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1982M -Xmx1982M 
-Xmn400M -XX:+HeapDumpOnOutOfMemoryError -Xss228k
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 

[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Description: 
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}* and see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
xss =  -ea 
-javaagent:/home/ryan/.ccm/repository/git_cassandra-1.2/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1982M -Xmx1982M 
-Xmn400M -XX:+HeapDumpOnOutOfMemoryError -Xss228k
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

Two issues I see with this traceback:

* "No tokens to force removal on" tells me the same thing as the message 
before it: "RemovalStatus: No token removals in process." - so the entire 
traceback is unnecessary output.
* "call 'removetoken' first" - removetoken has been deprecated according to the 
message output by removenode, so the directions are inconsistent.

  was:
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}* and see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the removenode finishes, 
you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
xss =  -ea 
-javaagent:/home/ryan/.ccm/repository/git_cassandra-1.2/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1982M -Xmx1982M 
-Xmn400M -XX:+HeapDumpOnOutOfMemoryError -Xss228k
RemovalStatus: No token removals 

[jira] [Created] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-6397:
---

 Summary: removenode outputs confusing non-error
 Key: CASSANDRA-6397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6397
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Priority: Minor


*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}* and see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the removenode finishes, 
you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
xss =  -ea 
-javaagent:/home/ryan/.ccm/repository/git_cassandra-1.2/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1982M -Xmx1982M 
-Xmn400M -XX:+HeapDumpOnOutOfMemoryError -Xss228k
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

Two issues I see with this traceback:

* "No tokens to force removal on" tells me the same thing as the message 
before it: "RemovalStatus: No token removals in process." - so the entire 
traceback is unnecessary output.
* "call 'removetoken' first" - removetoken has been deprecated according to the 
message output by removenode, so the directions are inconsistent.
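
A sketch of the friendlier behavior the two bullets suggest: catch the 
"nothing to do" condition and print a one-line status instead of a stack 
trace, with the message pointing at the non-deprecated command. Hypothetical 
code, not the actual nodetool implementation:

```java
public class RemoveNodeForceSketch {
    // Stand-in for StorageService.forceRemoveCompletion()'s precondition.
    static void forceRemoveCompletion(boolean removalInProgress) {
        if (!removalInProgress)
            throw new UnsupportedOperationException(
                "no token removals in process; run 'removenode' first");
        // ... force completion of the in-flight removal ...
    }

    public static void main(String[] args) {
        try {
            forceRemoveCompletion(false);
        } catch (UnsupportedOperationException e) {
            // Surface the condition as a status line, not a traceback.
            System.out.println("RemovalStatus: " + e.getMessage());
        }
    }
}
```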



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Priority: Trivial  (was: Minor)

 removenode outputs confusing non-error
 --

 Key: CASSANDRA-6397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6397
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Priority: Trivial

 *{{nodetool removenode force}}* outputs a slightly confusing error message 
 when there is nothing for it to do.
 * Start a cluster, then kill one of the nodes.
 * Run *{{nodetool removenode}}* on the node you killed.
 * Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
 that it outputs a simple message regarding its status.
 * Run *{{nodetool removenode force}}* again after the removenode finishes; 
 you'll see this message and traceback:
 {code}
 $ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
 xss =  -ea 
 -javaagent:/home/ryan/.ccm/repository/git_cassandra-1.2/lib/jamm-0.2.5.jar 
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1982M -Xmx1982M 
 -Xmn400M -XX:+HeapDumpOnOutOfMemoryError -Xss228k
 RemovalStatus: No token removals in process.
 Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
 to force removal on, call 'removetoken' first
   at 
 org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}
 Two issues I see with this traceback:
 * "No tokens to force removal on" is telling me the same thing that the 
 message before it tells me: "RemovalStatus: No token removals in process.", 
 so the entire traceback is unnecessary to output.
 * "call 'removetoken' first" - removetoken has been deprecated according to 
 the message output by removenode, so there is an inconsistency in the directions.





[jira] [Commented] (CASSANDRA-6396) debian init searches for jdk6 explicitly

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830166#comment-13830166
 ] 

Jonathan Ellis commented on CASSANDRA-6396:
---

+1

 debian init searches for jdk6 explicitly
 

 Key: CASSANDRA-6396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6396
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6396.txt


 When JAVA_HOME isn't set, the init looks for jdk6 explicitly.  Obviously for 
 2.0+ this can cause problems.





[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e0523ea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e0523ea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e0523ea

Branch: refs/heads/cassandra-2.0
Commit: 8e0523ea6bf5755dd7b042030a7725c5c6c10a74
Parents: dbc7978 85668c5
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:02:32 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:02:32 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e0523ea/debian/init
--



[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40598efa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40598efa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40598efa

Branch: refs/heads/trunk
Commit: 40598efa6344333d4d4deee2c1ec3e71bd931066
Parents: 23e7744 8e0523e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:02:57 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:02:57 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e0523ea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e0523ea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e0523ea

Branch: refs/heads/trunk
Commit: 8e0523ea6bf5755dd7b042030a7725c5c6c10a74
Parents: dbc7978 85668c5
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:02:32 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:02:32 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e0523ea/debian/init
--



[3/6] git commit: Don't search jdk6-specific paths in debian init Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396

2013-11-22 Thread brandonwilliams
Don't search jdk6-specific paths in debian init
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85668c5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85668c5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85668c5f

Branch: refs/heads/trunk
Commit: 85668c5f37198077e164ae6cdc8970ec9b334da9
Parents: 92fe8c8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:01:24 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:01:24 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85668c5f/debian/init
--
diff --git a/debian/init b/debian/init
index 99a4d83..f77a641 100644
--- a/debian/init
+++ b/debian/init
@@ -25,7 +25,7 @@ CASSANDRA_HOME=/usr/share/cassandra
 FD_LIMIT=10
 
 # The first existing directory is used for JAVA_HOME if needed.
-JVM_SEARCH_DIRS=/usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun
+JVM_SEARCH_DIRS=/usr/lib/jvm/default-java
 
 [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
 [ -e /etc/cassandra/cassandra.yaml ] || exit 0
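
For context, the init's loop over JVM_SEARCH_DIRS (not part of the hunk above) presumably picks the first directory containing an executable bin/java, roughly as sketched below; the exact loop in debian/init may differ, and the throwaway stand-in for /usr/lib/jvm/default-java is illustrative only:

```shell
#!/bin/sh
# Sketch of JAVA_HOME resolution from JVM_SEARCH_DIRS (assumption: the
# real debian/init logic may differ in detail). A temporary directory
# stands in for /usr/lib/jvm/default-java so the sketch runs anywhere.
fake_jvm="$(mktemp -d)/default-java"
mkdir -p "$fake_jvm/bin"
touch "$fake_jvm/bin/java"
chmod +x "$fake_jvm/bin/java"

JVM_SEARCH_DIRS="/usr/lib/jvm/java-6-openjdk $fake_jvm"
JAVA_HOME=""   # pretend it is unset, the case the bug is about
for jdir in $JVM_SEARCH_DIRS; do
    if [ -x "$jdir/bin/java" ]; then
        JAVA_HOME="$jdir"
        break
    fi
done
echo "JAVA_HOME=$JAVA_HOME"
```

The patch's point survives in the sketch: with the jdk6-specific paths gone, a single distro-managed path (default-java) is searched instead.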



[1/6] git commit: Don't search jdk6-specific paths in debian init Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396

2013-11-22 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 92fe8c896 -> 85668c5f3
  refs/heads/cassandra-2.0 dbc79787b -> 8e0523ea6
  refs/heads/trunk 23e774459 -> 40598efa6


Don't search jdk6-specific paths in debian init
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85668c5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85668c5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85668c5f

Branch: refs/heads/cassandra-1.2
Commit: 85668c5f37198077e164ae6cdc8970ec9b334da9
Parents: 92fe8c8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:01:24 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:01:24 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85668c5f/debian/init
--
diff --git a/debian/init b/debian/init
index 99a4d83..f77a641 100644
--- a/debian/init
+++ b/debian/init
@@ -25,7 +25,7 @@ CASSANDRA_HOME=/usr/share/cassandra
 FD_LIMIT=10
 
 # The first existing directory is used for JAVA_HOME if needed.
-JVM_SEARCH_DIRS=/usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun
+JVM_SEARCH_DIRS=/usr/lib/jvm/default-java
 
 [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
 [ -e /etc/cassandra/cassandra.yaml ] || exit 0



[2/6] git commit: Don't search jdk6-specific paths in debian init Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396

2013-11-22 Thread brandonwilliams
Don't search jdk6-specific paths in debian init
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-6396


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85668c5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85668c5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85668c5f

Branch: refs/heads/cassandra-2.0
Commit: 85668c5f37198077e164ae6cdc8970ec9b334da9
Parents: 92fe8c8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 12:01:24 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 12:01:24 2013 -0600

--
 debian/init | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85668c5f/debian/init
--
diff --git a/debian/init b/debian/init
index 99a4d83..f77a641 100644
--- a/debian/init
+++ b/debian/init
@@ -25,7 +25,7 @@ CASSANDRA_HOME=/usr/share/cassandra
 FD_LIMIT=10
 
 # The first existing directory is used for JAVA_HOME if needed.
-JVM_SEARCH_DIRS=/usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun
+JVM_SEARCH_DIRS=/usr/lib/jvm/default-java
 
 [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
 [ -e /etc/cassandra/cassandra.yaml ] || exit 0



[jira] [Commented] (CASSANDRA-6396) debian init searches for jdk6 explicitly

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830175#comment-13830175
 ] 

Brandon Williams commented on CASSANDRA-6396:
-

Committed.

 debian init searches for jdk6 explicitly
 

 Key: CASSANDRA-6396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6396
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6396.txt


 When JAVA_HOME isn't set, the init looks for jdk6 explicitly.  Obviously for 
 2.0+ this can cause problems.





[jira] [Created] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2013-11-22 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-6398:
---

 Summary: nodetool removenode error 'Endpoint /x.x.x.x generation 
changed while trying to remove it' occurs regularly.
 Key: CASSANDRA-6398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6398
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Priority: Minor


I see this error somewhat regularly when running a *{{nodetool removenode}}* 
command. The command completes successfully, i.e., the node is removed, so I'm 
not sure what this message is telling me.
 
{code}
~/.ccm/test/node1/bin/nodetool -p 7100 removenode 
bff9072e-4bb6-42fa-937c-bb73bcc094bc

Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
generation changed while trying to remove it
at 
org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
at 
org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
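
Pending a real fix, one workaround is to treat the exception as advisory and verify the ring afterwards, since the report says the node is in fact removed. A hypothetical sketch (the `nodetool` function is a stub simulating the reported behavior, not the real binary):

```shell
#!/bin/sh
# Hypothetical remove-and-verify wrapper. The stub `nodetool` raises the
# reported "generation changed" error on removenode, while its status
# output shows the dead node already gone from the ring.
nodetool() {
    case "$1" in
        removenode)
            echo "java.lang.RuntimeException: Endpoint /127.0.0.2 generation changed while trying to remove it" >&2
            return 1 ;;
        status)
            echo "UN 127.0.0.1" ;;
    esac
}

remove_and_verify() {
    host_id="$1"; dead_ip="$2"
    # Ignore the (apparently benign) exception...
    nodetool removenode "$host_id" 2>/dev/null || true
    # ...and trust the ring state instead.
    if nodetool status | grep -q "$dead_ip"; then
        echo "removal failed: $dead_ip still in ring"
        return 1
    fi
    echo "removal verified: $dead_ip gone"
}

remove_and_verify bff9072e-4bb6-42fa-937c-bb73bcc094bc 127.0.0.2
```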





[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Component/s: Tools

 removenode outputs confusing non-error
 --

 Key: CASSANDRA-6397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6397
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Priority: Trivial

 *{{nodetool removenode force}}* outputs a slightly confusing error message 
 when there is nothing for it to do.
 * Start a cluster, then kill one of the nodes.
 * Run *{{nodetool removenode}}* on the node you killed.
 * Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
 that it outputs a simple message regarding its status.
 * Run *{{nodetool removenode force}}* again after the first removenode 
 command finishes; you'll see this message and traceback:
 {code}
 $ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
 RemovalStatus: No token removals in process.
 Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
 to force removal on, call 'removetoken' first
   at 
 org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}
 Two issues I see with this traceback:
 * "No tokens to force removal on" is telling me the same thing that the 
 message before it tells me: "RemovalStatus: No token removals in process.", 
 so the entire traceback is unnecessary to output.
 * "call 'removetoken' first" - removetoken has been deprecated according to 
 the message output by removenode, so there is an inconsistency in the directions.





[jira] [Updated] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6398:


Description: 
I see this error somewhat regularly when running a *{{nodetool removenode}}* 
command. The command completes successfully, i.e., the node is removed, so I'm 
not sure what this message is telling me.
 
{code}
$ nodetool -p 7100 removenode bff9072e-4bb6-42fa-937c-bb73bcc094bc

Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
generation changed while trying to remove it
at 
org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
at 
org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

  was:
I see this error somewhat regularly when running a *{{nodetool removenode}}* 
command. The command completes successfully, i.e., the node is removed, so I'm 
not sure what this message is telling me.
 
{code}
~/.ccm/test/node1/bin/nodetool -p 7100 removenode 
bff9072e-4bb6-42fa-937c-bb73bcc094bc

Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
generation changed while trying to remove it
at 
org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
at 
org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 

[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Description: 
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

Two issues I see with this traceback:

* "No tokens to force removal on" is telling me the same thing that the message 
before it tells me: "RemovalStatus: No token removals in process.", so the 
entire traceback is redundant.
* "call 'removetoken' first" - removetoken has been deprecated according to the 
message output by removenode, so there is an inconsistency in the directions.

  was:
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 

[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6397:


Description: 
*{{nodetool removenode force}}* outputs a slightly confusing error message when 
there is nothing for it to do.

* Start a cluster, then kill one of the nodes.
* Run *{{nodetool removenode}}* on the node you killed.
* Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
that it outputs a simple message regarding its status.
* Run *{{nodetool removenode force}}* again after the first removenode command 
finishes; you'll see this message and traceback:

{code}
$ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
RemovalStatus: No token removals in process.
Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
to force removal on, call 'removetoken' first
at 
org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}

Two issues I see with this traceback:

* "No tokens to force removal on" tells me the same thing as the message 
before it ("RemovalStatus: No token removals in process."), so the 
entire traceback is redundant.
* "call 'removetoken' first" - removetoken has been deprecated according to the 
message output by removenode, so the directions given to the user are 
inconsistent.


[1/2] allocate fixed index summary memory pool and resample cold index summaries to use less memory patch by Tyler Hobbs; reviewed by jbellis for CASSANDRA-5519

2013-11-22 Thread jbellis
Updated Branches:
  refs/heads/trunk 40598efa6 -> dbd1a727b


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dbd1a727/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
new file mode 100644
index 000..aac70ec
--- /dev/null
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
@@ -0,0 +1,327 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.sstable;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import org.junit.Test;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.Util;
+import org.apache.cassandra.db.*;
+import org.apache.cassandra.db.filter.QueryFilter;
+import org.apache.cassandra.metrics.RestorableMeter;
+import org.apache.cassandra.utils.ByteBufferUtil;
+
+import static org.apache.cassandra.io.sstable.Downsampling.BASE_SAMPLING_LEVEL;
+import static org.apache.cassandra.io.sstable.Downsampling.MIN_SAMPLING_LEVEL;
+import static org.apache.cassandra.io.sstable.IndexSummaryManager.DOWNSAMPLE_THESHOLD;
+import static org.apache.cassandra.io.sstable.IndexSummaryManager.UPSAMPLE_THRESHOLD;
+import static org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+public class IndexSummaryManagerTest extends SchemaLoader
+{
+    private static final Logger logger = LoggerFactory.getLogger(IndexSummaryManagerTest.class);
+
+    private static long totalOffHeapSize(List<SSTableReader> sstables)
+    {
+        long total = 0;
+        for (SSTableReader sstable : sstables)
+            total += sstable.getIndexSummaryOffHeapSize();
+
+        return total;
+    }
+
+    private static List<SSTableReader> resetSummaries(List<SSTableReader> sstables, long originalOffHeapSize) throws IOException
+    {
+        for (SSTableReader sstable : sstables)
+            sstable.readMeter = new RestorableMeter(100.0, 100.0);
+
+        sstables = redistributeSummaries(Collections.EMPTY_LIST, sstables, originalOffHeapSize * sstables.size());
+        for (SSTableReader sstable : sstables)
+            assertEquals(BASE_SAMPLING_LEVEL, sstable.getIndexSummarySamplingLevel());
+
+        return sstables;
+    }
+
+    private void validateData(ColumnFamilyStore cfs, int numRows)
+    {
+        for (int i = 0; i < numRows; i++)
+        {
+            DecoratedKey key = Util.dk(String.valueOf(i));
+            QueryFilter filter = QueryFilter.getIdentityFilter(key, cfs.getColumnFamilyName(), System.currentTimeMillis());
+            ColumnFamily row = cfs.getColumnFamily(filter);
+            assertNotNull(row);
+            Column column = row.getColumn(ByteBufferUtil.bytes("column"));
+            assertNotNull(column);
+            assertEquals(100, column.value().array().length);
+        }
+    }
+
+    private Comparator<SSTableReader> hotnessComparator = new Comparator<SSTableReader>()
+    {
+        public int compare(SSTableReader o1, SSTableReader o2)
+        {
+            return Double.compare(o1.readMeter.fifteenMinuteRate(), o2.readMeter.fifteenMinuteRate());
+        }
+    };
+
+    @Test
+    public void testRedistributeSummaries() throws IOException
+    {
+        String ksname = "Keyspace1";
+        String cfname = "StandardLowIndexInterval"; // index interval of 8, no key caching
+        Keyspace keyspace = Keyspace.open(ksname);
+        ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(cfname);
+        cfs.truncateBlocking();
+        cfs.disableAutoCompaction();
+
+        ByteBuffer value = ByteBuffer.wrap(new byte[100]);
+
+        int numSSTables = 4;
+        int numRows = 256;
+        for (int sstable = 0; sstable < numSSTables; sstable++)
+        {
+            for 

[jira] [Updated] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2013-11-22 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6398:
--

 Reviewer: Brandon Williams
Fix Version/s: 2.0.4
 Assignee: Tyler Hobbs

 nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying 
 to remove it' occurs regularly.
 

 Key: CASSANDRA-6398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6398
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.4


 I see this error somewhat regularly when running a *{{nodetool removenode}}* 
 command. The command completes successfully, i.e., the node is removed, so I'm 
 not sure what this message is telling me.
  
 {code}
 $ nodetool -p 7100 removenode bff9072e-4bb6-42fa-937c-bb73bcc094bc
 Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
 generation changed while trying to remove it
   at 
 org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
   at 
 org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830207#comment-13830207
 ] 

Jonathan Ellis commented on CASSANDRA-6356:
---

Could we move the DIGEST component into ValidationMetadata?

Should we add a table of contents to the serialized metadata so we can seek 
to a given sub-component w/o having to scan everything?
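A table of contents could be as simple as a (component id → offset) index written ahead of the component bodies. The following is a minimal sketch of the idea; all names here are hypothetical, not Cassandra's actual serializer:

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch: serialize metadata components behind a small
// table of contents (component id -> byte offset) so a reader can
// seek straight to one component without scanning the others.
public class MetadataTocSketch
{
    public static byte[] serialize(Map<Integer, byte[]> components) throws IOException
    {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(out);

        // TOC: component count, then (componentId, offset) pairs.
        // Offsets are relative to the end of the TOC.
        dos.writeInt(components.size());
        int offset = 0;
        Map<Integer, Integer> offsets = new LinkedHashMap<>();
        for (Map.Entry<Integer, byte[]> e : components.entrySet())
        {
            offsets.put(e.getKey(), offset);
            offset += e.getValue().length;
            body.write(e.getValue());
        }
        for (Map.Entry<Integer, Integer> e : offsets.entrySet())
        {
            dos.writeInt(e.getKey());
            dos.writeInt(e.getValue());
        }
        body.writeTo(dos);
        return out.toByteArray();
    }

    public static byte[] readComponent(byte[] serialized, int componentId) throws IOException
    {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(serialized));
        int count = in.readInt();
        int[] ids = new int[count];
        int[] offs = new int[count];
        for (int i = 0; i < count; i++) { ids[i] = in.readInt(); offs[i] = in.readInt(); }
        int tocEnd = 4 + count * 8; // bytes occupied by the TOC itself
        for (int i = 0; i < count; i++)
        {
            if (ids[i] == componentId)
            {
                int end = (i + 1 < count) ? offs[i + 1] : serialized.length - tocEnd;
                byte[] result = new byte[end - offs[i]];
                System.arraycopy(serialized, tocEnd + offs[i], result, 0, result.length);
                return result;
            }
        }
        return null; // component not present
    }

    public static void main(String[] args) throws IOException
    {
        Map<Integer, byte[]> comps = new LinkedHashMap<>();
        comps.put(1, "validation".getBytes());
        comps.put(2, "compaction".getBytes());
        byte[] blob = serialize(comps);
        System.out.println(new String(readComponent(blob, 2))); // prints "compaction"
    }
}
```

A reader that only wants one sub-component pays for the TOC plus that component, never the whole blob.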

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1


 We started to distinguish what's loaded to the heap, and what's not, from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose to 
 change the serialization format so that a group of stats can be loaded as 
 needed.





[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters

2013-11-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830212#comment-13830212
 ] 

Yuki Morishita commented on CASSANDRA-5906:
---

So far, I have tested HLL++ alone for serialized size and error % with various 
parameters. 
https://docs.google.com/a/datastax.com/spreadsheet/ccc?key=0AsVe14L_ijtkdEhDbk1rTjYwb3ZjdXFlTnNCNnk2cGc#gid=13

We can reduce the size from what was originally posted here (p=16, sp=0) down to 
less than 10k for p=13, sp=25. Using the sparse mode, we can save space for 
smaller numbers of partitions.
I think a relative error of 2% in the estimated partition count is tolerable for 
constructing a bloom filter (though I don't have a formula to prove it :P).
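For reference, plain HyperLogLog's standard error is about 1.04/sqrt(m) with m = 2^p registers, which puts p=13 comfortably inside the 2% budget mentioned above. A quick check (the formula is from the HLL literature, not from Cassandra's code):

```java
// Rough check of the HyperLogLog accuracy/size trade-off discussed above:
// the standard error of plain HLL is about 1.04 / sqrt(m), where m = 2^p
// registers (p is the normal-mode precision parameter).
public class HllErrorSketch
{
    static double standardError(int p)
    {
        return 1.04 / Math.sqrt(1 << p);
    }

    public static void main(String[] args)
    {
        // p=13 -> ~1.15% standard error; p=16 -> ~0.41%
        System.out.printf("p=13 -> ~%.2f%% standard error%n", 100 * standardError(13));
        System.out.printf("p=16 -> ~%.2f%% standard error%n", 100 * standardError(16));
    }
}
```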


 Avoid allocating over-large bloom filters
 -

 Key: CASSANDRA-5906
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5906
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 2.1


 We conservatively estimate the number of partitions post-compaction to be the 
 total number of partitions pre-compaction.  That is, we assume the worst-case 
 scenario of no partition overlap at all.
 This can result in substantial memory wasted in sstables resulting from 
 highly overlapping compactions.





[jira] [Updated] (CASSANDRA-6389) Check first and last key to potentially skip SSTable for reads

2013-11-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6389:
---

Attachment: 6389.patch

Attached file 6389.patch should apply to the 2.0 branch.

 Check first and last key to potentially skip SSTable for reads
 --

 Key: CASSANDRA-6389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6389
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Attachments: 6389.patch


 In {{SSTableReader.getPosition()}}, we use a -1 result from a binary search 
 on the index summary to check if the requested key falls before the start of 
 the sstable.  Instead, we can directly compare the requested key with the 
 {{first}} and {{last}} keys for the sstable, which will allow us to also skip 
 keys that fall after the last key in the sstable.
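The proposed check can be sketched roughly like this (illustrative only; the real code works on DecoratedKeys inside SSTableReader, not raw buffers):

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the CASSANDRA-6389 idea: before doing the
// index-summary binary search, compare the requested key against the
// sstable's first and last keys; anything outside [first, last] lets
// the read skip this sstable entirely.
public class KeyRangeSkipSketch
{
    static boolean mayContain(ByteBuffer first, ByteBuffer last, ByteBuffer key)
    {
        return key.compareTo(first) >= 0 && key.compareTo(last) <= 0;
    }

    public static void main(String[] args)
    {
        ByteBuffer first = ByteBuffer.wrap(new byte[]{ 10 });
        ByteBuffer last  = ByteBuffer.wrap(new byte[]{ 50 });
        System.out.println(mayContain(first, last, ByteBuffer.wrap(new byte[]{ 30 }))); // true: inside range
        System.out.println(mayContain(first, last, ByteBuffer.wrap(new byte[]{ 60 }))); // false: after last key, skip
    }
}
```

The win over the old -1-from-binary-search check is the second case: keys falling *after* the last key can now be rejected too.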





[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830216#comment-13830216
 ] 

Jonathan Ellis commented on CASSANDRA-5906:
---

Why does HLL size spike around 10k elements?

 Avoid allocating over-large bloom filters
 -

 Key: CASSANDRA-5906
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5906
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 2.1


 We conservatively estimate the number of partitions post-compaction to be the 
 total number of partitions pre-compaction.  That is, we assume the worst-case 
 scenario of no partition overlap at all.
 This can result in substantial memory wasted in sstables resulting from 
 highly overlapping compactions.





[jira] [Resolved] (CASSANDRA-6393) Invalid metadata for IN ? queries

2013-11-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6393.
-

Resolution: Not A Problem

Aleksey is right, I'm sorry for the brain fart. I mixed it up while responding 
to the original email, thinking we had it inverted and should have returned 
'in(id)', but no. This is working as designed. 

 Invalid metadata for IN ? queries
 -

 Key: CASSANDRA-6393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6393
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java Driver 2.0.0-rc1
Reporter: Mikhail Mazursky
Assignee: Sylvain Lebresne
Priority: Minor
 Attachments: column_def_debug.png


 I tried to use the following CQL query:
 DELETE FROM table WHERE id IN ?
 using the Java driver like this:
 prepStatement.setList("id", idsAsList);
 but got the following exception:
 {noformat}
  java.lang.IllegalArgumentException: id is not a column defined in this 
 metadata
 at 
 com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
 at 
 com.datastax.driver.core.BoundStatement.setList(BoundStatement.java:840)
 {noformat}
 The debugger shows that Cassandra sends "in(id)" in the metadata. Is this correct?
 See mail thread for more details: 
 https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/U7mlKcoDL5o
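A stand-in illustration of why the lookup fails: the list below mimics what the prepared statement's column metadata reports for an `IN ?` marker. Binding by position avoids depending on the "in(id)" name (the actual behavior of `setList` by name or index is the driver's, assumed here, not verified):

```java
import java.util.*;

// Illustrative stand-in for the driver's column-definition lookup: for
// "DELETE ... WHERE id IN ?" the single bind variable is reported under
// the name "in(id)", not "id", so a lookup by "id" fails.
public class InBindMetadataSketch
{
    public static void main(String[] args)
    {
        // What the prepared statement's metadata reports for the bind markers:
        List<String> boundNames = Arrays.asList("in(id)");

        System.out.println(boundNames.contains("id"));      // false: a lookup by "id" throws in the driver
        System.out.println(boundNames.contains("in(id)"));  // true: the reported name does match
        System.out.println(boundNames.indexOf("in(id)"));   // 0: binding by position also sidesteps the name
    }
}
```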





[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters

2013-11-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830234#comment-13830234
 ] 

Yuki Morishita commented on CASSANDRA-5906:
---

I think it should transition from sparse mode to normal mode before going over 
the maximum number of registers normal mode uses.
I'm reading the code with the paper in one hand...

ping [~cburroughs] or [~abramsm51]...
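A back-of-the-envelope comparison of the two representations; the bytes-per-sparse-entry figure below is an assumption for illustration, not measured from the HLL++ implementation:

```java
// Back-of-the-envelope for the sparse -> normal crossover: normal mode
// always stores m = 2^p registers of 6 bits each, while the sparse
// encoding costs roughly a few bytes per distinct element seen. Once
// the sparse list would outgrow the fixed register array, the structure
// should switch to normal mode rather than keep growing.
public class HllSizeSketch
{
    static int normalModeBytes(int p)
    {
        return (6 * (1 << p)) / 8; // 6-bit registers, densely packed
    }

    public static void main(String[] args)
    {
        int p = 13;
        int bytesPerSparseEntry = 4; // assumed average encoded pair size
        System.out.println(normalModeBytes(p));                       // 6144: fixed cost of normal mode
        System.out.println(normalModeBytes(p) / bytesPerSparseEntry); // ~1536 entries before switching pays off
    }
}
```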

 Avoid allocating over-large bloom filters
 -

 Key: CASSANDRA-5906
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5906
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 2.1


 We conservatively estimate the number of partitions post-compaction to be the 
 total number of partitions pre-compaction.  That is, we assume the worst-case 
 scenario of no partition overlap at all.
 This can result in substantial memory wasted in sstables resulting from 
 highly overlapping compactions.





[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830241#comment-13830241
 ] 

Brandon Williams commented on CASSANDRA-6356:
-

I kind of like that, in a pinch, you can use the DIGEST component by itself 
offline in shell to validate your sstables and/or find corruption if need be.
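The offline check could look roughly like this; it assumes the Digest component holds the hex SHA-1 of the data file, and the file names are illustrative:

```java
import java.nio.file.*;
import java.security.MessageDigest;

// Sketch of the offline validation mentioned above: recompute the SHA-1
// of a data file and compare it with the stored digest component.
public class DigestCheckSketch
{
    static String sha1Hex(Path file) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hash = md.digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : hash)
            sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception
    {
        // Stand-ins for an sstable's Data.db and Digest.sha1 components.
        Path data = Files.createTempFile("ks-cf-ka-1-Data", ".db");
        Files.write(data, "sstable contents".getBytes());
        Path digest = Files.createTempFile("ks-cf-ka-1-Digest", ".sha1");
        Files.write(digest, sha1Hex(data).getBytes());

        String stored = new String(Files.readAllBytes(digest)).trim();
        System.out.println(stored.equals(sha1Hex(data)) ? "OK" : "CORRUPT");
    }
}
```

Any mismatch between the stored and recomputed checksums flags a corrupt data file without starting Cassandra at all.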

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1


 We started to distinguish what's loaded to the heap, and what's not, from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose to 
 change the serialization format so that a group of stats can be loaded as 
 needed.





[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830244#comment-13830244
 ] 

Mikhail Stepura commented on CASSANDRA-6283:


bq. Avoid repairing or repair and just ignore the delete-errors?
That doesn't look like a viable approach. I would try to run repairs with the 
{{-par}} option. If I read the code correctly, snapshots will not be used 
in this case.

{code}
- name: repair [keyspace] [cfnames]
help: |
  Repair one or more column families
 Use -pr to repair only the first range returned by the partitioner.
 Use -par to carry out a parallel repair.
{code}

But to be honest, I really don't know what the other differences are between a 
sequential and a parallel repair. 

 Windows 7 data files kept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Windows 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not working. The 
 cause is that opened file handles seem to be lost and not closed properly. 
 Windows 7 complains that another process is still using the file (but it's 
 obviously Cassandra). Only a restart of the server lets the files be deleted. 
 But after heavy use (changes) of tables, there are about 24K files in the data 
 folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found out that a finalizer fixes the problem. So after GC the 
 files will be deleted (not optimal, but working fine). It has now run for 2 
 days continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code}
 @Override
 protected void finalize() throws Throwable {
 deallocate();
 super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.





[2/6] git commit: Change removetoken refs to removenode

2013-11-22 Thread brandonwilliams
Change removetoken refs to removenode


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da5ff080
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da5ff080
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da5ff080

Branch: refs/heads/cassandra-2.0
Commit: da5ff08055086979ed01c34154670799c87f08c4
Parents: 85668c5
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:22:45 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:22:45 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..da91be7 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -102,7 +102,7 @@ public class BootStrapper
         {
             Token token = StorageService.getPartitioner().getTokenFactory().fromString(tokenString);
             if (metadata.getEndpoint(token) != null)
-                throw new ConfigurationException("Bootstraping to existing token " + tokenString + " is not allowed (decommission/removetoken the old node first).");
+                throw new ConfigurationException("Bootstraping to existing token " + tokenString + " is not allowed (decommission/removenode the old node first).");
             tokens.add(token);
         }
         return tokens;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 545d26b..cec9a7a 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -384,7 +384,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 /**
  * This method will begin removing an existing endpoint from the cluster 
by spoofing its state
- * This should never be called unless this coordinator has had 
'removetoken' invoked
+ * This should never be called unless this coordinator has had 
'removenode' invoked
  *
  * @param endpoint - the endpoint being removed
  * @param hostId - the ID of the host being removed

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index cd98689..250fa62 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1238,7 +1238,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
  *   set if the node is dead and has been removed by its 
REMOVAL_COORDINATOR
  *
  * Note: Any time a node state changes from STATUS_NORMAL, it will not be 
visible to new nodes. So it follows that
- * you should never bootstrap a new node during a removetoken, 
decommission or move.
+ * you should never bootstrap a new node during a removenode, decommission 
or move.
  */
 public void onChange(InetAddress endpoint, ApplicationState state, 
VersionedValue value)
 {
@@ -1590,7 +1590,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 /**
- * Handle notification that a node being actively removed from the ring 
via 'removetoken'
+ * Handle notification that a node being actively removed from the ring 
via 'removenode'
  *
  * @param endpoint node
  * @param pieces either REMOVED_TOKEN (node is gone) or REMOVING_TOKEN 
(replicas need to be restored)
@@ -1601,7 +1601,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 if (endpoint.equals(FBUtilities.getBroadcastAddress()))
 {
-            logger.info("Received removeToken gossip about myself. Is this node rejoining after an explicit removetoken?");
+            logger.info("Received removenode gossip about myself. Is this node rejoining after an 

[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87b39c8a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87b39c8a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87b39c8a

Branch: refs/heads/trunk
Commit: 87b39c8af3477c3b80f124da23b46de350a259e7
Parents: dbd1a72 4fd322f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:23:04 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:23:04 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/87b39c8a/src/java/org/apache/cassandra/gms/Gossiper.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87b39c8a/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 5dabb42,e91ac8c..2dc7ad8
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3046,10 -3046,10 +3046,10 @@@ public class StorageService extends Not
  
  // A leaving endpoint that is dead is already being removed.
  if (tokenMetadata.isLeaving(endpoint))
 -            logger.warn("Node " + endpoint + " is already being removed, continuing removal anyway");
 +            logger.warn("Node {} is already being removed, continuing removal anyway", endpoint);
  
   if (!replicatingNodes.isEmpty())
-              throw new UnsupportedOperationException("This node is already processing a removal. Wait for it to complete, or use 'removetoken force' if this has failed.");
+              throw new UnsupportedOperationException("This node is already processing a removal. Wait for it to complete, or use 'removenode force' if this has failed.");
  
  // Find the endpoints that are going to become responsible for data
  for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())



[3/6] git commit: Change removetoken refs to removenode

2013-11-22 Thread brandonwilliams
Change removetoken refs to removenode


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da5ff080
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da5ff080
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da5ff080

Branch: refs/heads/trunk
Commit: da5ff08055086979ed01c34154670799c87f08c4
Parents: 85668c5
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:22:45 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:22:45 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..da91be7 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -102,7 +102,7 @@ public class BootStrapper
         {
             Token token = StorageService.getPartitioner().getTokenFactory().fromString(tokenString);
             if (metadata.getEndpoint(token) != null)
-                throw new ConfigurationException("Bootstraping to existing token " + tokenString + " is not allowed (decommission/removetoken the old node first).");
+                throw new ConfigurationException("Bootstraping to existing token " + tokenString + " is not allowed (decommission/removenode the old node first).");
             tokens.add(token);
         }
         return tokens;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 545d26b..cec9a7a 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -384,7 +384,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 /**
  * This method will begin removing an existing endpoint from the cluster 
by spoofing its state
- * This should never be called unless this coordinator has had 
'removetoken' invoked
+ * This should never be called unless this coordinator has had 
'removenode' invoked
  *
  * @param endpoint - the endpoint being removed
  * @param hostId - the ID of the host being removed

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index cd98689..250fa62 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1238,7 +1238,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
  *   set if the node is dead and has been removed by its 
REMOVAL_COORDINATOR
  *
  * Note: Any time a node state changes from STATUS_NORMAL, it will not be 
visible to new nodes. So it follows that
- * you should never bootstrap a new node during a removetoken, 
decommission or move.
+ * you should never bootstrap a new node during a removenode, decommission 
or move.
  */
 public void onChange(InetAddress endpoint, ApplicationState state, 
VersionedValue value)
 {
@@ -1590,7 +1590,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 /**
- * Handle notification that a node being actively removed from the ring 
via 'removetoken'
+ * Handle notification that a node being actively removed from the ring 
via 'removenode'
  *
  * @param endpoint node
  * @param pieces either REMOVED_TOKEN (node is gone) or REMOVING_TOKEN 
(replicas need to be restored)
@@ -1601,7 +1601,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 if (endpoint.equals(FBUtilities.getBroadcastAddress()))
 {
-logger.info("Received removeToken gossip about myself. Is this 
node rejoining after an explicit removetoken?");
+logger.info("Received removenode gossip about myself. Is this node 
rejoining after an explicit removenode?");

[1/6] git commit: Change removetoken refs to removenode

2013-11-22 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 85668c5f3 -> da5ff0805
  refs/heads/cassandra-2.0 8e0523ea6 -> 4fd322f53
  refs/heads/trunk dbd1a727b -> 87b39c8af


Change removetoken refs to removenode


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da5ff080
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da5ff080
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da5ff080

Branch: refs/heads/cassandra-1.2
Commit: da5ff08055086979ed01c34154670799c87f08c4
Parents: 85668c5
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:22:45 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:22:45 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/dht/BootStrapper.java
--
diff --git a/src/java/org/apache/cassandra/dht/BootStrapper.java 
b/src/java/org/apache/cassandra/dht/BootStrapper.java
index ff76534..da91be7 100644
--- a/src/java/org/apache/cassandra/dht/BootStrapper.java
+++ b/src/java/org/apache/cassandra/dht/BootStrapper.java
@@ -102,7 +102,7 @@ public class BootStrapper
 {
 Token token = 
StorageService.getPartitioner().getTokenFactory().fromString(tokenString);
 if (metadata.getEndpoint(token) != null)
-throw new ConfigurationException("Bootstraping to existing 
token " + tokenString + " is not allowed (decommission/removetoken the old node 
first).");
+throw new ConfigurationException("Bootstraping to existing 
token " + tokenString + " is not allowed (decommission/removenode the old node 
first).");
 tokens.add(token);
 }
 return tokens;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 545d26b..cec9a7a 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -384,7 +384,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 /**
  * This method will begin removing an existing endpoint from the cluster 
by spoofing its state
- * This should never be called unless this coordinator has had 
'removetoken' invoked
+ * This should never be called unless this coordinator has had 
'removenode' invoked
  *
  * @param endpoint - the endpoint being removed
  * @param hostId - the ID of the host being removed

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da5ff080/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index cd98689..250fa62 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1238,7 +1238,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
  *   set if the node is dead and has been removed by its 
REMOVAL_COORDINATOR
  *
  * Note: Any time a node state changes from STATUS_NORMAL, it will not be 
visible to new nodes. So it follows that
- * you should never bootstrap a new node during a removetoken, 
decommission or move.
+ * you should never bootstrap a new node during a removenode, decommission 
or move.
  */
 public void onChange(InetAddress endpoint, ApplicationState state, 
VersionedValue value)
 {
@@ -1590,7 +1590,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 
 /**
- * Handle notification that a node being actively removed from the ring 
via 'removetoken'
+ * Handle notification that a node being actively removed from the ring 
via 'removenode'
  *
  * @param endpoint node
  * @param pieces either REMOVED_TOKEN (node is gone) or REMOVING_TOKEN 
(replicas need to be restored)
@@ -1601,7 +1601,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 if (endpoint.equals(FBUtilities.getBroadcastAddress()))
 {
-logger.info("Received removeToken gossip about myself. Is this 
node rejoining after an explicit removetoken?");
+logger.info("Received removenode gossip about myself. Is this node 
rejoining after an explicit removenode?");

[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4fd322f5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4fd322f5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4fd322f5

Branch: refs/heads/cassandra-2.0
Commit: 4fd322f53daf37fff9b4a22c41af56f7274ef72c
Parents: 8e0523e da5ff08
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:22:53 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:22:53 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/dht/BootStrapper.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index cb4e6ea,cec9a7a..a8e91ea
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -401,10 -384,10 +401,10 @@@ public class Gossiper implements IFailu
  
  /**
   * This method will begin removing an existing endpoint from the cluster 
by spoofing its state
-  * This should never be called unless this coordinator has had 
'removetoken' invoked
+  * This should never be called unless this coordinator has had 
'removenode' invoked
   *
 - * @param endpoint - the endpoint being removed
 - * @param hostId - the ID of the host being removed
 + * @param endpoint- the endpoint being removed
 + * @param hostId  - the ID of the host being removed
   * @param localHostId - my own host ID for replication coordination
   */
  public void advertiseRemoving(InetAddress endpoint, UUID hostId, UUID 
localHostId)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 50419c4,250fa62..e91ac8c
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3049,13 -3173,13 +3049,13 @@@ public class StorageService extends Not
 logger.warn("Node " + endpoint + " is already being removed, 
continuing removal anyway");
  
  if (!replicatingNodes.isEmpty())
- throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removetoken force' if 
this has failed.");
+ throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removenode force' if 
this has failed.");
  
  // Find the endpoints that are going to become responsible for data
 -for (String table : Schema.instance.getNonSystemTables())
 +for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
  {
  // if the replication factor is 1 the data is lost so we 
shouldn't wait for confirmation
 -if 
(Table.open(table).getReplicationStrategy().getReplicationFactor() == 1)
 +if 
(Keyspace.open(keyspaceName).getReplicationStrategy().getReplicationFactor() == 
1)
  continue;
  
  // get all ranges that change ownership (that is, a node needs



[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-11-22 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4fd322f5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4fd322f5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4fd322f5

Branch: refs/heads/trunk
Commit: 4fd322f53daf37fff9b4a22c41af56f7274ef72c
Parents: 8e0523e da5ff08
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Nov 22 13:22:53 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Nov 22 13:22:53 2013 -0600

--
 src/java/org/apache/cassandra/dht/BootStrapper.java |  2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/dht/BootStrapper.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index cb4e6ea,cec9a7a..a8e91ea
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -401,10 -384,10 +401,10 @@@ public class Gossiper implements IFailu
  
  /**
   * This method will begin removing an existing endpoint from the cluster 
by spoofing its state
-  * This should never be called unless this coordinator has had 
'removetoken' invoked
+  * This should never be called unless this coordinator has had 
'removenode' invoked
   *
 - * @param endpoint - the endpoint being removed
 - * @param hostId - the ID of the host being removed
 + * @param endpoint- the endpoint being removed
 + * @param hostId  - the ID of the host being removed
   * @param localHostId - my own host ID for replication coordination
   */
  public void advertiseRemoving(InetAddress endpoint, UUID hostId, UUID 
localHostId)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fd322f5/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 50419c4,250fa62..e91ac8c
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3049,13 -3173,13 +3049,13 @@@ public class StorageService extends Not
 logger.warn("Node " + endpoint + " is already being removed, 
continuing removal anyway");
  
  if (!replicatingNodes.isEmpty())
- throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removetoken force' if 
this has failed.");
+ throw new UnsupportedOperationException("This node is already 
processing a removal. Wait for it to complete, or use 'removenode force' if 
this has failed.");
  
  // Find the endpoints that are going to become responsible for data
 -for (String table : Schema.instance.getNonSystemTables())
 +for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
  {
  // if the replication factor is 1 the data is lost so we 
shouldn't wait for confirmation
 -if 
(Table.open(table).getReplicationStrategy().getReplicationFactor() == 1)
 +if 
(Keyspace.open(keyspaceName).getReplicationStrategy().getReplicationFactor() == 
1)
  continue;
  
  // get all ranges that change ownership (that is, a node needs



[jira] [Commented] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830249#comment-13830249
 ] 

Brandon Williams commented on CASSANDRA-6397:
-

Changed all removetoken references to removenode in da5ff080550.  The rest 
should be fairly easy in nodetool.

 removenode outputs confusing non-error
 --

 Key: CASSANDRA-6397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6397
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Priority: Trivial
  Labels: lhf
 Fix For: 1.2.13, 2.0.4


 *{{nodetool removenode force}}* outputs a slightly confusing error message 
 when there is nothing for it to do.
 * Start a cluster, then kill one of the nodes.
 * Run *{{nodetool removenode}}* on the node you killed.
 * Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
 that it outputs a simple message regarding its status.
 * Run *{{nodetool removenode force}}* again after the first removenode 
 command finishes; you'll see this message and traceback:
 {code}
 $ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
 RemovalStatus: No token removals in process.
 Exception in thread "main" java.lang.UnsupportedOperationException: No tokens 
 to force removal on, call 'removetoken' first
   at 
 org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}
 Two issues I see with this traceback:
 * "No tokens to force removal on" is telling me the same thing that the 
 message before it tells me: "RemovalStatus: No token removals in process.", 
 so the entire traceback is redundant.
 * "call 'removetoken' first" - removetoken has been deprecated according to 
 the message output by removenode, so there is inconsistency in directions to 
 the user.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6397:


Fix Version/s: 2.0.4
   1.2.13




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6397) removenode outputs confusing non-error

2013-11-22 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6397:


Labels: lhf  (was: )




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830251#comment-13830251
 ] 

Jonathan Ellis commented on CASSANDRA-6356:
---

Good point.

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1


 We started to distinguish what's loaded to heap, and what's not, from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose 
 changing the serialization format so that a group of stats can be loaded as 
 needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830255#comment-13830255
 ] 

Mikhail Stepura commented on CASSANDRA-6283:


[~Andie78] BTW, did you try to run a repair with your finalizer-patch? 
CASSANDRA-6275 fixed one leak and there might be others

 Windows 7 data files kept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Win 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not running. The cause 
 is: opened file handles seem to be lost and not closed properly. Win 7 
 complains that another process is still using the file (but it's obviously 
 Cassandra). Only a restart of the server gets the files deleted. But after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem. So after GC the files will 
 be deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code}
 @Override
 protected void finalize() throws Throwable {
 deallocate();
 super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.
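
A minimal, self-contained sketch of the finalizer-as-safety-net pattern described above. The class and method names below (LeakGuardedReader, deallocate) are illustrative stand-ins, not Cassandra's actual RandomAccessReader; explicit deallocation stays the primary release path, and finalize() only catches instances that were never closed.

```java
public class LeakGuardedReader {
    private boolean open = true;

    // Primary release path: callers should invoke this explicitly.
    public synchronized void deallocate() {
        open = false; // release file handles / native resources here
    }

    public synchronized boolean isOpen() {
        return open;
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            if (open)
                deallocate(); // safety net for leaked instances, runs at GC time
        } finally {
            super.finalize();
        }
    }
}
```

Note that finalizers only run at some GC after the object becomes unreachable, so this trades deterministic cleanup for leak tolerance, which matches the "not optimal, but working" observation above.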



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6172) COPY TO command doesn't escape single quote in collections

2013-11-22 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6172:
---

Fix Version/s: (was: 2.0.3)
   2.0.4

 COPY TO command doesn't escape single quote in collections
 --

 Key: CASSANDRA-6172
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6172
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.0.1, Linux
Reporter: Ivan Mykhailov
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.4


 {code}
 CREATE TABLE test (key text PRIMARY KEY, testcollection set<text>);
 INSERT INTO test (key, testcollection ) VALUES ( 'test', {'foo''bar'});
 COPY test TO '/tmp/test.csv';
 COPY test FROM '/tmp/test.csv';
 Bad Request: line 1:73 mismatched character 'EOF' expecting '''
 Aborting import at record #0 (line 1). Previously-inserted values still 
 present.
 {code}
 Content of generated '/tmp/test.csv':
 {code}
 test,{'foo'bar'}
 {code}
 Unfortunately, I didn't find a workaround with any combination of COPY options.
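
The underlying rule the exporter is missing: a string written as a CQL literal must have any embedded single quote doubled. A hedged sketch of that quoting step (cqlsh itself is Python; this helper is purely illustrative):

```java
public final class CqlQuote {
    // Wrap a raw string as a CQL string literal, doubling embedded single quotes.
    public static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }
}
```

Applied to the example above, the value foo'bar should round-trip as 'foo''bar' in the CSV, which the failing COPY TO output does not produce.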



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6309) Pig CqlStorage generates ERROR 1108: Duplicate schema alias

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830285#comment-13830285
 ] 

Brandon Williams commented on CASSANDRA-6309:
-

Patch fixes the duplicate schema problem, but the thrift test fails.

 Pig CqlStorage generates  ERROR 1108: Duplicate schema alias
 

 Key: CASSANDRA-6309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6309
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Thunder Stumpges
Assignee: Alex Liu
 Attachments: 6309-2.0.txt, LOCAL_ONE-write-for-all-strategies.txt


 In Pig after loading a simple CQL3 table from Cassandra 2.0.1, and dumping 
 contents, I receive:
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
  cm = load 'cql://thunder_test/cassandra_messages' USING CqlStorage;
  dump cm
 ERROR org.apache.pig.tools.grunt.Grunt - 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to 
 open iterator for alias cm
 ...
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
 at 
 org.apache.pig.newplan.logical.visitor.SchemaAliasVisitor.validate(SchemaAliasVisitor.java:75)
 running 'describe cm' gives:
 cm: {message_id: chararray,author: chararray,author: chararray,body: 
 chararray,message_id: chararray}
 The original table schema in Cassandra is:
 CREATE TABLE cassandra_messages (
   message_id text,
   author text,
   body text,
   PRIMARY KEY (message_id, author)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='null' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 it appears that the code in CqlStorage.getColumnMetadata at ~line 478 takes 
 the key columns (in my case, message_id and author) and appends the 
 columns from getColumnMeta (which has all three columns). Thus the key 
 columns are duplicated.
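A toy reproduction of that append (made-up lists, not the actual CqlStorage code) shows how the key columns end up twice:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the bug described above: the schema starts with the key columns
// and then appends the full column list, which repeats message_id and author.
public class DuplicateAliasDemo {
    public static void main(String[] args) {
        List<String> keyColumns = List.of("message_id", "author");
        List<String> allColumns = List.of("message_id", "author", "body");
        List<String> schema = new ArrayList<>(keyColumns);
        schema.addAll(allColumns); // the faulty append
        System.out.println(schema); // prints [message_id, author, message_id, author, body]
    }
}
```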





[jira] [Commented] (CASSANDRA-5906) Avoid allocating over-large bloom filters

2013-11-22 Thread Matt Abrams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830317#comment-13830317
 ] 

Matt Abrams commented on CASSANDRA-5906:


When SP > 0 the algorithm uses a variant of a linear counter to get very 
accurate counts at small cardinality.  At some threshold the algorithm switches 
from a linear counter to HLL.  Linear counters grow in size as a function of 
the number of inputs, whereas HLL's size is a function of the desired error rate.  
We could (should?) tune the threshold so that the conversion happens earlier.  
Currently the threshold is equal to 2^p * 0.75.
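As a sketch of that arithmetic (assuming p is the HLL precision parameter; illustrative only, not the actual counter implementation):

```java
// Sketch of the switchover arithmetic described above (assumes p is the HLL
// precision parameter; illustrative only, not the actual implementation).
public class ThresholdSketch {
    static long switchThreshold(int p) {
        // linear counter -> HLL conversion point: 2^p * 0.75
        return (long) (Math.pow(2, p) * 0.75);
    }

    public static void main(String[] args) {
        // For p = 14 (a common precision), the switch happens at 12288 entries.
        System.out.println(switchThreshold(14)); // prints 12288
    }
}
```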


 Avoid allocating over-large bloom filters
 -

 Key: CASSANDRA-5906
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5906
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
 Fix For: 2.1


 We conservatively estimate the number of partitions post-compaction to be the 
 total number of partitions pre-compaction.  That is, we assume the worst-case 
 scenario of no partition overlap at all.
 This can result in substantial memory wasted in sstables resulting from 
 highly overlapping compactions.





[jira] [Created] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Peter Halliday (JIRA)
Peter Halliday created CASSANDRA-6399:
-

 Summary: debian init script removes PID despite the return status
 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday


If there's an error in running 'service cassandra stop', it can return a 
non-successful code, but do_stop() removes the PID file anyway. 'service 
cassandra status' then shows that Cassandra is stopped, even though it's still 
running in the process list.





[jira] [Updated] (CASSANDRA-6172) COPY TO command doesn't escape single quote in collections

2013-11-22 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6172:
---

Attachment: CASSANDRA-2.0-6172.patch

Patch: keep single quotes in a text value

 COPY TO command doesn't escape single quote in collections
 --

 Key: CASSANDRA-6172
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6172
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.0.1, Linux
Reporter: Ivan Mykhailov
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.4

 Attachments: CASSANDRA-2.0-6172.patch


 {code}
 CREATE TABLE test (key text PRIMARY KEY, testcollection set<text>);
 INSERT INTO test (key, testcollection ) VALUES ( 'test', {'foo''bar'});
 COPY test TO '/tmp/test.csv';
 COPY test FROM '/tmp/test.csv';
 Bad Request: line 1:73 mismatched character '<EOF>' expecting '''
 Aborting import at record #0 (line 1). Previously-inserted values still 
 present.
 {code}
 Content of generated '/tmp/test.csv':
 {code}
 test,{'foo'bar'}
 {code}
 Unfortunately, I didn't find a workaround with any combination of COPY options.
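For reference, CQL escapes a single quote inside a string literal by doubling it, so a correctly escaped export of the value foo'bar would read {'foo''bar'}. A minimal sketch of that quoting rule (illustrative only, not the cqlsh COPY code):

```java
// Sketch of CQL string-literal quoting: wrap in single quotes and double any
// embedded single quote.
public class QuoteEscapeDemo {
    static String cqlQuote(String s) {
        return "'" + s.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        System.out.println(cqlQuote("foo'bar")); // prints 'foo''bar'
    }
}
```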





[jira] [Commented] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Peter Halliday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830348#comment-13830348
 ] 

Peter Halliday commented on CASSANDRA-6399:
---

Also, with the current command line, start-stop-daemon doesn't block until the 
process has stopped, so it returns success as soon as the signal is sent. If 
the signal were ignored for some reason, this could be misread as a success. I 
suggest adding the --retry flag, which can take a timeout as well if desired.

 debian init script removes PID despite the return status
 

 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday

 If there's an error in running service cassandra stop it can return a 
 non-successful code, but the do_stop() removes the PID file anyway.  This 
 shows then via service cassandra status, that Cassandra is stopped, even 
 though it's still running in the process list.





[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2013-11-22 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830352#comment-13830352
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

Not yet. I've just read some of the Cassandra source and I'm not familiar with 
Cassandra at all. The finalizer patch is only an emergency measure and doesn't 
fix the source of the problem (lost file handles).

 Windows 7 data files kept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Win 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not running. The 
 cause: opened file handles seem to be lost and not closed properly. Win 7 
 complains that another process is still using the file (but it's obviously 
 Cassandra). Only a restart of the server lets the files be deleted. And after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem, so after GC the files are 
 deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
   @Override
   protected void finalize() throws Throwable {
       deallocate();
       super.finalize();
   }
 Can somebody test / develop / patch it? Thx.





[jira] [Commented] (CASSANDRA-6389) Check first and last key to potentially skip SSTable for reads

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830351#comment-13830351
 ] 

Jonathan Ellis commented on CASSANDRA-6389:
---

On second look, this isn't as critical as I thought since intervaltree should 
keep the query-set to tables that satisfy the condition anyway.

If that's the case, maybe we can just change this to an assert to make sure.

 Check first and last key to potentially skip SSTable for reads
 --

 Key: CASSANDRA-6389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6389
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Attachments: 6389.patch


 In {{SSTableReader.getPosition()}}, we use a -1 result from a binary search 
 on the index summary to check if the requested key falls before the start of 
 the sstable.  Instead, we can directly compare the requested key with the 
 {{first}} and {{last}} keys for the sstable, which will allow us to also skip 
 keys that fall after the last key in the sstable.
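The check described above can be sketched like this (assumed shape, using plain strings for keys; the real code compares the sstable's first/last DecoratedKeys):

```java
// Sketch: an sstable whose keys span [first, last] can be skipped for a read
// when the requested key sorts before first or after last.
public class KeyRangeSkip {
    static boolean mayContain(String first, String last, String key) {
        return key.compareTo(first) >= 0 && key.compareTo(last) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(mayContain("b", "m", "a")); // prints false (before first)
        System.out.println(mayContain("b", "m", "z")); // prints false (after last)
        System.out.println(mayContain("b", "m", "f")); // prints true
    }
}
```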





[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2013-11-22 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830361#comment-13830361
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

Btw, I read somewhere that in Java it is not considered good style to write 
finalizers (but in C++ destructors are common, for example in resource 
objects, which always close their resources automatically). What's your 
opinion on finalizers in Java?
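For context, the idiomatic Java alternative for deterministic cleanup is AutoCloseable with try-with-resources rather than a finalizer; a minimal illustration (the Resource class below is a made-up stand-in, not Cassandra code):

```java
// Illustrative only: deterministic cleanup via try-with-resources instead of a
// finalizer. Resource is a hypothetical stand-in for something like RandomAccessReader.
class Resource implements AutoCloseable {
    private boolean open = true;

    @Override
    public void close() { open = false; } // deallocate() would go here

    boolean isOpen() { return open; }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        Resource leaked;
        try (Resource r = new Resource()) {
            leaked = r;
        } // close() runs here deterministically, even on exceptions
        System.out.println(leaked.isOpen()); // prints false
    }
}
```

Unlike a finalizer, cleanup does not wait for a GC cycle, which is exactly the property the file-handle problem above needs.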

 Windows 7 data files kept open / can't be deleted after compaction.
 

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Priority: Critical
  Labels: newbie, patch, test
 Fix For: 2.0.3, 2.1

 Attachments: screenshot-1.jpg, system.log


 Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Win 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not running. The 
 cause: opened file handles seem to be lost and not closed properly. Win 7 
 complains that another process is still using the file (but it's obviously 
 Cassandra). Only a restart of the server lets the files be deleted. And after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem, so after GC the files are 
 deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
   @Override
   protected void finalize() throws Throwable {
       deallocate();
       super.finalize();
   }
 Can somebody test / develop / patch it? Thx.





[jira] [Updated] (CASSANDRA-6309) Pig CqlStorage generates ERROR 1108: Duplicate schema alias

2013-11-22 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6309:


Attachment: LOCAL_ONE-write-for-all-strategies-v2.txt
6309-v2-2.0-branch.txt

 Pig CqlStorage generates  ERROR 1108: Duplicate schema alias
 

 Key: CASSANDRA-6309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6309
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Thunder Stumpges
Assignee: Alex Liu
 Attachments: 6309-2.0.txt, 6309-v2-2.0-branch.txt, 
 LOCAL_ONE-write-for-all-strategies-v2.txt, 
 LOCAL_ONE-write-for-all-strategies.txt


 In Pig after loading a simple CQL3 table from Cassandra 2.0.1, and dumping 
 contents, I receive:
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
  cm = load 'cql://thunder_test/cassandra_messages' USING CqlStorage;
  dump cm
 ERROR org.apache.pig.tools.grunt.Grunt - 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to 
 open iterator for alias cm
 ...
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
 at 
 org.apache.pig.newplan.logical.visitor.SchemaAliasVisitor.validate(SchemaAliasVisitor.java:75)
 running 'describe cm' gives:
 cm: {message_id: chararray,author: chararray,author: chararray,body: 
 chararray,message_id: chararray}
 The original table schema in Cassandra is:
 CREATE TABLE cassandra_messages (
   message_id text,
   author text,
   body text,
   PRIMARY KEY (message_id, author)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='null' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 it appears that the code in CqlStorage.getColumnMetadata at ~line 478 takes 
 the key columns (in my case, message_id and author) and appends the 
 columns from getColumnMeta (which has all three columns). Thus the key 
 columns are duplicated.





[jira] [Commented] (CASSANDRA-6309) Pig CqlStorage generates ERROR 1108: Duplicate schema alias

2013-11-22 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830365#comment-13830365
 ] 

Alex Liu commented on CASSANDRA-6309:
-

V2 patch is attached; it fixes the unit tests.

 Pig CqlStorage generates  ERROR 1108: Duplicate schema alias
 

 Key: CASSANDRA-6309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6309
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Thunder Stumpges
Assignee: Alex Liu
 Attachments: 6309-2.0.txt, 6309-v2-2.0-branch.txt, 
 LOCAL_ONE-write-for-all-strategies-v2.txt, 
 LOCAL_ONE-write-for-all-strategies.txt


 In Pig after loading a simple CQL3 table from Cassandra 2.0.1, and dumping 
 contents, I receive:
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
  cm = load 'cql://thunder_test/cassandra_messages' USING CqlStorage;
  dump cm
 ERROR org.apache.pig.tools.grunt.Grunt - 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to 
 open iterator for alias cm
 ...
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
 at 
 org.apache.pig.newplan.logical.visitor.SchemaAliasVisitor.validate(SchemaAliasVisitor.java:75)
 running 'describe cm' gives:
 cm: {message_id: chararray,author: chararray,author: chararray,body: 
 chararray,message_id: chararray}
 The original table schema in Cassandra is:
 CREATE TABLE cassandra_messages (
   message_id text,
   author text,
   body text,
   PRIMARY KEY (message_id, author)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='null' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 it appears that the code in CqlStorage.getColumnMetadata at ~line 478 takes 
 the key columns (in my case, message_id and author) and appends the 
 columns from getColumnMeta (which has all three columns). Thus the key 
 columns are duplicated.





[jira] [Created] (CASSANDRA-6400) Update unit tests to use latest Partitioner

2013-11-22 Thread Alex Liu (JIRA)
Alex Liu created CASSANDRA-6400:
---

 Summary: Update unit tests to use latest Partitioner
 Key: CASSANDRA-6400
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6400
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Alex Liu
Priority: Minor


test/conf/cassandra.yaml uses the outdated ByteOrderedPartitioner; we should 
update it to Murmur3Partitioner.





[jira] [Commented] (CASSANDRA-6309) Pig CqlStorage generates ERROR 1108: Duplicate schema alias

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830372#comment-13830372
 ] 

Brandon Williams commented on CASSANDRA-6309:
-

v2 removes all the license headers from the new tests; can you rebase it?

 Pig CqlStorage generates  ERROR 1108: Duplicate schema alias
 

 Key: CASSANDRA-6309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6309
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Thunder Stumpges
Assignee: Alex Liu
 Attachments: 6309-2.0.txt, 6309-v2-2.0-branch.txt, 
 LOCAL_ONE-write-for-all-strategies-v2.txt, 
 LOCAL_ONE-write-for-all-strategies.txt


 In Pig after loading a simple CQL3 table from Cassandra 2.0.1, and dumping 
 contents, I receive:
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
  cm = load 'cql://thunder_test/cassandra_messages' USING CqlStorage;
  dump cm
 ERROR org.apache.pig.tools.grunt.Grunt - 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to 
 open iterator for alias cm
 ...
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
 at 
 org.apache.pig.newplan.logical.visitor.SchemaAliasVisitor.validate(SchemaAliasVisitor.java:75)
 running 'describe cm' gives:
 cm: {message_id: chararray,author: chararray,author: chararray,body: 
 chararray,message_id: chararray}
 The original table schema in Cassandra is:
 CREATE TABLE cassandra_messages (
   message_id text,
   author text,
   body text,
   PRIMARY KEY (message_id, author)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='null' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 it appears that the code in CqlStorage.getColumnMetadata at ~line 478 takes 
 the key columns (in my case, message_id and author) and appends the 
 columns from getColumnMeta (which has all three columns). Thus the key 
 columns are duplicated.





[jira] [Commented] (CASSANDRA-4375) FD incorrectly using RPC timeout to ignore gossip heartbeats

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830374#comment-13830374
 ] 

Brandon Williams commented on CASSANDRA-4375:
-

10s seemed 'most compatible', but I think you're right that RING_DELAY is more 
semantically correct.  That said, I tend to think we should just set it to 30s 
instead of ring delay itself, since people often inflate ring delay for various 
things that might not translate well to this.

 FD incorrectly using RPC timeout to ignore gossip heartbeats
 

 Key: CASSANDRA-4375
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4375
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Brandon Williams
  Labels: gossip
 Fix For: 1.2.13

 Attachments: 4375.txt


 Short version: You can't run a cluster with short RPC timeouts because nodes 
 just constantly flap up/down.
 Long version:
 CASSANDRA-3273 tried to fix a problem resulting from the way the failure 
 detector works, but did so by introducing a much more severe bug: with low RPC 
 timeouts that are lower than the typical gossip propagation time, a cluster 
 will just constantly have all nodes flapping other nodes up and down.
 The cause is this:
 {code}
 +// in the event of a long partition, never record an interval longer than the rpc timeout,
 +// since if a host is regularly experiencing connectivity problems lasting this long we'd
 +// rather mark it down quickly instead of adapting
 +private final double MAX_INTERVAL_IN_MS = DatabaseDescriptor.getRpcTimeout();
 {code}
 And then:
 {code}
 -tLast_ = value;
 -arrivalIntervals_.add(interArrivalTime);
 +if (interArrivalTime <= MAX_INTERVAL_IN_MS)
 +    arrivalIntervals_.add(interArrivalTime);
 +else
 +    logger_.debug("Ignoring interval time of {}", interArrivalTime);
 {code}
 Using the RPC timeout to ignore unreasonably long intervals is not correct, 
 as the RPC timeout is completely orthogonal to gossip propagation delay (see 
 CASSANDRA-3927 for a quick description of how the FD works).
 In practice, the propagation delay ends up being in the 0-3 second range on a 
 cluster with good local latency. With a low RPC timeout of say 200 ms, very 
 few heartbeat updates come in fast enough that it doesn't get ignored by the 
 failure detector. This in turn means that the FD records a completely skewed 
 average heartbeat interval, which in turn means that nodes almost always get 
 flapped on interpret() unless they happen to *just* have had their heartbeat 
 updated. Then they flap back up whenever the next heartbeat comes in (since 
 it gets brought up immediately).
 In our build, we are replacing the FD with an implementation that simply uses 
 a fixed {{N}} second time to convict, because this is just one of many ways 
 in which the current FD hurts, while we still haven't found a way it actually 
 helps relative to the trivial fixed-second conviction policy.
 For upstream, assuming people won't agree on changing it to a fixed timeout, 
 I suggest, at minimum, never using a value lower than something like 10 
 seconds or something, when determining whether to ignore. Slightly better is 
 to make it a config option.
 (I should note that if propagation delays are significantly off from the 
 expected level, other things than the FD already break - such as the whole 
 concept of {{RING_DELAY}}, which assumes the propagation time is roughly 
 constant with e.g. cluster size.)
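The skew described in the report can be illustrated with a toy calculation (made-up interval values; not the actual FailureDetector code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the skew described above: if intervals longer than a low
// cutoff (e.g. a 200 ms rpc timeout) are discarded, the recorded mean
// inter-arrival time stays artificially small, so real gaps look anomalous.
public class FdSkewDemo {
    static double skewedMean(double cutoffMs, double[] observed) {
        List<Double> kept = new ArrayList<>();
        for (double t : observed)
            if (t <= cutoffMs) kept.add(t); // the filtering done by the patch
        return kept.stream().mapToDouble(d -> d).average().orElse(0);
    }

    public static void main(String[] args) {
        // Plausible gossip inter-arrival times on a healthy cluster (ms).
        double[] observed = {150, 900, 120, 2500};
        System.out.println(skewedMean(200, observed)); // prints 135.0
    }
}
```

Only the 150 ms and 120 ms samples survive the cutoff, so the "average" interval the FD learns is far below what the node actually exhibits.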





[jira] [Comment Edited] (CASSANDRA-4375) FD incorrectly using RPC timeout to ignore gossip heartbeats

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830374#comment-13830374
 ] 

Brandon Williams edited comment on CASSANDRA-4375 at 11/22/13 10:41 PM:


10s seemed 'most compatible', but I think you're right that RING_DELAY is more 
semantically correct.  That said, I tend to think we should just set it to 30s 
instead of ring delay itself, since people often inflate ring delay for various 
things that might not translate well to this, and since we already seed the 
initial value at 30s.


was (Author: brandon.williams):
10s seemed 'most compatible' but I think you're right, that RING_DELAY is more 
semantically correct.  That said, I tend to think we should just set it to 30s 
instead of ring delay itself, since people often inflate ring delay for various 
things that might not translate well to this.

 FD incorrectly using RPC timeout to ignore gossip heartbeats
 

 Key: CASSANDRA-4375
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4375
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Brandon Williams
  Labels: gossip
 Fix For: 1.2.13

 Attachments: 4375.txt


 Short version: You can't run a cluster with short RPC timeouts because nodes 
 just constantly flap up/down.
 Long version:
 CASSANDRA-3273 tried to fix a problem resulting from the way the failure 
 detector works, but did so by introducing a much more severe bug: with low RPC 
 timeouts that are lower than the typical gossip propagation time, a cluster 
 will just constantly have all nodes flapping other nodes up and down.
 The cause is this:
 {code}
 +// in the event of a long partition, never record an interval longer than the rpc timeout,
 +// since if a host is regularly experiencing connectivity problems lasting this long we'd
 +// rather mark it down quickly instead of adapting
 +private final double MAX_INTERVAL_IN_MS = DatabaseDescriptor.getRpcTimeout();
 {code}
 And then:
 {code}
 -tLast_ = value;
 -arrivalIntervals_.add(interArrivalTime);
 +if (interArrivalTime <= MAX_INTERVAL_IN_MS)
 +    arrivalIntervals_.add(interArrivalTime);
 +else
 +    logger_.debug("Ignoring interval time of {}", interArrivalTime);
 {code}
 Using the RPC timeout to ignore unreasonably long intervals is not correct, 
 as the RPC timeout is completely orthogonal to gossip propagation delay (see 
 CASSANDRA-3927 for a quick description of how the FD works).
 In practice, the propagation delay ends up being in the 0-3 second range on a 
 cluster with good local latency. With a low RPC timeout of say 200 ms, very 
 few heartbeat updates come in fast enough that it doesn't get ignored by the 
 failure detector. This in turn means that the FD records a completely skewed 
 average heartbeat interval, which in turn means that nodes almost always get 
 flapped on interpret() unless they happen to *just* have had their heartbeat 
 updated. Then they flap back up whenever the next heartbeat comes in (since 
 it gets brought up immediately).
 In our build, we are replacing the FD with an implementation that simply uses 
 a fixed {{N}} second time to convict, because this is just one of many ways 
 in which the current FD hurts, while we still haven't found a way it actually 
 helps relative to the trivial fixed-second conviction policy.
 For upstream, assuming people won't agree on changing it to a fixed timeout, 
 I suggest, at minimum, never using a value lower than something like 10 
 seconds or something, when determining whether to ignore. Slightly better is 
 to make it a config option.
 (I should note that if propagation delays are significantly off from the 
 expected level, other things than the FD already break - such as the whole 
 concept of {{RING_DELAY}}, which assumes the propagation time is roughly 
 constant with e.g. cluster size.)





[jira] [Issue Comment Deleted] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Peter Halliday (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Halliday updated CASSANDRA-6399:
--

Comment: was deleted

(was: Also, start-stop-daemon doesn't block until it's stopped.  With the 
current command line.  So it returns success as soon as the signal was sent.  
This could lead to situations if the signal was ignored for some reason that it 
was read as a success.  I suggest adding the --retry flag, which can have a 
timeout added as well if desired.)

 debian init script removes PID despite the return status
 

 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday

 If there's an error in running service cassandra stop it can return a 
 non-successful code, but the do_stop() removes the PID file anyway.  This 
 shows then via service cassandra status, that Cassandra is stopped, even 
 though it's still running in the process list.





[jira] [Commented] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Peter Halliday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830382#comment-13830382
 ] 

Peter Halliday commented on CASSANDRA-6399:
---

Also, on debian, there's no /var/run/cassandra directory created, so the PID 
file doesn't get created:

{noformat}
ubuntu@domU-12-31-39-02-85-9C:~$ ps aux|grep cassandra
ubuntu@domU-12-31-39-02-85-9C:~$ sudo service cassandra start
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
-Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
ubuntu@domU-12-31-39-02-85-9C:~$ sudo service cassandra status
xss =  -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar 
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1862M -Xmx1862M 
-Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss256k
 * Cassandra is not running
ubuntu@domU-12-31-39-02-85-9C:~$ 
{noformat}

 debian init script removes PID despite the return status
 

 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday

 If there's an error in running service cassandra stop it can return a 
 non-successful code, but the do_stop() removes the PID file anyway.  This 
 shows then via service cassandra status, that Cassandra is stopped, even 
 though it's still running in the process list.





[jira] [Commented] (CASSANDRA-4375) FD incorrectly using RPC timeout to ignore gossip heartbeats

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830386#comment-13830386
 ] 

Jonathan Ellis commented on CASSANDRA-4375:
---

Allowing people to tune it could be a feature. :)

 FD incorrectly using RPC timeout to ignore gossip heartbeats
 

 Key: CASSANDRA-4375
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4375
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Brandon Williams
  Labels: gossip
 Fix For: 1.2.13

 Attachments: 4375.txt


 Short version: You can't run a cluster with short RPC timeouts because nodes 
 just constantly flap up/down.
 Long version:
 CASSANDRA-3273 tried to fix a problem resulting from the way the failure 
 detector works, but did so by introducing a much more severe bug: with low RPC 
 timeouts that are lower than the typical gossip propagation time, a cluster 
 will just constantly have all nodes flapping other nodes up and down.
 The cause is this:
 {code}
 +// in the event of a long partition, never record an interval longer than the rpc timeout,
 +// since if a host is regularly experiencing connectivity problems 
lasting this long we'd
 +// rather mark it down quickly instead of adapting
 +private final double MAX_INTERVAL_IN_MS = DatabaseDescriptor.getRpcTimeout();
 {code}
 And then:
 {code}
 -tLast_ = value;
 -arrivalIntervals_.add(interArrivalTime);
 +if (interArrivalTime <= MAX_INTERVAL_IN_MS)
 +    arrivalIntervals_.add(interArrivalTime);
 +else
 +    logger_.debug("Ignoring interval time of {}", interArrivalTime);
 {code}
 Using the RPC timeout to ignore unreasonably long intervals is not correct, 
 as the RPC timeout is completely orthogonal to gossip propagation delay (see 
 CASSANDRA-3927 for a quick description of how the FD works).
 In practice, the propagation delay ends up being in the 0-3 second range on a 
 cluster with good local latency. With a low RPC timeout of, say, 200 ms, very 
 few heartbeat updates arrive fast enough to avoid being ignored by the 
 failure detector. This in turn means that the FD records a completely skewed 
 average heartbeat interval, which in turn means that nodes almost always get 
 flapped on interpret() unless they happen to have *just* had their heartbeat 
 updated. Then they flap back up whenever the next heartbeat comes in (since 
 a node is brought up immediately).
 In our build, we are replacing the FD with an implementation that simply uses 
 a fixed {{N}} second time to convict, because this is just one of many ways 
 in which the current FD hurts, while we still haven't found a way it actually 
 helps relative to the trivial fixed-second conviction policy.
 For upstream, assuming people won't agree on changing it to a fixed timeout, 
 I suggest, at minimum, never using a value lower than something like 10 
 seconds when determining whether to ignore. Slightly better would be 
 to make it a config option.
 (I should note that if propagation delays are significantly off from the 
 expected level, things other than the FD already break - such as the whole 
 concept of {{RING_DELAY}}, which assumes the propagation time is roughly 
 constant with e.g. cluster size.)
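The skew described above is easy to see numerically. The following stand-alone sketch (class and method names are invented for illustration; this is not the Cassandra FailureDetector source) filters inter-arrival samples against a 200 ms cap the way the quoted patch does:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: shows how capping recorded intervals at the
// RPC timeout biases the mean far below the real gossip propagation delay.
public class IntervalFilterSketch
{
    static final double MAX_INTERVAL_IN_MS = 200; // stands in for the RPC timeout

    public static double recordedMean(double[] interArrivalMs)
    {
        List<Double> kept = new ArrayList<>();
        for (double interval : interArrivalMs)
            if (interval <= MAX_INTERVAL_IN_MS) // samples above the cap are discarded
                kept.add(interval);
        double sum = 0;
        for (double d : kept)
            sum += d;
        return kept.isEmpty() ? Double.NaN : sum / kept.size();
    }

    public static void main(String[] args)
    {
        // Typical gossip inter-arrival times: mostly 0.5-3 s, a few fast ones.
        double[] samples = { 500, 1500, 3000, 150, 2500, 100 };
        // Only the 150 ms and 100 ms samples survive, so the recorded mean is
        // 125 ms and a perfectly normal 1 s heartbeat gap looks like a failure.
        System.out.println(recordedMean(samples)); // prints 125.0
    }
}
```

With a mean this skewed, interpret() convicts any node whose latest heartbeat is more than a few hundred milliseconds old, which matches the flapping behaviour described in the report.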



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-4375) FD incorrectly using RPC timeout to ignore gossip heartbeats

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830389#comment-13830389
 ] 

Brandon Williams commented on CASSANDRA-4375:
-

Shouldn't they be able to tune what we seed the FD with then too? ;)



[jira] [Commented] (CASSANDRA-5493) Confusing output of CommandDroppedTasks

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830391#comment-13830391
 ] 

Mikhail Stepura commented on CASSANDRA-5493:


[~ondrej.cernos] Do you still encounter this with the latest 1.2.x?

 Confusing output of CommandDroppedTasks
 ---

 Key: CASSANDRA-5493
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5493
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Ondřej Černoš
Assignee: Mikhail Stepura
Priority: Minor

 We have 2 DCs, 3 nodes in each, using EC2 support. We are debugging nodetool 
 repair problems (roughly 1 out of 2 attempts just freezes). We looked into 
 the MessagingServiceBean to see what is going on using jmxterm. See the 
 following:
 {noformat}
 #mbean = org.apache.cassandra.net:type=MessagingService:
 CommandDroppedTasks = { 
  107.aaa.bbb.ccc = 0;
  166.ddd.eee.fff = 124320;
  10.ggg.hhh.iii = 0;
  107.jjj.kkk.lll = 0;
  166.mmm.nnn.ooo = 1336699;
  166.ppp.qqq.rrr = 1329171;
  10.sss.ttt.uuu = 0;
  107.vvv.www.xxx = 0;
 };
 {noformat}
 The problem with this output is that it has 8 records. The node's neighbours (the 
 107 and 10 nodes) are mentioned twice in the output, once with their public 
 IPs and once with their private IPs. The nodes in the remote DC (the 166 ones) 
 are reported only once. I am pretty sure this is a bug - the node should be 
 reported with only one of its addresses in all outputs from Cassandra, and it 
 should be consistent.





[2/3] git commit: Fix setting last compacted key in the wrong level for LCS patch by Jiri Horky; reviewed by jbellis for CASSANDRA-6284

2013-11-22 Thread jbellis
Fix setting last compacted key in the wrong level for LCS
patch by Jiri Horky; reviewed by jbellis for CASSANDRA-6284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02a93025
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02a93025
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02a93025

Branch: refs/heads/trunk
Commit: 02a93025e1db826216e4c24fbe6b5949405e4826
Parents: 4fd322f
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Nov 22 17:06:08 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Nov 22 17:06:13 2013 -0600

--
 CHANGES.txt  |  4 
 .../apache/cassandra/db/compaction/LeveledManifest.java  | 11 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/02a93025/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8163c94..e85ba23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.4
+ * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
+
+
 2.0.3
  * Fix FD leak on slice read path (CASSANDRA-6275)
  * Cancel read meter task when closing SSTR (CASSANDRA-6358)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/02a93025/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 7348c29..5690bd8 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -137,8 +137,13 @@ public class LeveledManifest
 
 // the level for the added sstables is the max of the removed ones,
 // plus one if the removed were all on the same level
+int minLevel = Integer.MAX_VALUE;
+
 for (SSTableReader sstable : removed)
-remove(sstable);
+{
+int thisLevel = remove(sstable);
+minLevel = Math.min(minLevel, thisLevel);
+}
 
 // it's valid to do a remove w/o an add (e.g. on truncate)
 if (added.isEmpty())
@@ -147,12 +152,8 @@ public class LeveledManifest
 if (logger.isDebugEnabled())
  logger.debug("Adding [{}]", toString(added));
 
-int minLevel = Integer.MAX_VALUE;
 for (SSTableReader ssTableReader : added)
-{
-minLevel = Math.min(minLevel, ssTableReader.getSSTableLevel());
 add(ssTableReader);
-}
 lastCompactedKeys[minLevel] = SSTable.sstableOrdering.max(added).last;
 }
 



[1/3] git commit: Fix setting last compacted key in the wrong level for LCS patch by Jiri Horky; reviewed by jbellis for CASSANDRA-6284

2013-11-22 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 4fd322f53 - 02a93025e
  refs/heads/trunk 87b39c8af - a10150542


Fix setting last compacted key in the wrong level for LCS
patch by Jiri Horky; reviewed by jbellis for CASSANDRA-6284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02a93025
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02a93025
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02a93025

Branch: refs/heads/cassandra-2.0
Commit: 02a93025e1db826216e4c24fbe6b5949405e4826
Parents: 4fd322f
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Nov 22 17:06:08 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Nov 22 17:06:13 2013 -0600

--
 CHANGES.txt  |  4 
 .../apache/cassandra/db/compaction/LeveledManifest.java  | 11 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/02a93025/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8163c94..e85ba23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.4
+ * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
+
+
 2.0.3
  * Fix FD leak on slice read path (CASSANDRA-6275)
  * Cancel read meter task when closing SSTR (CASSANDRA-6358)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/02a93025/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 7348c29..5690bd8 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -137,8 +137,13 @@ public class LeveledManifest
 
 // the level for the added sstables is the max of the removed ones,
 // plus one if the removed were all on the same level
+int minLevel = Integer.MAX_VALUE;
+
 for (SSTableReader sstable : removed)
-remove(sstable);
+{
+int thisLevel = remove(sstable);
+minLevel = Math.min(minLevel, thisLevel);
+}
 
 // it's valid to do a remove w/o an add (e.g. on truncate)
 if (added.isEmpty())
@@ -147,12 +152,8 @@ public class LeveledManifest
 if (logger.isDebugEnabled())
  logger.debug("Adding [{}]", toString(added));
 
-int minLevel = Integer.MAX_VALUE;
 for (SSTableReader ssTableReader : added)
-{
-minLevel = Math.min(minLevel, ssTableReader.getSSTableLevel());
 add(ssTableReader);
-}
 lastCompactedKeys[minLevel] = SSTable.sstableOrdering.max(added).last;
 }
 



[3/3] git commit: merge from 2.0

2013-11-22 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a1015054
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a1015054
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a1015054

Branch: refs/heads/trunk
Commit: a10150542c662a4cc69ce1b88f48636d1e6884f7
Parents: 87b39c8 02a9302
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Nov 22 17:07:37 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Nov 22 17:07:37 2013 -0600

--
 CHANGES.txt  |  4 
 .../apache/cassandra/db/compaction/LeveledManifest.java  | 11 ++-
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a1015054/CHANGES.txt
--
diff --cc CHANGES.txt
index f54afe1,e85ba23..3fdb8e7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,7 +1,24 @@@
 +2.1
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 +
 +
+ 2.0.4
+  * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
+ 
+ 
  2.0.3
   * Fix FD leak on slice read path (CASSANDRA-6275)
   * Cancel read meter task when closing SSTR (CASSANDRA-6358)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a1015054/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 2b79493,5690bd8..76f51d1
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@@ -145,13 -152,9 +150,9 @@@ public class LeveledManifes
  if (logger.isDebugEnabled())
   logger.debug("Adding [{}]", toString(added));
  
- int minLevel = Integer.MAX_VALUE;
  for (SSTableReader ssTableReader : added)
- {
- minLevel = Math.min(minLevel, ssTableReader.getSSTableLevel());
  add(ssTableReader);
- }
 -lastCompactedKeys[minLevel] = SSTable.sstableOrdering.max(added).last;
 +lastCompactedKeys[minLevel] = SSTableReader.sstableOrdering.max(added).last;
  }
  
  public synchronized void repairOverlappingSSTables(int level)



[jira] [Commented] (CASSANDRA-4375) FD incorrectly using RPC timeout to ignore gossip heartbeats

2013-11-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830398#comment-13830398
 ] 

Jonathan Ellis commented on CASSANDRA-4375:
---

I look forward to your patch. :)



[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering

2013-11-22 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830401#comment-13830401
 ] 

Alex Liu commented on CASSANDRA-6348:
-

rowsPerQuery is only used as page size for Index CF during 2i search.

maxColumns is the value of the LIMIT clause. If meanColumns is a large number, then 
filter.maxColumns()/meanColumns is less than 1 and rowsPerQuery floors at 2. The resulting 
paging size for the index CF is 2, which is too small: we end up with too many 
random seeks between the index CF and the base CF, and that is why 2i index search 
is sometimes so slow. We need to avoid making the index CF page size too small. 
The goal is to set the page size to a number large enough to reduce random seeks 
between the index CF and the base CF, but not so large that we risk OOM.

If there is data filtering involved and many base CF columns don't match the 
filter, a small page size makes the issue even worse, because we need to page 
through more pages of the index CF.

{code}
public int maxRows()
{
return countCQL3Rows ? Integer.MAX_VALUE : maxResults;
}

public int maxColumns()
{
return countCQL3Rows ? maxResults : Integer.MAX_VALUE;
}
{code}

for a non-CQL query,
{code}
rowsPerQuery = Math.max(Math.min(filter.maxResults, Integer.MAX_VALUE / meanColumns), 2);
most likely becomes rowsPerQuery = Math.max(filter.maxResults, 2);
most likely becomes rowsPerQuery = filter.maxResults,
which is the same number of rows to fetch
{code}

for a CQL query,
{code}
rowsPerQuery = Math.max(Math.min(Integer.MAX_VALUE, filter.maxResults / meanColumns), 2);
most likely becomes rowsPerQuery = Math.max(filter.maxResults / meanColumns, 2);
most likely becomes rowsPerQuery = filter.maxResults / meanColumns
if meanColumns is too big, this can be a very small number, possibly less than 1.
if there is no LIMIT clause in the CQL query, it becomes Integer.MAX_VALUE / meanColumns, which is a big number.
{code}

So the question is how to calculate the page size for the index CF so that we don't 
have too many random seeks between the index CF and the base CF, while also avoiding 
fetching too many index columns (to avoid OOM).
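The arithmetic above can be checked with a small stand-alone sketch (the class and method names are invented; this only reproduces the formulas quoted in this comment, not the actual Cassandra code path):

```java
// Illustrative sketch of the rowsPerQuery arithmetic quoted above.
public class PagingMath
{
    // non-CQL path: maxRows() == maxResults, maxColumns() == Integer.MAX_VALUE
    public static int rowsPerQueryNonCql(int maxResults, int meanColumns)
    {
        return Math.max(Math.min(maxResults, Integer.MAX_VALUE / meanColumns), 2);
    }

    // CQL path: maxColumns() == maxResults, maxRows() == Integer.MAX_VALUE
    public static int rowsPerQueryCql(int maxResults, int meanColumns)
    {
        return Math.max(Math.min(Integer.MAX_VALUE, maxResults / meanColumns), 2);
    }

    public static void main(String[] args)
    {
        // LIMIT 100 over wide rows (meanColumns = 5000): integer division
        // yields 0, so the CQL path floors at a page size of 2.
        System.out.println(rowsPerQueryCql(100, 5000));    // prints 2
        System.out.println(rowsPerQueryNonCql(100, 5000)); // prints 100
    }
}
```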



 TimeoutException throws if Cql query allows data filtering and index is too 
 big and it can't find the data in base CF after filtering 
 --

 Key: CASSANDRA-6348
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6348
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 6348.txt


 If the index row is too big, and filtering can't find a matching CQL row in the base 
 CF, it keeps scanning the index row and retrieving from the base CF until the index row 
 is scanned completely, which may take too long, and the thrift server returns a 
 TimeoutException. This is one of the reasons why we shouldn't index a column 
 if the index is too big.
 Multiple indexes merging can resolve the case where there are only EQUAL 
 clauses. (CASSANDRA-6048 addresses it.)
 If the query has non-EQUAL clauses, we still need to do data filtering, which 
 might lead to a timeout exception.
 We can either disable those kinds of queries or WARN the user that data 
 filtering might lead to a timeout exception or OOM.





[jira] [Commented] (CASSANDRA-6059) Improve memory-use defaults

2013-11-22 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830402#comment-13830402
 ] 

Robert Coli commented on CASSANDRA-6059:


FWIW, I agree about the 5x reduction in default timeout being likely to catch 
many people by surprise, that was my immediate reaction when I read about it in 
NEWS.txt... but I also agree that 10s is arbitrary and probably too large. So 
+1.

 Improve memory-use defaults
 ---

 Key: CASSANDRA-6059
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6059
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0.2

 Attachments: 6059.txt


 Anecdotally, it's still too easy to OOM Cassandra even after moving sstable 
 internals off heap.





[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering

2013-11-22 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830415#comment-13830415
 ] 

Alex Liu commented on CASSANDRA-6348:
-

If there is data filtering, then for a CQL query the total number of index columns 
needed is unknown, and it's not directly related to the LIMIT clause, so we 
can't calculate it from the LIMIT clause. Setting it to a magic number that is 
large enough, but not too large, is a viable solution.



git commit: move setting lastCompactedKey to before the return-if-nothing-added

2013-11-22 Thread jbellis
Updated Branches:
  refs/heads/trunk a10150542 - f3dc188e2


move setting lastCompactedKey to before the return-if-nothing-added


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f3dc188e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f3dc188e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f3dc188e

Branch: refs/heads/trunk
Commit: f3dc188e203b3db980ee81df05390968043cb601
Parents: a101505
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Nov 22 17:33:07 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Nov 22 17:33:07 2013 -0600

--
 src/java/org/apache/cassandra/db/compaction/LeveledManifest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f3dc188e/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 76f51d1..232d1f7 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -142,6 +142,7 @@ public class LeveledManifest
 int thisLevel = remove(sstable);
 minLevel = Math.min(minLevel, thisLevel);
 }
+lastCompactedKeys[minLevel] = SSTableReader.sstableOrdering.max(added).last;
 
 // it's valid to do a remove w/o an add (e.g. on truncate)
 if (added.isEmpty())
@@ -152,7 +153,6 @@ public class LeveledManifest
 
 for (SSTableReader ssTableReader : added)
 add(ssTableReader);
-lastCompactedKeys[minLevel] = SSTableReader.sstableOrdering.max(added).last;
 }
 
 public synchronized void repairOverlappingSSTables(int level)



[jira] [Commented] (CASSANDRA-6309) Pig CqlStorage generates ERROR 1108: Duplicate schema alias

2013-11-22 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830419#comment-13830419
 ] 

Alex Liu commented on CASSANDRA-6309:
-

https://github.com/apache/cassandra/blob/cassandra-2.0/test/unit/org/apache/cassandra/pig/ThriftColumnFamilyDataTypeTest.java#L20

https://github.com/apache/cassandra/blob/cassandra-2.0/test/unit/org/apache/cassandra/pig/CqlTableDataTypeTest.java#L20

The patch removes the duplicate license header.

 Pig CqlStorage generates  ERROR 1108: Duplicate schema alias
 

 Key: CASSANDRA-6309
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6309
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Thunder Stumpges
Assignee: Alex Liu
 Attachments: 6309-2.0.txt, 6309-v2-2.0-branch.txt, 
 LOCAL_ONE-write-for-all-strategies-v2.txt, 
 LOCAL_ONE-write-for-all-strategies.txt


 In Pig after loading a simple CQL3 table from Cassandra 2.0.1, and dumping 
 contents, I receive:
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
  cm = load 'cql://thunder_test/cassandra_messages' USING CqlStorage;
  dump cm
 ERROR org.apache.pig.tools.grunt.Grunt - 
 org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to 
 open iterator for alias cm
 ...
 Caused by: org.apache.pig.impl.plan.PlanValidationException: ERROR 1108: 
 Duplicate schema alias: author in cm
 at 
 org.apache.pig.newplan.logical.visitor.SchemaAliasVisitor.validate(SchemaAliasVisitor.java:75)
 running 'describe cm' gives:
 cm: {message_id: chararray,author: chararray,author: chararray,body: 
 chararray,message_id: chararray}
 The original table schema in Cassandra is:
 CREATE TABLE cassandra_messages (
   message_id text,
   author text,
   body text,
   PRIMARY KEY (message_id, author)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='null' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 it appears that the code in CqlStorage.getColumnMetadata at ~line 478 takes 
 the key columns (in my case, message_id and author) and appends the 
 columns from getColumnMeta (which has all three columns). Thus the key 
 columns are duplicated.
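The duplication described in the last paragraph can be sketched as follows (the class and helper names are hypothetical; this is not the CqlStorage source, just an illustration of concatenating key columns with the full column list, and one order-preserving way to deduplicate):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative sketch: appending all columns after the key columns repeats
// the keys; an order-preserving set drops the repeats.
public class SchemaAliasSketch
{
    public static List<String> buggyAliases(List<String> keys, List<String> allColumns)
    {
        List<String> out = new ArrayList<>(keys);
        out.addAll(allColumns); // key columns appear a second time here
        return out;
    }

    public static List<String> dedupedAliases(List<String> keys, List<String> allColumns)
    {
        LinkedHashSet<String> out = new LinkedHashSet<>(keys);
        out.addAll(allColumns); // set membership skips the repeated keys
        return new ArrayList<>(out);
    }

    public static void main(String[] args)
    {
        List<String> keys = Arrays.asList("message_id", "author");
        List<String> all = Arrays.asList("message_id", "author", "body");
        System.out.println(buggyAliases(keys, all));   // [message_id, author, message_id, author, body]
        System.out.println(dedupedAliases(keys, all)); // [message_id, author, body]
    }
}
```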





[jira] [Commented] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830423#comment-13830423
 ] 

Brandon Williams commented on CASSANDRA-6399:
-

What version of cassandra was this against? We fixed a bunch of init script 
bugs recently.

 debian init script removes PID despite the return status
 

 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday

 If there's an error running service cassandra stop, it can return a 
 non-successful code, but do_stop() removes the PID file anyway.  service 
 cassandra status then shows that Cassandra is stopped, even though it's 
 still running in the process list.





[jira] [Commented] (CASSANDRA-6234) Add metrics for native protocols

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830432#comment-13830432
 ] 

Mikhail Stepura commented on CASSANDRA-6234:


[~slebresne] Will it be enough to expose the same set of metrics as for other 
thread pools? 


 Add metrics for native protocols
 

 Key: CASSANDRA-6234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6234
 Project: Cassandra
  Issue Type: New Feature
Reporter: Adam Hattrell
Assignee: Mikhail Stepura

 It would be very useful to expose metrics related to the native protocol.
 Initially I have a user that would like to be able to monitor the usage of 
 native transport threads.





[jira] [Updated] (CASSANDRA-6234) Add metrics for native protocols

2013-11-22 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6234:
---

Attachment: Oracle_Java_Mission_Control_2013-11-22_15-50-09.png

!Oracle_Java_Mission_Control_2013-11-22_15-50-09.png|thumbnail!

 Add metrics for native protocols
 

 Key: CASSANDRA-6234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6234
 Project: Cassandra
  Issue Type: New Feature
Reporter: Adam Hattrell
Assignee: Mikhail Stepura
 Attachments: Oracle_Java_Mission_Control_2013-11-22_15-50-09.png


 It would be very useful to expose metrics related to the native protocol.
 Initially I have a user that would like to be able to monitor the usage of 
 native transport threads.





[jira] [Comment Edited] (CASSANDRA-6234) Add metrics for native protocols

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830434#comment-13830434
 ] 

Mikhail Stepura edited comment on CASSANDRA-6234 at 11/22/13 11:57 PM:
---

!Oracle_Java_Mission_Control_2013-11-22_15-50-09.png|thumbnail!


was (Author: mishail):
!Oracle_Java_Mission_Control_2013-11-22_15-50-09.png!






[jira] [Comment Edited] (CASSANDRA-6234) Add metrics for native protocols

2013-11-22 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830434#comment-13830434
 ] 

Mikhail Stepura edited comment on CASSANDRA-6234 at 11/22/13 11:57 PM:
---

!Oracle_Java_Mission_Control_2013-11-22_15-50-09.png!


was (Author: mishail):
!Oracle_Java_Mission_Control_2013-11-22_15-50-09.png|thumbnail!






[jira] [Issue Comment Deleted] (CASSANDRA-6234) Add metrics for native protocols

2013-11-22 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6234:
---

Comment: was deleted

(was: !Oracle_Java_Mission_Control_2013-11-22_15-50-09.png|thumbnail!)






[jira] [Commented] (CASSANDRA-6399) debian init script removes PID despite the return status

2013-11-22 Thread Peter Halliday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830435#comment-13830435
 ] 

Peter Halliday commented on CASSANDRA-6399:
---

2.0.1

 debian init script removes PID despite the return status
 

 Key: CASSANDRA-6399
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6399
 Project: Cassandra
  Issue Type: Bug
Reporter: Peter Halliday

 If there's an error in running service cassandra stop it can return a 
 non-successful code, but the do_stop() removes the PID file anyway.  This 
 shows then via service cassandra status, that Cassandra is stopped, even 
 though it's still running in the process list.
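The failure mode described above can be sketched in a few lines. This is a hedged illustration, not the actual Debian init script: `PIDFILE` and `stop_daemon` are stand-ins for the real script's variables and `start-stop-daemon` invocation. The fix the report implies is to remove the PID file only when the stop command actually returned success.

```shell
#!/bin/sh
# Sketch of the guarded do_stop() behavior (stand-in names, not the real script).
PIDFILE=/tmp/cassandra-demo.pid

stop_daemon() {
    # Stand-in for `start-stop-daemon --stop`; returns the status we pass in.
    return "$1"
}

do_stop() {
    stop_daemon "$1"
    status=$?
    # Reported bug: the PID file was removed here unconditionally.
    # Guarded version: remove it only when the stop succeeded.
    if [ "$status" -eq 0 ]; then
        rm -f "$PIDFILE"
    fi
    return "$status"
}

echo 12345 > "$PIDFILE"
do_stop 2 || true       # simulate a failed stop
[ -f "$PIDFILE" ] && echo "kept PID file after failed stop"
do_stop 0               # simulate a successful stop
[ ! -f "$PIDFILE" ] && echo "removed PID file after successful stop"
```

With the guard in place, `service cassandra status` keeps seeing the PID file after a failed stop, so it no longer reports the daemon as stopped while it is still running.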




