[jira] [Commented] (CASSANDRA-8205) ColumnFamilyMetrics#totalDiskSpaceUsed gets wrong value when SSTable is deleted

2014-10-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188081#comment-14188081
 ] 

Marcus Eriksson commented on CASSANDRA-8205:


+1

 ColumnFamilyMetrics#totalDiskSpaceUsed gets wrong value when SSTable is 
 deleted
 ---

 Key: CASSANDRA-8205
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8205
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0.12, 2.1.2

 Attachments: 0001-ColumnFamilyMetrics-size-metrics-test.patch, 
 8205-2.0.txt


 ColumnFamilyMetrics#totalDiskSpaceUsed is decremented when the actual SSTable 
 files are deleted from disk. The amount of the decrement is calculated at the 
 beginning of SSTableReader instantiation (through 
 [SSTableDeletionTask|https://github.com/apache/cassandra/blob/cassandra-2.0.11/src/java/org/apache/cassandra/io/sstable/SSTableDeletingTask.java#L56]).
 But the size can change because the Summary.db file may be re-created after 
 SSTableReader instantiation, and that leads to calculating a wrong value for 
 totalDiskSpaceUsed.
 I attached a unit test file for 2.0, but you can also compare the value after 
 doing TRUNCATE.
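The drift can be illustrated with a toy model (all names below are illustrative, not Cassandra's real classes): if the decrement applied at deletion uses a size captured when the reader was opened, and Summary.db re-creation changed the files in between, the metric is left with a residue.

```java
// Toy model of the accounting drift: the decrement uses a size captured
// when the SSTableReader was opened, not the size on disk at deletion.
// All names here are illustrative, not Cassandra's real classes.
public class StaleSizeDemo {
    // Metric residue left after one sstable is added and then deleted,
    // when Summary.db re-creation changed the file sizes in between.
    static long residue(long sizeAtOpen, long sizeAtDelete) {
        long totalDiskSpaceUsed = 0;
        totalDiskSpaceUsed += sizeAtDelete;  // bytes actually on disk
        totalDiskSpaceUsed -= sizeAtOpen;    // buggy stale decrement
        return totalDiskSpaceUsed;           // should be 0 after deletion
    }

    public static void main(String[] args) {
        // Summary.db re-creation grew the sstable from 100 to 120 bytes.
        System.out.println(residue(100, 120)); // prints 20: metric over-counts
    }
}
```

Measuring the component files at deletion time, as the attached patch does, makes the residue zero by construction.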



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8206) Removing items from a set breaks secondary index

2014-10-29 Thread Tuukka Mustonen (JIRA)
Tuukka Mustonen created CASSANDRA-8206:
--

 Summary: Removing items from a set breaks secondary index
 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
Reporter: Tuukka Mustonen


Removing items from a set breaks index for field {{id}}:

{noformat}
cqlsh:cs> CREATE TABLE buckets (
      ...   tenant int,
      ...   id int,
      ...   items set<text>,
      ...   PRIMARY KEY (tenant, id)
      ... );
cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
cqlsh:cs> INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 'bar'});
cqlsh:cs> SELECT * FROM buckets;

 tenant | id | items
--------+----+----------------
      1 |  1 | {'bar', 'foo'}

(1 rows)

cqlsh:cs> SELECT * FROM buckets WHERE id = 1;

 tenant | id | items
--------+----+----------------
      1 |  1 | {'bar', 'foo'}

(1 rows)

cqlsh:cs> UPDATE buckets SET items=items-{'foo'} WHERE tenant=1 AND id=1;
cqlsh:cs> SELECT * FROM buckets;

 tenant | id | items
--------+----+---------
      1 |  1 | {'bar'}

(1 rows)

cqlsh:cs> SELECT * FROM buckets WHERE id = 1;

(0 rows)
{noformat}

Re-building the index fixes the issue:

{noformat}
cqlsh:cs> DROP INDEX buckets_ids;
cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
cqlsh:cs> SELECT * FROM buckets WHERE id = 1;

 tenant | id | items
--------+----+---------
      1 |  1 | {'bar'}

(1 rows)
{noformat}

Adding items does not cause a similar failure, only deletes do. I also did not 
test whether other collections are affected.
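The symptom can be modeled with a purely illustrative in-memory index (this is not Cassandra's secondary-index code; all names are hypothetical): if the delete path drops the row's index entry even though the indexed column {{id}} is unchanged, lookups through the index come back empty while full scans still see the row.

```java
import java.util.*;

// Purely illustrative model of the symptom: a secondary index entry is
// (wrongly) removed when an element is deleted from the row's collection,
// even though the indexed column value did not change.
public class IndexDesyncDemo {
    final Map<Integer, Set<String>> rows = new HashMap<>();     // tenant -> items
    final Map<Integer, Set<Integer>> idIndex = new HashMap<>(); // id -> tenants

    void insert(int tenant, int id, String... items) {
        rows.put(tenant, new HashSet<>(Arrays.asList(items)));
        idIndex.computeIfAbsent(id, k -> new HashSet<>()).add(tenant);
    }

    // Buggy collection delete: also tears down the row's index entry,
    // although the indexed column (id) is untouched.
    void buggyRemoveItem(int tenant, int id, String item) {
        rows.get(tenant).remove(item);
        idIndex.get(id).remove(tenant);
    }

    public static void main(String[] args) {
        IndexDesyncDemo db = new IndexDesyncDemo();
        db.insert(1, 1, "foo", "bar");
        db.buggyRemoveItem(1, 1, "foo");
        System.out.println(db.rows.get(1));           // row survives: [bar]
        System.out.println(db.idIndex.get(1).size()); // 0: "WHERE id = 1" finds nothing
    }
}
```

Rebuilding the index from the surviving rows, as in the DROP/CREATE INDEX workaround above, restores consistency.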





[jira] [Commented] (CASSANDRA-7491) Incorrect thrift-server dependency in 2.0 poms

2014-10-29 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188155#comment-14188155
 ] 

Cyril Scetbon commented on CASSANDRA-7491:
--

[~brandon.williams] Here is a [patch|http://pastebin.com/akTkS0dx] that shows 
the way I fixed the thrift-server jar version. I think we shouldn't have to 
embed available packages and should use this approach to get/install them when 
building Cassandra's sources.

 Incorrect thrift-server dependency in 2.0 poms
 --

 Key: CASSANDRA-7491
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7491
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Sam Tunnicliffe
 Fix For: 2.0.12


 On the 2.0 branch we recently replaced thrift-server-0.3.3.jar with 
 thrift-server-internal-only-0.3.3.jar (commit says CASSANDRA-6545, but I 
 don't think that's right), but didn't update the generated pom that gets 
 deployed to mvn central. The upshot is that the poms on maven central for 
 2.0.8 & 2.0.9 specify their dependencies incorrectly. So any project pulling 
 in those versions of cassandra-all as a dependency will incorrectly include 
 the old jar.
 However, on 2.1 & trunk the internal-only jar was subsequently replaced by 
 thrift-server-0.3.5.jar (CASSANDRA-6285), which *is* available in mvn 
 central. build.xml has also been updated correctly on these branches.
 [~xedin], is there any reason for not switching 2.0 to 
 thrift-server-0.3.5.jar ?





[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2014-10-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188172#comment-14188172
 ] 

Sylvain Lebresne commented on CASSANDRA-8099:
-

bq. How is this looking two weeks later?

Prettier but not really near completion I'm afraid.

bq. Any potential blockers come up?

Not really blockers, but I'll admit that this is bigger/longer than I thought.

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the more high-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex lengths {{AbstractQueryPager}} 
 has to go to in order to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should be 
 an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal. Its primary goal is to 
 simplify the storage engine and add abstractions that are better suited to 
 further optimizations.
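The "more iterative" idea can be sketched in a few lines (names are illustrative, not Cassandra's actual classes): wrap the row iterator and transform each row as it flows past, so nothing like a full ColumnFamily object is ever materialized.

```java
import java.util.*;
import java.util.function.Function;

// Sketch of an iterator-based read path: rows are transformed lazily,
// one at a time, instead of being collected into memory first.
// Names are illustrative, not Cassandra's actual classes.
public class LazyRows {
    // Wraps a row iterator, applying a per-row transformation on the fly.
    static <A, B> Iterator<B> map(Iterator<A> in, Function<A, B> f) {
        return new Iterator<B>() {
            public boolean hasNext() { return in.hasNext(); }
            public B next() { return f.apply(in.next()); }
        };
    }

    public static void main(String[] args) {
        Iterator<String> rows = Arrays.asList("r1", "r2", "r3").iterator();
        // Each row goes straight from the source to the consumer; the
        // full result set is never held in memory at once.
        Iterator<String> out = map(rows, r -> r.toUpperCase());
        while (out.hasNext())
            System.out.println(out.next());
    }
}
```

Chaining such wrappers from the disk reader to the network writer is what would let data stream through with minimal garbage.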





[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-29 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/UpdateParameters.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d285ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d285ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d285ead

Branch: refs/heads/cassandra-2.1
Commit: 1d285eadadc14251da271cc03b4cf8a1f8f33516
Parents: c937657 748b01d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Oct 29 10:40:51 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Oct 29 10:40:51 2014 +0100

--
 CHANGES.txt |  1 +
 doc/native_protocol_v3.spec |  8 +++--
 .../org/apache/cassandra/cql3/QueryOptions.java | 10 +++---
 .../apache/cassandra/cql3/UpdateParameters.java |  6 
 .../cassandra/cql3/statements/Selection.java|  4 +--
 .../apache/cassandra/cql3/TimestampTest.java| 36 
 6 files changed, 55 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d285ead/CHANGES.txt
--
diff --cc CHANGES.txt
index cdfd248,d2cb003..7afed1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,12 -1,5 +1,13 @@@
 -2.0.12:
 +2.1.2
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
+  * Handle negative timestamp in writetime method (CASSANDRA-8139)
   * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
   * Throw ConfigurationException when hsha is used with the default
 rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d285ead/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 13b6ac6,000..89a99ad
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,914 -1,0 +1,916 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Collection types
 +  7. User Defined and tuple types
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +  0         8        16        24        32         40
 +  +---------+---------+---------+---------+---------+
 +  | version |  flags  |      stream       | opcode  |
 +  +---------+---------+---------+---------+---------+
 +  |                length                 |
 +  +---------+---------+---------+---------+
 +  |                                       |
 +  .            ...  body ...              .
 +  .                                       .
 +  .                                       .
 +  +----------------------------------------
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcodes is defined in Section 2.3 and the
 +  details of each corresponding message are described in Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. Requests
 +  are those frames sent by the clients to the server; responses are the ones sent
 +  by the server. Note however that the protocol supports server pushes (events),
 +  so responses do not necessarily come 
[1/2] git commit: Handle negative timestamps in writetime method

2014-10-29 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c93765772 -> 1d285eada


Handle negative timestamps in writetime method

patch by slebresne; reviewed by thobbs for CASSANDRA-8139


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/748b01d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/748b01d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/748b01d1

Branch: refs/heads/cassandra-2.1
Commit: 748b01d1d6c6591d0a241ced3f0f729a84ee3ef6
Parents: 1b332bc
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Oct 29 10:34:26 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Oct 29 10:34:26 2014 +0100

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/UpdateParameters.java | 6 ++
 src/java/org/apache/cassandra/cql3/statements/Selection.java | 4 ++--
 3 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 11f6517..d2cb003 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Handle negative timestamp in writetime method (CASSANDRA-8139)
  * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
  * Throw ConfigurationException when hsha is used with the default
rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UpdateParameters.java 
b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
index c543d6c..6c71911 100644
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@ -43,6 +43,7 @@ public class UpdateParameters
 private final Map<ByteBuffer, ColumnGroupMap> prefetchedLists;
 
 public UpdateParameters(CFMetaData metadata, List<ByteBuffer> variables, long timestamp, int ttl, Map<ByteBuffer, ColumnGroupMap> prefetchedLists)
+throws InvalidRequestException
 {
 this.metadata = metadata;
 this.variables = variables;
@@ -50,6 +51,11 @@ public class UpdateParameters
 this.ttl = ttl;
 this.localDeletionTime = (int)(System.currentTimeMillis() / 1000);
 this.prefetchedLists = prefetchedLists;
+
+// We use MIN_VALUE internally to mean the absence of timestamp (in Selection, in sstable stats, ...), so exclude
+// it to avoid potential confusion.
+if (timestamp == Long.MIN_VALUE)
+    throw new InvalidRequestException(String.format("Out of bound timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
 }
 
 public Column makeColumn(ByteBuffer name, ByteBuffer value) throws InvalidRequestException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/Selection.java 
b/src/java/org/apache/cassandra/cql3/statements/Selection.java
index 37ab384..18ca177 100644
--- a/src/java/org/apache/cassandra/cql3/statements/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java
@@ -277,7 +277,7 @@ public abstract class Selection
 current.add(isDead(c) ? null : value(c));
 if (timestamps != null)
 {
-timestamps[current.size() - 1] = isDead(c) ? -1 : c.timestamp();
+timestamps[current.size() - 1] = isDead(c) ? Long.MIN_VALUE : c.timestamp();
 }
 if (ttls != null)
 {
@@ -437,7 +437,7 @@ public abstract class Selection
 if (isWritetime)
 {
 long ts = rs.timestamps[idx];
-return ts >= 0 ? ByteBufferUtil.bytes(ts) : null;
+return ts != Long.MIN_VALUE ? ByteBufferUtil.bytes(ts) : null;
 }
 
 int ttl = rs.ttls[idx];



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-29 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cql3/UpdateParameters.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d285ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d285ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d285ead

Branch: refs/heads/trunk
Commit: 1d285eadadc14251da271cc03b4cf8a1f8f33516
Parents: c937657 748b01d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Oct 29 10:40:51 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Oct 29 10:40:51 2014 +0100

--
 CHANGES.txt |  1 +
 doc/native_protocol_v3.spec |  8 +++--
 .../org/apache/cassandra/cql3/QueryOptions.java | 10 +++---
 .../apache/cassandra/cql3/UpdateParameters.java |  6 
 .../cassandra/cql3/statements/Selection.java|  4 +--
 .../apache/cassandra/cql3/TimestampTest.java| 36 
 6 files changed, 55 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d285ead/CHANGES.txt
--
diff --cc CHANGES.txt
index cdfd248,d2cb003..7afed1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,12 -1,5 +1,13 @@@
 -2.0.12:
 +2.1.2
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
+  * Handle negative timestamp in writetime method (CASSANDRA-8139)
   * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
   * Throw ConfigurationException when hsha is used with the default
 rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d285ead/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 13b6ac6,000..89a99ad
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,914 -1,0 +1,916 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Collection types
 +  7. User Defined and tuple types
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +  0         8        16        24        32         40
 +  +---------+---------+---------+---------+---------+
 +  | version |  flags  |      stream       | opcode  |
 +  +---------+---------+---------+---------+---------+
 +  |                length                 |
 +  +---------+---------+---------+---------+
 +  |                                       |
 +  .            ...  body ...              .
 +  .                                       .
 +  .                                       .
 +  +----------------------------------------
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcodes is defined in Section 2.3 and the
 +  details of each corresponding message are described in Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. Requests
 +  are those frames sent by the clients to the server; responses are the ones sent
 +  by the server. Note however that the protocol supports server pushes (events),
 +  so responses do not necessarily come right 

[1/3] git commit: Handle negative timestamps in writetime method

2014-10-29 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 708c6bafa -> 45084f182


Handle negative timestamps in writetime method

patch by slebresne; reviewed by thobbs for CASSANDRA-8139


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/748b01d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/748b01d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/748b01d1

Branch: refs/heads/trunk
Commit: 748b01d1d6c6591d0a241ced3f0f729a84ee3ef6
Parents: 1b332bc
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Oct 29 10:34:26 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Oct 29 10:34:26 2014 +0100

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/cql3/UpdateParameters.java | 6 ++
 src/java/org/apache/cassandra/cql3/statements/Selection.java | 4 ++--
 3 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 11f6517..d2cb003 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Handle negative timestamp in writetime method (CASSANDRA-8139)
  * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
  * Throw ConfigurationException when hsha is used with the default
rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UpdateParameters.java 
b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
index c543d6c..6c71911 100644
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@ -43,6 +43,7 @@ public class UpdateParameters
 private final Map<ByteBuffer, ColumnGroupMap> prefetchedLists;
 
 public UpdateParameters(CFMetaData metadata, List<ByteBuffer> variables, long timestamp, int ttl, Map<ByteBuffer, ColumnGroupMap> prefetchedLists)
+throws InvalidRequestException
 {
 this.metadata = metadata;
 this.variables = variables;
@@ -50,6 +51,11 @@ public class UpdateParameters
 this.ttl = ttl;
 this.localDeletionTime = (int)(System.currentTimeMillis() / 1000);
 this.prefetchedLists = prefetchedLists;
+
+// We use MIN_VALUE internally to mean the absence of timestamp (in Selection, in sstable stats, ...), so exclude
+// it to avoid potential confusion.
+if (timestamp == Long.MIN_VALUE)
+    throw new InvalidRequestException(String.format("Out of bound timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
 }
 
 public Column makeColumn(ByteBuffer name, ByteBuffer value) throws InvalidRequestException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/748b01d1/src/java/org/apache/cassandra/cql3/statements/Selection.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/Selection.java 
b/src/java/org/apache/cassandra/cql3/statements/Selection.java
index 37ab384..18ca177 100644
--- a/src/java/org/apache/cassandra/cql3/statements/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java
@@ -277,7 +277,7 @@ public abstract class Selection
 current.add(isDead(c) ? null : value(c));
 if (timestamps != null)
 {
-timestamps[current.size() - 1] = isDead(c) ? -1 : c.timestamp();
+timestamps[current.size() - 1] = isDead(c) ? Long.MIN_VALUE : c.timestamp();
 }
 if (ttls != null)
 {
@@ -437,7 +437,7 @@ public abstract class Selection
 if (isWritetime)
 {
 long ts = rs.timestamps[idx];
-return ts >= 0 ? ByteBufferUtil.bytes(ts) : null;
+return ts != Long.MIN_VALUE ? ByteBufferUtil.bytes(ts) : null;
 }
 
 int ttl = rs.ttls[idx];



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-29 Thread slebresne
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/cql3/statements/Selection.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/45084f18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/45084f18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/45084f18

Branch: refs/heads/trunk
Commit: 45084f182a46234243b94059fd1b6b53e927ead8
Parents: 708c6ba 1d285ea
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Oct 29 10:49:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Oct 29 10:49:20 2014 +0100

--
 CHANGES.txt |  1 +
 doc/native_protocol_v3.spec |  8 +++--
 .../org/apache/cassandra/cql3/QueryOptions.java | 10 +++---
 .../apache/cassandra/cql3/UpdateParameters.java |  6 
 .../cassandra/cql3/selection/Selection.java |  2 +-
 .../cql3/selection/WritetimeOrTTLSelector.java  |  4 +--
 .../apache/cassandra/cql3/TimestampTest.java| 36 
 7 files changed, 56 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/45084f18/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/45084f18/src/java/org/apache/cassandra/cql3/selection/Selection.java
--
diff --cc src/java/org/apache/cassandra/cql3/selection/Selection.java
index 67cce72,000..cd5e2a8
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/cql3/selection/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java
@@@ -1,390 -1,0 +1,390 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.cql3.selection;
 +
 +import java.nio.ByteBuffer;
 +import java.util.ArrayList;
 +import java.util.Collection;
 +import java.util.Iterator;
 +import java.util.List;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.cql3.ColumnIdentifier;
 +import org.apache.cassandra.cql3.ColumnSpecification;
 +import org.apache.cassandra.cql3.ResultSet;
 +import org.apache.cassandra.db.Cell;
 +import org.apache.cassandra.db.CounterCell;
 +import org.apache.cassandra.db.ExpiringCell;
 +import org.apache.cassandra.db.context.CounterContext;
 +import org.apache.cassandra.exceptions.InvalidRequestException;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +import com.google.common.collect.Iterators;
 +
 +public abstract class Selection
 +{
 +private final Collection<ColumnDefinition> columns;
 +private final ResultSet.Metadata metadata;
 +private final boolean collectTimestamps;
 +private final boolean collectTTLs;
 +
 +protected Selection(Collection<ColumnDefinition> columns, List<ColumnSpecification> metadata, boolean collectTimestamps, boolean collectTTLs)
 +{
 +this.columns = columns;
 +this.metadata = new ResultSet.Metadata(metadata);
 +this.collectTimestamps = collectTimestamps;
 +this.collectTTLs = collectTTLs;
 +}
 +
 +// Overriden by SimpleSelection when appropriate.
 +public boolean isWildcard()
 +{
 +return false;
 +}
 +
 +public ResultSet.Metadata getResultMetadata()
 +{
 +return metadata;
 +}
 +
 +public static Selection wildcard(CFMetaData cfm)
 +{
 +List<ColumnDefinition> all = new ArrayList<ColumnDefinition>(cfm.allColumns().size());
 +Iterators.addAll(all, cfm.allColumnsInSelectOrder());
 +return new SimpleSelection(all, true);
 +}
 +
 +public static Selection forColumns(Collection<ColumnDefinition> columns)
 +{
 +return new SimpleSelection(columns, false);
 +}
 +
 +public int addColumnForOrdering(ColumnDefinition c)
 +{
 +columns.add(c);
 +metadata.addNonSerializedColumn(c);
 +return columns.size() 

[jira] [Updated] (CASSANDRA-8139) The WRITETIME function returns null for negative timestamp values

2014-10-29 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8139:

Fix Version/s: 2.1.2

 The WRITETIME function returns null for negative timestamp values
 -

 Key: CASSANDRA-8139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8139
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Bremner
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.12, 2.1.2

 Attachments: 8139-2.1.txt, 8139.txt


 Insert a column with a negative timestamp value:
 {code}
 INSERT INTO my_table (col1, col2, col3)
 VALUES ('val1', 'val2', 'val3') 
 USING TIMESTAMP -1413614886750020;
 {code}
 Then attempt to read the *writetime*:
 {code}
 SELECT WRITETIME(col3) FROM my_table WHERE col1 = 'val1'
 {code}
 The result is *null*.
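The root cause is a sentinel collision: the read path used {{-1}} to mean "no timestamp" and returned null for any negative value. A minimal standalone sketch of the before/after predicate (simplified; method names here are illustrative, the real change is in Selection.java):

```java
// Sketch of the sentinel fix for CASSANDRA-8139: use a dedicated
// out-of-band value (Long.MIN_VALUE, which updates reject) instead of
// treating every negative timestamp as "absent".
public class WritetimeSentinel {
    static final long NO_TIMESTAMP = Long.MIN_VALUE;

    // Before the fix: any negative timestamp was hidden.
    static boolean visibleBefore(long ts) { return ts >= 0; }

    // After the fix: only the dedicated sentinel means "no timestamp",
    // so legitimate negative timestamps survive.
    static boolean visibleAfter(long ts) { return ts != NO_TIMESTAMP; }

    public static void main(String[] args) {
        long ts = -1413614886750020L; // timestamp from the bug report
        System.out.println(visibleBefore(ts)); // false -> WRITETIME was null
        System.out.println(visibleAfter(ts));  // true  -> WRITETIME now returned
    }
}
```

Reserving Long.MIN_VALUE works because the write path rejects it as a user-supplied timestamp, so it can never collide with real data.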





[jira] [Commented] (CASSANDRA-8163) Complete restriction of a user to given keyspace

2014-10-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188208#comment-14188208
 ] 

Sylvain Lebresne commented on CASSANDRA-8163:
-

bq. is this something doable in 2.0.X?

Provided we keep the restriction to the keyspace level initially (that is, 
within a keyspace, a user can't see only some tables and not others), enforcing 
the restriction should be reasonably trivial technically (since all schema 
tables use the keyspace name as partition key). That said, we'd need a new type 
of permission, and there are potentially backward compatibility concerns which 
might be the real blocker for a pre-3.0 commit. And for 2.0, we're in the 
fix-bugs-only phase at this point, so at best this could be 2.1 (provided again 
there are no strong backward compatibility issues, which I haven't checked 
because I'm not 100% up to date on authorization stuff).

 Complete restriction of a user to given keyspace
 

 Key: CASSANDRA-8163
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8163
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Priority: Minor

 We have a cluster like this:
 project1_keyspace
   table101
   table102
 project2_keyspace
   table201
   table202
 We have set up the following users and grants:
 project1_user has all access to project1_keyspace
 project2_user has all access to project2_keyspace
 However project1_user can still do a 'describe schema' and get the schema for 
 project2_keyspace as well. We do not want project1_user to have any knowledge 
 of project2 in any way (cqlsh/java-driver etc.).





[jira] [Commented] (CASSANDRA-8163) Complete restriction of a user to given keyspace

2014-10-29 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188218#comment-14188218
 ] 

Aleksey Yeschenko commented on CASSANDRA-8163:
--

This will be 3.0-based, if at all.

 Complete restriction of a user to given keyspace
 

 Key: CASSANDRA-8163
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8163
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Priority: Minor

 We have a cluster like this:
 project1_keyspace
   table101
   table102
 project2_keyspace
   table201
   table202
 We have set up the following users and grants:
 project1_user has all access to project1_keyspace
 project2_user has all access to project2_keyspace
 However project1_user can still do a 'describe schema' and get the schema for 
 project2_keyspace as well. We do not want project1_user to have any knowledge 
 of project2 in any way (cqlsh/java-driver etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8176) Intermittent NPE from RecoveryManagerTest RecoverPIT unit test

2014-10-29 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188237#comment-14188237
 ] 

Sam Tunnicliffe commented on CASSANDRA-8176:


I believe this is a problem which at the moment can only affect the tests. 
There's a pre-existing race which the changes introduced by CASSANDRA-6904 make 
more obvious. 

The race is in each of the RecoveryManagerTest tests:

{noformat}
1 - [main] calls CommitLog.instance.resetUnsafe() -- clears CL active/avail 
segments
2 - [main] calls CommitLog.instance.recover() -- attempts to replay all 
unmanaged files in the CL dir, then recycle them.
3 - [COMMIT-LOG-ALLOCATOR] detects that there are no available segments, so it:
3a - creates a new segment
3b - adds it to the list of available segments
{noformat}

The issue is when steps 2 and 3a/3b are interspersed. In that scenario, the new 
segment created in 3a is replayed by the {{\[main\]}} thread and subsequently 
recycled whereby its file is renamed. However, as well as being recycled it is 
added to the list of available segments (3b). So when the next test in the 
suite runs, this segment is used when performing the initial writes. The 
recycling has renamed the file but the segment id has not changed, so the 
checksums written are calculated using the id. Now when this second test resets 
then tries to recover, it discards the segment due to the checksum mismatches. 
Ultimately, that's what leads to the NPE & the test failure, as the mutations 
from that dropped segment are not replayed. This race is highlighted by the 
changes in the commit [~mshuler] mentioned because now {{CL.recover()}} scans 
the commit log dir twice and even though in this test the archive/restore calls 
are no-ops, this widens the window for the race to occur.

So it's only the (ab)use of {{resetUnsafe()}} coupled with the way that 
{{recover()}} expects the segments it replays to be unmanaged that causes this 
problem. The simplest fix right now is to add another {{resetUnsafe()}} to the 
tests so that each starts with a clean CL, avoiding the possibility of reusing 
a segment that has already been recycled.

This is something to keep in mind with regard to CASSANDRA-7232 if we start 
replaying commit logs outside of a restart.
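The checksum mismatch at the heart of the race can be sketched as follows. This is an illustrative sketch only, not Cassandra's actual CommitLog code: it just shows how a CRC that mixes in the segment id fails to verify once replay derives a different id (here, a hypothetical id taken from the recycled segment's new file name) than the one used at write time.

```java
import java.util.zip.CRC32;

// Illustrative sketch only (not Cassandra's actual CommitLog code): a segment
// checksum that mixes the segment id into the CRC, as described above. Entries
// written under one id fail verification when replay derives a different id
// from the recycled segment's new file name.
public class SegmentChecksum {
    // CRC over the 8 bytes of the segment id followed by the payload.
    static long checksum(long segmentId, byte[] payload) {
        CRC32 crc = new CRC32();
        for (int shift = 56; shift >= 0; shift -= 8)
            crc.update((int) ((segmentId >>> shift) & 0xFF));
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] mutation = "mutation bytes".getBytes();
        long idAtWriteTime = 1414082433807L;   // id baked into the stored checksums
        long idFromNewName = 1414082433808L;   // hypothetical id parsed from the renamed file

        long stored = checksum(idAtWriteTime, mutation);
        // Replay with the original id verifies; with the new id it does not,
        // so the whole segment is discarded and its mutations are lost.
        System.out.println("verifies with original id: " + (stored == checksum(idAtWriteTime, mutation)));
        System.out.println("verifies after rename:     " + (stored == checksum(idFromNewName, mutation)));
    }
}
```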

 Intermittent NPE from RecoveryManagerTest RecoverPIT unit test
 --

 Key: CASSANDRA-8176
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8176
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: Sam Tunnicliffe
 Fix For: 2.1.2

 Attachments: RecoveryManagerTest_failure_system.log.gz


 {noformat}
 [junit] Testsuite: org.apache.cassandra.db.RecoveryManagerTest
 [junit] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
 7.654 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] WARN  16:40:38 No host ID found, created 
 2cbd54a8-79a5-40e0-a8e6-c8bf2c575877 (Note: This should happen exactly once 
 per node).
 [junit] WARN  16:40:38 No host ID found, created 
 2cbd54a8-79a5-40e0-a8e6-c8bf2c575877 (Note: This should happen exactly once 
 per node).
 [junit] WARN  16:40:38 Encountered bad header at position 16 of commit 
 log 
 /home/mshuler/git/cassandra/build/test/cassandra/commitlog:0/CommitLog-4-1414082433807.log,
  with invalid CRC. The end of segment marker should be zero.
 [junit] WARN  16:40:38 Encountered bad header at position 16 of commit 
 log 
 /home/mshuler/git/cassandra/build/test/cassandra/commitlog:0/CommitLog-4-1414082433807.log,
  with invalid CRC. The end of segment marker should be zero.
 [junit] -  ---
 [junit] Testcase: 
 testRecoverPIT(org.apache.cassandra.db.RecoveryManagerTest):  Caused an 
 ERROR
 [junit] null
 [junit] java.lang.NullPointerException
 [junit] at 
 org.apache.cassandra.db.RecoveryManagerTest.testRecoverPIT(RecoveryManagerTest.java:129)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.db.RecoveryManagerTest FAILED
 {noformat}
 Test fails roughly 20-25% of CI runs. Several 10x and 25x bisections for 2.1 
 {{git bisect start cassandra-2.1 f03e505}} resulted in {noformat}first bad 
 commit: [1394b128c65ef1ad59f765e9c9c5058cac04ca69]{noformat} which is 
 CASSANDRA-6904.
 That patch went to 2.0 and I still need to dig there to see if we're getting 
 the same error, but I've attached the unit test failure system.log from 2.1.





[jira] [Created] (CASSANDRA-8207) I have a 5 node cassandra cluster and i commissioned 1 new node to the cluster. when i added 1 node. it received streams from 3 nodes out of which 2 were completed su

2014-10-29 Thread venkat (JIRA)
venkat created CASSANDRA-8207:
-

 Summary: I have a 5 node cassandra cluster and i commissioned 1 
new node to the cluster. when i added 1 node. it received streams from 3 nodes 
out of which 2 were completed successfully and one stream got failed. how can i 
resume the stream which has failed?
 Key: CASSANDRA-8207
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8207
 Project: Cassandra
  Issue Type: Bug
Reporter: venkat








[jira] [Commented] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188353#comment-14188353
 ] 

Marcus Eriksson commented on CASSANDRA-8190:


[~ngrigoriev] you seem to be hitting a bunch of weird stuff lately, could you 
post your config and more details about your setup? and perhaps a full log 
(start node - this happens)? 

 Compactions stop completely because of RuntimeException in CompactionExecutor
 -

 Key: CASSANDRA-8190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8190
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2 (Cassandra 2.0.10)
Reporter: Nikolai Grigoriev
Assignee: Marcus Eriksson
 Attachments: jstack.txt.gz, system.log.gz


 I have a cluster that is recovering from being overloaded with writes.  I am 
 using the workaround from CASSANDRA-6621 to prevent the STCS fallback (which 
 is killing the cluster - see CASSANDRA-7949). 
 I have observed that after one or more exceptions like this
 {code}
 ERROR [CompactionExecutor:4087] 2014-10-26 22:50:05,016 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:4087,1,main]
 java.lang.RuntimeException: Last written key DecoratedKey(425124616570337476, 
 0010033523da10033523da10
 400100) >= current key DecoratedKey(-8778432288598355336, 
 0010040c7a8f10040c7a8f10
 400100) writing into 
 /cassandra-data/disk2/myks/mytable/myks-mytable-tmp-jb-130379-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 the node completely stops the compactions and I end up in the state like this:
 {code}
 # nodetool compactionstats
 pending tasks: 1288
    compaction type   keyspace   table   completed   total   unit   progress
 Active compaction remaining time :n/a
 {code}
 The node recovers if restarted and starts compactions - until getting more 
 exceptions like this.





[jira] [Comment Edited] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188353#comment-14188353
 ] 

Marcus Eriksson edited comment on CASSANDRA-8190 at 10/29/14 1:51 PM:
--

[~ngrigoriev] you seem to be hitting a bunch of weird stuff lately, could you 
post your config and more details about your setup? and perhaps a full log 
(start node - this happens)? And, how do you write to the cluster?


was (Author: krummas):
[~ngrigoriev] you seem to be hitting a bunch of weird stuff lately, could you 
post your config and more details about your setup? and perhaps a full log 
(start node - this happens)? 



[jira] [Commented] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188364#comment-14188364
 ] 

Nikolai Grigoriev commented on CASSANDRA-8190:
--

[~krummas]

Marcus, believe me I do not really enjoy hitting this weird stuff lately ;)

Most of the background is in CASSANDRA-7949 (the one you have marked resolved, 
although I am not sure I fully agree with that resolution).

The only detail I would add to CASSANDRA-7949 is that finally, after ~3 weeks 
the cluster has managed to finish all compactions. 3 weeks to compact the data 
created in ~4 days. In between I have lost the patience, stopped it and ran 
sstablesplit on all large sstables (anything larger than 1Gb) on each node. And 
then I started the nodes one by one once they were done with the split. Upon 
restart each node had between ~2K and 7K compactions to complete. I had to let 
them finish them. On the way I have seen these errors on different nodes at 
different time - so I reported them.

Yesterday night the last node has finished the compactions. I've been scrubbing 
each node after the compactions were done to make sure the data integrity is 
not broken. Now I am about to restart the load that updates and fetches the 
data. We are doing some kind of modelling for our real data, a capacity 
exercise to determine the size of the production cluster.



[jira] [Comment Edited] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188364#comment-14188364
 ] 

Nikolai Grigoriev edited comment on CASSANDRA-8190 at 10/29/14 2:09 PM:


[~krummas]

Marcus, believe me I do not really enjoy hitting this weird stuff lately ;)

Most of the background is in CASSANDRA-7949 (the one you have marked resolved, 
although I am not sure I fully agree with that resolution).

The only detail I would add to CASSANDRA-7949 is that finally, after ~3 weeks 
the cluster has managed to finish all compactions. 3 weeks to compact the data 
created in ~4 days. In between I have lost the patience, stopped it and ran 
sstablesplit on all large sstables (anything larger than 1Gb) on each node. And 
then I started the nodes one by one once they were done with the split. Upon 
restart each node had between ~2K and 7K compactions to complete. I had to let 
them finish them. On the way I have seen these errors on different nodes at 
different time - so I reported them. My goal was to get the system to the state 
with no pending compactions and all sstables having the size close to the 
target one. This is why I used the flag from CASSANDRA-6621 
(cassandra.disable_stcs_in_l0), otherwise the cluster would stay in unusable 
state forever.

Yesterday night the last node has finished the compactions. I've been scrubbing 
each node after the compactions were done to make sure the data integrity is 
not broken. Now I am about to restart the load that updates and fetches the 
data. We are doing some kind of modelling for our real data, a capacity 
exercise to determine the size of the production cluster.


was (Author: ngrigor...@gmail.com):
[~krummas]

Marcus, believe me I do not really enjoy hitting this weird stuff lately ;)

Most of the background is in CASSANDRA-7949 (the one you have marked resolved, 
although I am not sure I fully agree with that resolution).

The only detail I would add to CASSANDRA-7949 is that finally, after ~3 weeks 
the cluster has managed to finish all compactions. 3 weeks to compact the data 
created in ~4 days. In between I have lost the patience, stopped it and ran 
sstablesplit on all large sstables (anything larger than 1Gb) on each node. And 
then I started the nodes one by one once they were done with the split. Upon 
restart each node had between ~2K and 7K compactions to complete. I had to let 
them finish them. On the way I have seen these errors on different nodes at 
different time - so I reported them.

Yesterday night the last node has finished the compactions. I've been scrubbing 
each node after the compactions were done to make sure the data integrity is 
not broken. Now I am about to restart the load that updates and fetches the 
data. We are doing some kind of modelling for our real data, a capacity 
exercise to determine the size of the production cluster.


[jira] [Comment Edited] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188364#comment-14188364
 ] 

Nikolai Grigoriev edited comment on CASSANDRA-8190 at 10/29/14 2:10 PM:


[~krummas]

Marcus, believe me I do not really enjoy hitting this weird stuff lately ;)

Most of the background is in CASSANDRA-7949 (the one you have marked resolved, 
although I am not sure I fully agree with that resolution).

The only detail I would add to CASSANDRA-7949 is that finally, after ~3 weeks 
the cluster has managed to finish all compactions. 3 weeks to compact the data 
created in ~4 days. In between I have lost the patience, stopped it and ran 
sstablesplit on all large sstables (anything larger than 1Gb) on each node. And 
then I started the nodes one by one once they were done with the split. Upon 
restart each node had between ~2K and 7K compactions to complete. I had to let 
them finish them. On the way I have seen these errors on different nodes at 
different time - so I reported them. My goal was to get the system to the state 
with no pending compactions and all sstables having the size close to the 
target one. This is why I used the flag from CASSANDRA-6621 
(cassandra.disable_stcs_in_l0), otherwise the cluster would stay in unusable 
state forever.

Yesterday night the last node has finished the compactions. I've been scrubbing 
each node after the compactions were done to make sure the data integrity is 
not broken. Now I am about to restart the load that updates and fetches the 
data. We are doing some kind of modelling for our real data, a capacity 
exercise to determine the size of the production cluster.

Note that the configuration I am attaching was modified a bit to attempt to 
speed up compactions. There was not too much to tune, but still... like 0 
compaction throughput limit etc.


was (Author: ngrigor...@gmail.com):
[~krummas]

Marcus, believe me I do not really enjoy hitting this weird stuff lately ;)

Most of the background is in CASSANDRA-7949 (the one you have marked resolved, 
although I am not sure I fully agree with that resolution).

The only detail I would add to CASSANDRA-7949 is that finally, after ~3 weeks 
the cluster has managed to finish all compactions. 3 weeks to compact the data 
created in ~4 days. In between I have lost the patience, stopped it and ran 
sstablesplit on all large sstables (anything larger than 1Gb) on each node. And 
then I started the nodes one by one once they were done with the split. Upon 
restart each node had between ~2K and 7K compactions to complete. I had to let 
them finish them. On the way I have seen these errors on different nodes at 
different time - so I reported them. My goal was to get the system to the state 
with no pending compactions and all sstables having the size close to the 
target one. This is why I used the flag from CASSANDRA-6621 
(cassandra.disable_stcs_in_l0), otherwise the cluster would stay in unusable 
state forever.

Yesterday night the last node has finished the compactions. I've been scrubbing 
each node after the compactions were done to make sure the data integrity is 
not broken. Now I am about to restart the load that updates and fetches the 
data. We are doing some kind of modelling for our real data, a capacity 
exercise to determine the size of the production cluster.


[jira] [Updated] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-8190:
-
Attachment: cassandra.yaml
cassandra-env.sh

config files



[jira] [Created] (CASSANDRA-8208) Inconsistent failure handling with repair

2014-10-29 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-8208:
--

 Summary: Inconsistent failure handling with repair
 Key: CASSANDRA-8208
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8208
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
 Fix For: 3.0


I think we introduced this with CASSANDRA-6455. The problem is that we now 
treat all repair futures as a single unit (Futures.allAsList(..)), which makes 
the whole thing fail if one sub-future fails. Also, when one of those fails, we 
notify nodetool that we failed and we stop the executor with shutdownNow(), 
which throws out any pending RepairJobs.

[~yukim] I think we used to be able to proceed with the other RepairSessions 
even if one fails, right? If not, we should probably call cancel on the 
RepairJob runnables which are in queue for the executor after calling 
shutdownNow() in repairComplete() in StorageService. 
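The fail-fast behaviour of wrapping all sub-futures into one unit can be sketched with the JDK's CompletableFuture, used here as a stand-in for Guava's Futures.allAsList (an illustrative sketch under that analogy, not Cassandra's repair code; the "session" names are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Sketch of the fail-fast semantics described above, using the JDK's
// CompletableFuture.allOf as an analogue of Guava's Futures.allAsList
// (illustrative only, not Cassandra's repair code): one failed "session"
// fails the combined future even though the other sessions succeeded.
public class RepairFutureSketch {
    public static void main(String[] args) {
        CompletableFuture<String> session1 = CompletableFuture.completedFuture("range 1 repaired");
        CompletableFuture<String> session2 = CompletableFuture.failedFuture(new RuntimeException("session 2 failed"));
        CompletableFuture<String> session3 = CompletableFuture.completedFuture("range 3 repaired");

        CompletableFuture<Void> wholeRepair = CompletableFuture.allOf(session1, session2, session3);

        boolean reportedAsFailed;
        try {
            wholeRepair.join();
            reportedAsFailed = false;
        } catch (CompletionException e) {
            reportedAsFailed = true; // this is what nodetool would be told
        }
        // The successful sessions did complete; only the aggregate failed.
        System.out.println("whole repair failed: " + reportedAsFailed);
        System.out.println("session1: " + session1.join() + ", session3: " + session3.join());
    }
}
```

Proceeding with the other sessions despite one failure would correspond to something like Guava's Futures.successfulAsList, which collects the successful results (nulling out the failures) instead of failing the aggregate.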





[jira] [Updated] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-8190:
-
Attachment: system.log.gz

a sample log

 Compactions stop completely because of RuntimeException in CompactionExecutor
 -

 Key: CASSANDRA-8190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8190
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2 (Cassandra 2.0.10)
Reporter: Nikolai Grigoriev
Assignee: Marcus Eriksson
 Attachments: cassandra-env.sh, cassandra.yaml, jstack.txt.gz, 
 system.log.gz, system.log.gz


 I have a cluster that is recovering from being overloaded with writes.  I am 
 using the workaround from CASSANDRA-6621 to prevent the STCS fallback (which 
 is killing the cluster - see CASSANDRA-7949). 
 I have observed that after one or more exceptions like this
 {code}
 ERROR [CompactionExecutor:4087] 2014-10-26 22:50:05,016 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:4087,1,main]
 java.lang.RuntimeException: Last written key DecoratedKey(425124616570337476, 
 0010033523da10033523da10
 400100) = current key DecoratedKey(-8778432288598355336, 
 0010040c7a8f10040c7a8f10
 400100) writing into 
 /cassandra-data/disk2/myks/mytable/myks-mytable-tmp-jb-130379-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 the node completely stops the compactions and I end up in the state like this:
 {code}
 # nodetool compactionstats
 pending tasks: 1288
    compaction type       keyspace        table       completed          total     unit   progress
 Active compaction remaining time :n/a
 {code}
 The node recovers if restarted and starts compactions - until getting more 
 exceptions like this.
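
The exception above is the compaction write path's ordering guard firing: output rows must be appended in strictly increasing key order. A hypothetical sketch of that invariant (the class name and the token-only comparison are illustrative, not Cassandra's actual SSTableWriter.beforeAppend):

```java
// Sketch of the "last written key >= current key" guard: an SSTable writer
// must receive keys in strictly increasing order, so an out-of-order key
// aborts the compaction with a RuntimeException like the one quoted above.
class OrderedWriterSketch {
    private long lastToken = Long.MIN_VALUE;
    private boolean first = true;

    void append(long token) {
        if (!first && token <= lastToken)
            throw new RuntimeException(
                "Last written key " + lastToken + " >= current key " + token);
        lastToken = token;
        first = false;
    }

    public static void main(String[] args) {
        OrderedWriterSketch w = new OrderedWriterSketch();
        for (long t : new long[]{-3, 5, 9}) w.append(t); // increasing: fine
        try {
            w.append(4); // out of order, as in the log above
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the real code the comparison is on DecoratedKey (token plus key bytes); the sketch collapses that to a single long for brevity.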



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5256) "Memory was freed" AssertionError During Major Compaction

2014-10-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188434#comment-14188434
 ] 

Jonathan Ellis commented on CASSANDRA-5256:
---

That is a different error; please open a new ticket.

 "Memory was freed" AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the 

[jira] [Commented] (CASSANDRA-8168) Require Java 8

2014-10-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188438#comment-14188438
 ] 

Jonathan Ellis commented on CASSANDRA-8168:
---

I don't see how we can reasonably endorse openjdk stronger than "go ahead and 
try it, but if it crashes go back to oracle."  It's less stable by design.

 Require Java 8
 --

 Key: CASSANDRA-8168
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8168
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
 Fix For: 3.0


 This is to discuss requiring Java 8 for version >= 3.0.
 There are a couple of big reasons for this.
 * Better support for complex async work, e.g. CASSANDRA-5239:
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html
 * Use Nashorn for Javascript UDFs (CASSANDRA-7395)
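
The CompletableFuture API linked above is the heart of the first bullet; a minimal sketch of the composition style it enables (class and method names here are illustrative only):

```java
import java.util.concurrent.CompletableFuture;

class AsyncSketch {
    // Java 8 CompletableFuture lets async stages be declared as a pipeline
    // instead of hand-written callback plumbing -- the kind of composition
    // the complex async work in CASSANDRA-5239 calls for.
    static String demo() {
        return CompletableFuture
                .supplyAsync(() -> 21)           // runs on the common pool
                .thenApply(n -> n * 2)           // transform when value arrives
                .thenApply(n -> "answer=" + n)   // chain further stages freely
                .join();                         // block only at the very end
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints answer=42
    }
}
```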



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8206) Removing items from a set breaks secondary index

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8206:
--

Assignee: Tyler Hobbs

 Removing items from a set breaks secondary index
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs

 Removing items from a set breaks index for field {{id}}:
 {noformat}
 cqlsh:cs CREATE TABLE buckets (
   ...   tenant int,
   ...   id int,
   ...   items set<text>,
   ...   PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 
 'bar'});
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 --------+----+----------------
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+----------------
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs UPDATE buckets SET items=items-{'foo'} WHERE tenant=1 AND id=1;
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 --------+----+---------
   1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs DROP INDEX buckets_ids;
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+---------
   1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure, only deletes do. I also did not 
 test whether other collections are affected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] git commit: merge from 2.1

2014-10-29 Thread jbellis
merge from 2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/242d54b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/242d54b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/242d54b4

Branch: refs/heads/trunk
Commit: 242d54b493cc7266f6ad14b4d71d1c2309a0220d
Parents: 45084f1 fee7137
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Oct 29 10:25:25 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Oct 29 10:25:25 2014 -0500

--
 CHANGES.txt| 1 +
 test/unit/org/apache/cassandra/db/RecoveryManagerTest.java | 8 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/242d54b4/CHANGES.txt
--
diff --cc CHANGES.txt
index 5a3f8ce,274d07a..61ec5c1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
 +3.0
 + * Extend Descriptor to include a format value and refactor reader/writer 
apis (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455)
 +
 +
  2.1.2
+  * Fix race in RecoveryManagerTest (CASSANDRA-8176)
   * Avoid IllegalArgumentException while sorting sstables in
 IndexSummaryManager (CASSANDRA-8182)
   * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/242d54b4/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
--
diff --cc test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
index ff90998,460c267..78e82f3
--- a/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
+++ b/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
@@@ -44,40 -37,21 +44,43 @@@ import static org.apache.cassandra.db.K
  import static org.apache.cassandra.Util.cellname;
  
  @RunWith(OrderedJUnit4ClassRunner.class)
 -public class RecoveryManagerTest extends SchemaLoader
 +public class RecoveryManagerTest
  {
 +private static final String KEYSPACE1 = "RecoveryManagerTest1";
 +private static final String CF_STANDARD1 = "Standard1";
 +private static final String CF_COUNTER1 = "Counter1";
 +
 +private static final String KEYSPACE2 = "RecoveryManagerTest2";
 +private static final String CF_STANDARD3 = "Standard3";
 +
 +@BeforeClass
 +public static void defineSchema() throws ConfigurationException
 +{
 +SchemaLoader.prepareServer();
 +SchemaLoader.createKeyspace(KEYSPACE1,
 +SimpleStrategy.class,
 +KSMetaData.optsWithRF(1),
 +SchemaLoader.standardCFMD(KEYSPACE1, 
CF_STANDARD1),
 +CFMetaData.denseCFMetaData(KEYSPACE1, 
CF_COUNTER1, BytesType.instance).defaultValidator(CounterColumnType.instance));
 +SchemaLoader.createKeyspace(KEYSPACE2,
 +SimpleStrategy.class,
 +KSMetaData.optsWithRF(1),
 +SchemaLoader.standardCFMD(KEYSPACE2, 
CF_STANDARD3));
 +}
 +
  @Test
- public 

[jira] [Created] (CASSANDRA-8209) Cassandra crashes when running on JDK8 update 40

2014-10-29 Thread Jaroslav Kamenik (JIRA)
Jaroslav Kamenik created CASSANDRA-8209:
---

 Summary: Cassandra crashes when running on JDK8 update 40
 Key: CASSANDRA-8209
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8209
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik


It seems that the problem is a change made in update 40:

https://bugs.openjdk.java.net/browse/JDK-6642881

The result is:

java.lang.SecurityException: Cannot make java.lang.Class.classLoader accessible
at 
java.lang.reflect.AccessibleObject.setAccessible0(AccessibleObject.java:147) 
~[na:1.8.0_40-ea]
at 
java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:129) 
~[na:1.8.0_40-ea]
at org.github.jamm.MemoryMeter.addFieldChildren(MemoryMeter.java:204) 
~[jamm-0.2.6.jar:na]
at org.github.jamm.MemoryMeter.measureDeep(MemoryMeter.java:158) 
~[jamm-0.2.6.jar:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.measureForPreparedCache(SelectStatement.java:166)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.cql3.QueryProcessor.measure(QueryProcessor.java:546) 
~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.cql3.QueryProcessor.storePreparedStatement(QueryProcessor.java:441)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:404) 
~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:388) 
~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.transport.messages.PrepareMessage.execute(PrepareMessage.java:77)
 ~[apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.1.jar:2.1.1]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_40-ea]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.1.jar:2.1.1]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.1.jar:2.1.1]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]

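
A hypothetical probe (class and method names are illustrative) of the reflective access that jamm's MemoryMeter attempts; the outcome varies by JDK, with 8u40-ea producing the SecurityException above:

```java
import java.lang.reflect.Field;

class ClassLoaderFieldProbe {
    // What happens when reflection tries to open java.lang.Class.classLoader,
    // as MemoryMeter.addFieldChildren does. Before 8u40 the field could be
    // made accessible; 8u40 throws SecurityException (the JDK-6642881 change);
    // later JDKs filter the field out of getDeclaredField entirely.
    static String probe() {
        try {
            Field f = Class.class.getDeclaredField("classLoader");
            f.setAccessible(true);        // 8u40-ea fails on this call
            return "accessible";
        } catch (SecurityException e) {
            return "SecurityException";
        } catch (NoSuchFieldException e) {
            return "field filtered";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```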



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] git commit: Fix race in RecoveryManagerTest patch by Sam Tunnicliffe; reviewed by jbellis for CASSANDRA-8176

2014-10-29 Thread jbellis
Fix race in RecoveryManagerTest
patch by Sam Tunnicliffe; reviewed by jbellis for CASSANDRA-8176


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fee71377
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fee71377
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fee71377

Branch: refs/heads/trunk
Commit: fee713775cda0704e1cc188610ffe9005b6e2201
Parents: 1d285ea
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Oct 29 10:23:35 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Oct 29 10:23:35 2014 -0500

--
 CHANGES.txt| 1 +
 test/unit/org/apache/cassandra/db/RecoveryManagerTest.java | 8 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fee71377/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7afed1e..274d07a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.2
+ * Fix race in RecoveryManagerTest (CASSANDRA-8176)
  * Avoid IllegalArgumentException while sorting sstables in
IndexSummaryManager (CASSANDRA-8182)
  * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fee71377/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java 
b/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
index 2d820f3..460c267 100644
--- a/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
+++ b/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
@@ -40,13 +40,16 @@ import static org.apache.cassandra.Util.cellname;
 public class RecoveryManagerTest extends SchemaLoader
 {
 @Test
-public void testNothingToRecover() throws IOException {
+public void testNothingToRecover() throws IOException
+{
+CommitLog.instance.resetUnsafe();
 CommitLog.instance.recover();
 }
 
 @Test
 public void testOne() throws IOException
 {
+CommitLog.instance.resetUnsafe();
 Keyspace keyspace1 = Keyspace.open("Keyspace1");
 Keyspace keyspace2 = Keyspace.open("Keyspace2");
 
@@ -77,6 +80,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverCounter() throws IOException
 {
+CommitLog.instance.resetUnsafe();
 Keyspace keyspace1 = Keyspace.open("Keyspace1");
 
 Mutation rm;
@@ -108,6 +112,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverPIT() throws Exception
 {
+CommitLog.instance.resetUnsafe();
 Date date = CommitLogArchiver.format.parse("2112:12:12 12:12:12");
 long timeMS = date.getTime() - 5000;
 
@@ -133,6 +138,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverPITUnordered() throws Exception
 {
+CommitLog.instance.resetUnsafe();
 Date date = CommitLogArchiver.format.parse("2112:12:12 12:12:12");
 long timeMS = date.getTime();
 



[1/3] git commit: Fix race in RecoveryManagerTest patch by Sam Tunnicliffe; reviewed by jbellis for CASSANDRA-8176

2014-10-29 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 1d285eada -> fee713775
  refs/heads/trunk 45084f182 -> 242d54b49


Fix race in RecoveryManagerTest
patch by Sam Tunnicliffe; reviewed by jbellis for CASSANDRA-8176


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fee71377
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fee71377
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fee71377

Branch: refs/heads/cassandra-2.1
Commit: fee713775cda0704e1cc188610ffe9005b6e2201
Parents: 1d285ea
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Oct 29 10:23:35 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Oct 29 10:23:35 2014 -0500

--
 CHANGES.txt| 1 +
 test/unit/org/apache/cassandra/db/RecoveryManagerTest.java | 8 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fee71377/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7afed1e..274d07a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.2
+ * Fix race in RecoveryManagerTest (CASSANDRA-8176)
  * Avoid IllegalArgumentException while sorting sstables in
IndexSummaryManager (CASSANDRA-8182)
  * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fee71377/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java 
b/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
index 2d820f3..460c267 100644
--- a/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
+++ b/test/unit/org/apache/cassandra/db/RecoveryManagerTest.java
@@ -40,13 +40,16 @@ import static org.apache.cassandra.Util.cellname;
 public class RecoveryManagerTest extends SchemaLoader
 {
 @Test
-public void testNothingToRecover() throws IOException {
+public void testNothingToRecover() throws IOException
+{
+CommitLog.instance.resetUnsafe();
 CommitLog.instance.recover();
 }
 
 @Test
 public void testOne() throws IOException
 {
+CommitLog.instance.resetUnsafe();
 Keyspace keyspace1 = Keyspace.open("Keyspace1");
 Keyspace keyspace2 = Keyspace.open("Keyspace2");
 
@@ -77,6 +80,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverCounter() throws IOException
 {
+CommitLog.instance.resetUnsafe();
 Keyspace keyspace1 = Keyspace.open("Keyspace1");
 
 Mutation rm;
@@ -108,6 +112,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverPIT() throws Exception
 {
+CommitLog.instance.resetUnsafe();
 Date date = CommitLogArchiver.format.parse("2112:12:12 12:12:12");
 long timeMS = date.getTime() - 5000;
 
@@ -133,6 +138,7 @@ public class RecoveryManagerTest extends SchemaLoader
 @Test
 public void testRecoverPITUnordered() throws Exception
 {
+CommitLog.instance.resetUnsafe();
 Date date = CommitLogArchiver.format.parse("2112:12:12 12:12:12");
 long timeMS = date.getTime();
 



[jira] [Resolved] (CASSANDRA-8207) I have a 5 node cassandra cluster and i commissioned 1 new node to the cluster. when i added 1 node. it received streams from 3 nodes out of which 2 were completed s

2014-10-29 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-8207.
---
Resolution: Invalid

 I have a 5 node Cassandra cluster and I commissioned 1 new node to the 
 cluster. When I added the node, it received streams from 3 nodes, of which 2 
 completed successfully and one stream failed. How can I resume the stream 
 which has failed?
 -

 Key: CASSANDRA-8207
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8207
 Project: Cassandra
  Issue Type: Bug
Reporter: venkat





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5256) "Memory was freed" AssertionError During Major Compaction

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-5256:
-
Attachment: cassandra.yaml
cassandra-env.sh

 "Memory was freed" AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt, 
 cassandra-env.sh, cassandra.yaml, occurence frequency.png


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 

[jira] [Updated] (CASSANDRA-5256) "Memory was freed" AssertionError During Major Compaction

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-5256:
-
Attachment: occurence frequency.png

 "Memory was freed" AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt, 
 cassandra-env.sh, cassandra.yaml, occurence frequency.png


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code 

[jira] [Created] (CASSANDRA-8210) java.lang.AssertionError: Memory was freed exception in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)
Nikolai Grigoriev created CASSANDRA-8210:


 Summary: java.lang.AssertionError: Memory was freed exception in 
CompactionExecutor
 Key: CASSANDRA-8210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2, Cassandra 2.0.10, OEL 6.5, kernel 
3.8.13-44.el6uek.x86_64, 128Gb of RAM, swap disabled, JRE 1.7.0_67-b01
Reporter: Nikolai Grigoriev
Priority: Minor


I have just got this problem on multiple nodes. Cassandra 2.0.10 (DSE 4.5.2). 
After looking through the history I have found that it was actually happening 
on all nodes since the start of large compaction process (I've loaded tons of 
data in the system and then turned off all load to let it compact the data).

{code}
ERROR [CompactionExecutor:1196] 2014-10-28 17:14:50,124 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:1196,1,main]
java.lang.AssertionError: Memory was freed
at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
at 
org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
at 
org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
at 
org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:58)
at 
org.apache.cassandra.io.sstable.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:692)
at 
org.apache.cassandra.io.sstable.SSTableReader.estimatedKeysForRanges(SSTableReader.java:663)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:328)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.findDroppableSSTable(LeveledCompactionStrategy.java:354)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getMaximalTask(LeveledCompactionStrategy.java:125)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:113)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:192)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}
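The assertion originates in Memory.checkPosition(): the IndexSummary's off-heap buffer has already been freed (for example because the SSTableReader was replaced or released concurrently) while a compaction-strategy thread is still reading it. A rough Python sketch of the guard pattern, purely illustrative (the class and method names here are made up and are not Cassandra's actual implementation):

```python
class OffHeapMemory:
    """Toy model of an off-heap buffer whose accessors guard against use-after-free."""

    def __init__(self, size):
        self.size = size
        self.peer = bytearray(size)  # stands in for the raw native allocation

    def free(self):
        # After free(), any further access is a bug in the caller.
        self.peer = None

    def _check_position(self, offset, length):
        assert self.peer is not None, "Memory was freed"
        assert 0 <= offset and offset + length <= self.size, "position out of bounds"

    def get_int(self, offset):
        self._check_position(offset, 4)
        return int.from_bytes(self.peer[offset:offset + 4], "big")


mem = OffHeapMemory(8)
print(mem.get_int(0))   # reads succeed while the buffer is live
mem.free()
try:
    mem.get_int(0)      # a racing reader hits the same AssertionError as in the report
except AssertionError as e:
    print(e)            # Memory was freed
```

The point of the sketch is only that the assertion is a deliberate tripwire: the bug is in whichever code path keeps a reference to the buffer after it was released, not in the check itself.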



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8210) java.lang.AssertionError: Memory was freed exception in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-8210:
-
Attachment: cassandra.yaml
cassandra-env.sh
occurence frequency.png

 java.lang.AssertionError: Memory was freed exception in CompactionExecutor
 

 Key: CASSANDRA-8210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2, Cassandra 2.0.10, OEL 6.5, kernel 
 3.8.13-44.el6uek.x86_64, 128Gb of RAM, swap disabled, JRE 1.7.0_67-b01
Reporter: Nikolai Grigoriev
Priority: Minor
 Attachments: cassandra-env.sh, cassandra.yaml, occurence frequency.png







[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-5256:
-
Attachment: (was: occurence frequency.png)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-5256:
-
Attachment: (was: cassandra.yaml)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt


 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-5256:
-
Attachment: (was: cassandra-env.sh)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256-v2.txt, 5256-v4.txt, 5256-v5.txt, 5256.txt


 

[jira] [Updated] (CASSANDRA-8210) java.lang.AssertionError: Memory was freed exception in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev updated CASSANDRA-8210:
-
Attachment: system.log.gz

 java.lang.AssertionError: Memory was freed exception in CompactionExecutor
 

 Key: CASSANDRA-8210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2, Cassandra 2.0.10, OEL 6.5, kernel 
 3.8.13-44.el6uek.x86_64, 128Gb of RAM, swap disabled, JRE 1.7.0_67-b01
Reporter: Nikolai Grigoriev
Priority: Minor
 Attachments: cassandra-env.sh, cassandra.yaml, occurence 
 frequency.png, system.log.gz







[jira] [Commented] (CASSANDRA-8210) java.lang.AssertionError: Memory was freed exception in CompactionExecutor

2014-10-29 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188456#comment-14188456
 ] 

Nikolai Grigoriev commented on CASSANDRA-8210:
--

Opened new ticket as per [~jbellis]'s recommendation in response to my comment 
to CASSANDRA-5256

 java.lang.AssertionError: Memory was freed exception in CompactionExecutor
 

 Key: CASSANDRA-8210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2, Cassandra 2.0.10, OEL 6.5, kernel 
 3.8.13-44.el6uek.x86_64, 128Gb of RAM, swap disabled, JRE 1.7.0_67-b01
Reporter: Nikolai Grigoriev
Priority: Minor
 Attachments: cassandra-env.sh, cassandra.yaml, occurence 
 frequency.png, system.log.gz







[jira] [Created] (CASSANDRA-8211) Overlapping sstables in L1+

2014-10-29 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-8211:
--

 Summary: Overlapping sstables in L1+
 Key: CASSANDRA-8211
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8211
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
 Fix For: 2.0.12


Seems we have a bug that can create overlapping sstables in L1:

{code}
WARN [main] 2014-10-28 04:09:42,295 LeveledManifest.java (line 164) At level 2, 
SSTableReader(path='sstable') [DecoratedKey(2838397575996053472, 00
10066059b210066059b210400100),
 DecoratedKey(5516674013223138308, 
001000ff2d161000ff2d160
00010400100)] overlaps 
SSTableReader(path='sstable') [DecoratedKey(2839992722300822584, 
0010
00229ad21000229ad210400100),
 DecoratedKey(5532836928694021724, 
0010034b05a610034b05a6100
000400100)].  This could be caused by a bug in 
Cassandra 1.1.0 .. 1.1.3 or due to the fact that you have dropped sstables from 
another node into the data directory. Sending back to L0.  If
 you didn't drop in sstables, and have not yet run scrub, you should do so 
since you may also have rows out-of-order within an sstable
{code}

Which might manifest itself during compaction with this exception:
{code}
ERROR [CompactionExecutor:3152] 2014-10-28 00:24:06,134 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:3152,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(5516674013223138308, 
001000ff2d161000ff2d1610400100)
 >= current key DecoratedKey(2839992722300822584, 
001000229ad21000229ad210400100)
 writing into sstable
{code}
since we use LeveledScanner when compacting (the backing sstable scanner might 
go beyond the start of the next sstable scanner)
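The resulting out-of-order append can be sketched in a few lines. This is a hypothetical simplification in Python (not Cassandra code), with toy integer keys standing in for decorated keys:

```python
def concatenated_scan(sstable_key_ranges):
    """Yield keys from per-sstable scanners in sequence; assumes the ranges don't overlap."""
    for keys in sstable_key_ranges:
        yield from keys

def write_sorted(keys):
    """Mimic the writer's monotonicity check: every appended key must be strictly increasing."""
    last = None
    for key in keys:
        if last is not None and key <= last:
            raise RuntimeError(f"Last written key {last} >= current key {key}")
        last = key

# Non-overlapping leveled sstables compact fine:
write_sorted(concatenated_scan([[1, 5, 9], [10, 20]]))

# An overlapping pair (the second sstable starts before the first one ends)
# trips the same check as the exception in the log:
try:
    write_sorted(concatenated_scan([[1, 5, 9], [7, 20]]))
except RuntimeError as e:
    print(e)  # Last written key 9 >= current key 7
```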





[jira] [Updated] (CASSANDRA-8206) Removing items from a set breaks secondary index

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8206:
---
Fix Version/s: 2.1.2

 Removing items from a set breaks secondary index
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs
 Fix For: 2.1.2


 Removing items from a set breaks the index for field {{id}}:
 {noformat}
 cqlsh:cs> CREATE TABLE buckets (
   ...       tenant int,
   ...       id int,
   ...       items set<text>,
   ...       PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 'bar'});
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> UPDATE buckets SET items = items - {'foo'} WHERE tenant = 1 AND id = 1;
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs> DROP INDEX buckets_ids;
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure, only deletes do. Also did not test whether other collection types are affected.
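One way to picture the symptom: the delete path appears to drop the row's index entry even though the indexed column ({{id}}) never changed, so the base table stays correct while the index loses the row until it is rebuilt. A toy Python model of that bookkeeping mistake (purely illustrative; this is not how Cassandra's secondary indexes are actually implemented):

```python
# Table keyed by (tenant, id); secondary index maps id -> set of primary keys.
rows = {(1, 1): {"items": {"foo", "bar"}}}
index = {1: {(1, 1)}}

def remove_item_buggy(pk, item):
    rows[pk]["items"].discard(item)
    # Bug: the delete path also clears the index entry for the whole row,
    # although the indexed column `id` was never modified.
    index.get(pk[1], set()).discard(pk)

remove_item_buggy((1, 1), "foo")
print(rows[(1, 1)]["items"])   # {'bar'}  -- the row itself is fine
print(index.get(1, set()))     # set()    -- but a lookup by id = 1 finds nothing

# Rebuilding the index from the base table restores the entry,
# matching the DROP INDEX / CREATE INDEX workaround above:
index = {}
for (tenant, id_), _ in rows.items():
    index.setdefault(id_, set()).add((tenant, id_))
print(index[1])                # {(1, 1)}
```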





[jira] [Resolved] (CASSANDRA-8190) Compactions stop completely because of RuntimeException in CompactionExecutor

2014-10-29 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8190.

Resolution: Won't Fix

The compactions probably stop because we don't call 'submitBackground()' if the CompactionTask throws an exception, and since there are no writes to the node, no new compaction tasks get triggered.

This could be 'fixed' by calling submitBackground() in a finally block after the compaction task is executed, but I think that might be a bad idea since we could end up in weird infinite-loop situations if we keep throwing exceptions in the compaction tasks. And if we do throw, the node might need some manual intervention anyway.

Created CASSANDRA-8211 for the cause of the exception in this issue.
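The trade-off behind the Won't Fix can be illustrated with a toy scheduler in Python (hypothetical names, not Cassandra code): resubmitting only on success stalls after one failure, while resubmitting unconditionally retries a permanently failing task forever (capped here so the demo terminates).

```python
def schedule(task, resubmit_on_failure, cap=10):
    """Toy compaction scheduler: return how many times `task` ran before stopping.

    `task` raising RuntimeError models the CompactionTask exception in this ticket."""
    runs = 0
    while runs < cap:
        runs += 1
        try:
            task()
            break                  # task finished; nothing more to do in this demo
        except RuntimeError:
            if not resubmit_on_failure:
                break              # current behaviour: the executor goes idle
            # finally-style resubmit: loop and try again, potentially forever

    return runs

def always_fails():
    raise RuntimeError("Last written key >= current key")

print(schedule(always_fails, resubmit_on_failure=False))  # 1: compactions stall
print(schedule(always_fails, resubmit_on_failure=True))   # 10: hits the cap, i.e. an endless retry loop
```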

 Compactions stop completely because of RuntimeException in CompactionExecutor
 -

 Key: CASSANDRA-8190
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8190
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2 (Cassandra 2.0.10)
Reporter: Nikolai Grigoriev
Assignee: Marcus Eriksson
 Attachments: cassandra-env.sh, cassandra.yaml, jstack.txt.gz, 
 system.log.gz, system.log.gz


 I have a cluster that is recovering from being overloaded with writes.  I am 
 using the workaround from CASSANDRA-6621 to prevent the STCS fallback (which 
 is killing the cluster - see CASSANDRA-7949). 
 I have observed that after one or more exceptions like this
 {code}
 ERROR [CompactionExecutor:4087] 2014-10-26 22:50:05,016 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:4087,1,main]
 java.lang.RuntimeException: Last written key DecoratedKey(425124616570337476, 
 0010033523da10033523da10
 400100) >= current key DecoratedKey(-8778432288598355336, 
 0010040c7a8f10040c7a8f10
 400100) writing into 
 /cassandra-data/disk2/myks/mytable/myks-mytable-tmp-jb-130379-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 the node completely stops the compactions and I end up in the state like this:
 {code}
 # nodetool compactionstats
 pending tasks: 1288
           compaction type        keyspace           table       completed           total      unit  progress
 Active compaction remaining time :        n/a
 {code}
 The node recovers if restarted and starts compactions - until getting more 
 exceptions like this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8087) Multiple non-DISTINCT rows returned when page_size set

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reopened CASSANDRA-8087:

Reproduced In: 2.0.11, 2.0.10  (was: 2.0.10)

The test is still failing with the same problem, despite CASSANDRA-8108 being 
marked as resolved. We aren't seeing the issue in 2.1.1.

 Multiple non-DISTINCT rows returned when page_size set
 --

 Key: CASSANDRA-8087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8087
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.11


 Using the following statements to reproduce:
 {code}
 CREATE TABLE test (
 k int,
 p int,
 s int static,
 PRIMARY KEY (k, p)
 );
 INSERT INTO test (k, p) VALUES (1, 1);
 INSERT INTO test (k, p) VALUES (1, 2);
 SELECT DISTINCT k, s FROM test ;
 {code}
 Native clients that set result_page_size in the query message receive 
 multiple non-distinct rows back (one per clustered value p in row k).
 This is only reproduced on 2.0.10; it does not appear in 2.1.0.
 It does not appear in cqlsh for 2.0.10 because cqlsh uses Thrift.
 See https://datastax-oss.atlassian.net/browse/PYTHON-164 for background





[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-29 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188471#comment-14188471
 ] 

Marcus Eriksson commented on CASSANDRA-6285:


I think the cause of the latest exceptions in this ticket is CASSANDRA-8211

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  >= current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STCS worked to keep the compactions running.
 Especially my own table I would like to move to LCS.
 After a major compaction with STCS, the move to LCS fails with the same 
 exception.





[jira] [Commented] (CASSANDRA-8171) Clean up generics

2014-10-29 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188482#comment-14188482
 ] 

Joshua McKenzie commented on CASSANDRA-8171:


Might want a trivial comment on the intention of package-private on 'final C token' 
in AbstractToken - can add on commit. The reduction in noise surrounding 
generics on Token is particularly nice - TokenFactories are a lot cleaner now. 
Reducing 1/4 of our warnings is a great step - I'm +1 on us investigating 
moving to a warnings-as-errors paradigm on builds down the road someday; 
perhaps we should consider adding a build target with warnings shown, on a separate 
ticket? (Assuming we don't already have this and I'm just missing it in the 
build.xml...)

I'm +1 on commit as-is.  [~jbellis]: 2.1.2 or 3.0?  It's a pretty 
straightforward refactor, which leans me toward 2.1.2, but it's also dealing with 
some fundamental primitives of the system.  I'm on the fence.
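As a side note on the "used unsafely, hiding potential problems" point in the ticket, here is a tiny standalone example (not from the Cassandra codebase) of how a suppressed unchecked cast defers failure to a distant call site:

```java
import java.util.List;

// Illustration only: an unchecked generic cast that compiles cleanly but
// plants a ClassCastException far from the cast itself.
public class UncheckedHazard {
    @SuppressWarnings("unchecked")
    static <T> List<T> coerce(List<?> raw) {
        return (List<T>) raw; // no runtime check happens here (erasure)
    }

    public static void main(String[] args) {
        List<String> strings = coerce(List.of(1, 2, 3)); // silently wrong
        try {
            String s = strings.get(0); // the implicit checkcast finally fails here
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("latent ClassCastException surfaced far from the cast");
        }
    }
}
```

Warnings-as-errors would force the suppression (and the risk) to be visible at review time rather than at runtime.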

 Clean up generics
 -

 Key: CASSANDRA-8171
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8171
 Project: Cassandra
  Issue Type: Improvement
Reporter: Branimir Lambov
Assignee: Branimir Lambov
Priority: Minor
 Attachments: 8171.patch


 Some uses of generics in the code are causing much more harm than good, and 
 in some cases generic types are used unsafely, hiding potential problems in 
 the code.
 Generics need to be cleaned up to clarify the types, remove unnecessary type 
 specialization when it does not make sense, and significantly reduce the 
 number of unhelpful warnings.





[jira] [Updated] (CASSANDRA-4959) CQLSH insert help has typo

2014-10-29 Thread Ricardo Devis Agullo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ricardo Devis Agullo updated CASSANDRA-4959:

Attachment: CASSANDRA-4959.txt

Fixed typo on CQLSH help insert.

 CQLSH insert help has typo
 --

 Key: CASSANDRA-4959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4959
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Priority: Trivial
  Labels: cqlsh
 Attachments: CASSANDRA-4959.txt


 [cqlsh 2.3.0 | Cassandra 1.2.0-beta2-SNAPSHOT | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0]
 Use HELP for help.
 cqlsh> help INSERT
 INSERT INTO [<keyspace>.]<tablename>
 ( <colname1>, <colname2> [, <colname3> [, ...]] )
VALUES ( <colval1>, <colval2> [, <colval3> [, ...]] )
[USING TIMESTAMP <timestamp>]
  [AND TTL <timeToLive>]];
 Should be. 
 {quote}
 [AND TTL <timeToLive>]];
 {quote}
 Also it was not clear to me initially that you could just do:
 {quote}
 USING TTL <timeToLive>
 {quote}
 But maybe that is just me.





[jira] [Updated] (CASSANDRA-4959) CQLSH insert help has typo

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-4959:
---
Reviewer: Tyler Hobbs

 CQLSH insert help has typo
 --

 Key: CASSANDRA-4959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4959
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Priority: Trivial
  Labels: cqlsh
 Attachments: CASSANDRA-4959.txt


 [cqlsh 2.3.0 | Cassandra 1.2.0-beta2-SNAPSHOT | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0]
 Use HELP for help.
 cqlsh> help INSERT
 INSERT INTO [<keyspace>.]<tablename>
 ( <colname1>, <colname2> [, <colname3> [, ...]] )
VALUES ( <colval1>, <colval2> [, <colval3> [, ...]] )
[USING TIMESTAMP <timestamp>]
  [AND TTL <timeToLive>]];
 Should be. 
 {quote}
 [AND TTL <timeToLive>]];
 {quote}
 Also it was not clear to me initially that you could just do:
 {quote}
 USING TTL <timeToLive>
 {quote}
 But maybe that is just me.





[jira] [Updated] (CASSANDRA-8209) Cassandra crashes when running on JDK8 update 40

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8209:
---
Reproduced In: 2.1.1

 Cassandra crashes when running on JDK8 update 40
 

 Key: CASSANDRA-8209
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8209
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik

 It seems that the problem is a change made in update 40 - 
 https://bugs.openjdk.java.net/browse/JDK-6642881
 The result is:
 java.lang.SecurityException: Cannot make java.lang.Class.classLoader 
 accessible
 at 
 java.lang.reflect.AccessibleObject.setAccessible0(AccessibleObject.java:147) 
 ~[na:1.8.0_40-ea]
 at 
 java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:129) 
 ~[na:1.8.0_40-ea]
 at org.github.jamm.MemoryMeter.addFieldChildren(MemoryMeter.java:204) 
 ~[jamm-0.2.6.jar:na]
 at org.github.jamm.MemoryMeter.measureDeep(MemoryMeter.java:158) 
 ~[jamm-0.2.6.jar:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.measureForPreparedCache(SelectStatement.java:166)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.cql3.QueryProcessor.measure(QueryProcessor.java:546) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.cql3.QueryProcessor.storePreparedStatement(QueryProcessor.java:441)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:404) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.cql3.QueryProcessor.prepare(QueryProcessor.java:388) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.transport.messages.PrepareMessage.execute(PrepareMessage.java:77)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [apache-cassandra-2.1.1.jar:2.1.1]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
 [na:1.8.0_40-ea]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [apache-cassandra-2.1.1.jar:2.1.1]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea]





[jira] [Commented] (CASSANDRA-8207) I have a 5 node cassandra cluster and i commissioned 1 new node to the cluster. when i added 1 node. it received streams from 3 nodes out of which 2 were completed

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188517#comment-14188517
 ] 

Philip Thompson commented on CASSANDRA-8207:


Please refer these questions to IRC or the mailing list. JIRA is for bugs and 
feature requests.

 I have a 5 node cassandra cluster and i commissioned 1 new node to the 
 cluster. when i added 1 node. it received streams from 3 nodes out of which 2 
 were completed successfully and one stream got failed. how can i resume the 
 stream which has failed?
 -

 Key: CASSANDRA-8207
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8207
 Project: Cassandra
  Issue Type: Bug
Reporter: venkat







[jira] [Updated] (CASSANDRA-8206) Removing items from a set breaks secondary index

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8206:
---
Tester: Philip Thompson

 Removing items from a set breaks secondary index
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs
 Fix For: 2.1.2


 Removing items from a set breaks the index for field {{id}}:
 {noformat}
 cqlsh:cs> CREATE TABLE buckets (
   ...       tenant int,
   ...       id int,
   ...       items set<text>,
   ...       PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 'bar'});
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> UPDATE buckets SET items = items - {'foo'} WHERE tenant = 1 AND id = 1;
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs> DROP INDEX buckets_ids;
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure, only deleting does. Also didn't 
 test whether other collections are affected.





[jira] [Commented] (CASSANDRA-5640) Allow JOIN on partition key for tables in the same keyspace

2014-10-29 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188542#comment-14188542
 ] 

Jeremiah Jordan commented on CASSANDRA-5640:


Just came up with one reason to need this.  Getting counter and non counter 
data in the same query.

 Allow JOIN on partition key for tables in the same keyspace
 ---

 Key: CASSANDRA-5640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5640
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jeremiah Jordan
Priority: Minor
  Labels: cql

 People whine about there not being any JOIN in CQL a lot.  I was thinking we 
 could add JOIN with the restriction that you can only join on partition key, 
 and only if the tables are in the same keyspace.  That way it could be done 
 local to each node, and would not need to be a true distributed join.
 Feel free to shoot me down ;)





[jira] [Updated] (CASSANDRA-8031) Custom Index describe broken again

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8031:
---
Fix Version/s: 2.1.2

 Custom Index describe broken again
 --

 Key: CASSANDRA-8031
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8031
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
  Labels: cqlsh
 Fix For: 2.1.2


 Since we switched over to the python driver for cqlsh, describe of custom 
 indexes is broken again. Previously added in CASSANDRA-5760
 Driver bug: https://datastax-oss.atlassian.net/browse/PYTHON-165





[jira] [Commented] (CASSANDRA-8031) Custom Index describe broken again

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188546#comment-14188546
 ] 

Philip Thompson commented on CASSANDRA-8031:


PYTHON-165 was resolved. Updating cqlsh to use cassandra-driver 2.1.2 should 
fix this issue.

 Custom Index describe broken again
 --

 Key: CASSANDRA-8031
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8031
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
  Labels: cqlsh
 Fix For: 2.1.2


 Since we switched over to the python driver for cqlsh, describe of custom 
 indexes is broken again. Previously added in CASSANDRA-5760
 Driver bug: https://datastax-oss.atlassian.net/browse/PYTHON-165





[jira] [Assigned] (CASSANDRA-8031) Custom Index describe broken again

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8031:
--

Assignee: Tyler Hobbs

 Custom Index describe broken again
 --

 Key: CASSANDRA-8031
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8031
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
  Labels: cqlsh
 Fix For: 2.1.2


 Since we switched over to the python driver for cqlsh, describe of custom 
 indexes is broken again. Previously added in CASSANDRA-5760
 Driver bug: https://datastax-oss.atlassian.net/browse/PYTHON-165





[jira] [Commented] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188549#comment-14188549
 ] 

Michael Shuler commented on CASSANDRA-8203:
---

Could you attach an example INSERT to populate some data that reproduces this 
issue? I am unable to. Here's what I did for a quick try, and my select queries 
did not contain any nulls:

{noformat}
echo "CREATE KEYSPACE iss WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};" | cqlsh
echo "CREATE TABLE iss.target_state ( cell ascii, host ascii, service ascii, 
active ascii, cell_etag timeuuid static, etag timeuuid, states map<ascii, 
ascii>, PRIMARY KEY (cell, host, service) ) WITH CLUSTERING ORDER BY (host ASC, 
service ASC);" | cqlsh
for i in `seq 1 2000`; do echo "INSERT INTO iss.target_state (cell, host, 
service, active, cell_etag, etag, states) VALUES ('cell${i}', 'host${i}', 
'service${i}', 'active${i}', now(), now(), {'map${i}' : 'map${i}'});"; done | 
cqlsh
{noformat}

Was this schema created as specified from day 1, or was it altered along the 
way?

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov

 I'm using a static column.
 When I select the whole table (with a large limit), some rows contain a null static 
 column value, but that cannot happen according to application logic.
 When I select the same row using keys, the static column has a non-null value as 
 expected.
 Schema:
 {quote}
 cqlsh:iss> describe table target_state;
 CREATE TABLE iss.target_state (
 cell ascii,
 host ascii,
 service ascii,
 active ascii,
 cell_etag timeuuid static,
 etag timeuuid,
 states map<ascii, ascii>,
 PRIMARY KEY (cell, host, service)
 ) WITH CLUSTERING ORDER BY (host ASC, service ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '\{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = \{'min_threshold': '4', 
 'unchecked_tombstone_compaction': 'true', 'tombstone_compaction_interval': 
 '43200', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = \{'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 86400
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 {quote}
 Whole table query:
 {quote}
 cqlsh:iss> select cell, host, service, cell_etag, etag from target_state 
 LIMIT 5;
 {quote}
 Contains in the middle of results:
 {quote}
 606365955656309747054552016:ya(200,15) |  
 ams1-0003.search.yandex.net |   18265 | null 
 | 737a05e0-5532-11e4-bf0b-efa731c31cd0
 {quote}
 Query of a single row gives non-null cell_etag column:
 {quote}
 cqlsh:iss> select host, service, cell_etag, etag from target_state where cell 
 = '606365955656309747054552016:ya(200,15)' and host = 
 'ams1-0003.search.yandex.net' and service = '18265';
  host| service | cell_etag
 | etag
 -+-+--+--
  ams1-0003.search.yandex.net |   18265 | bc635e60-5ec3-11e4-bdfd-9f65487c7454 
 | 737a05e0-5532-11e4-bf0b-efa731c31cd0
 (1 rows)
 {quote}





[jira] [Commented] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188563#comment-14188563
 ] 

Alexander Sterligov commented on CASSANDRA-8203:


We're using NetworkTopologyStrategy, but I don't think that it matters.
{quote}
CREATE KEYSPACE iss WITH replication = {'class': 'NetworkTopologyStrategy', 
'FOL_A': '3', 'IVA': '3', 'MYT_B': '3', 'SAS_1': '2', 'UGR_4': '3', 'UGR_B': 
'3'}  AND durable_writes = true;
{quote}

Schema was never altered, but this table was truncated several times.

We don't insert data, but update already-created columns. I'll try to create a 
test case which reproduces the problem.

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov

 I'm using a static column.
 When I select the whole table (with a large limit), some rows contain a null static 
 column value, but that cannot happen according to application logic.
 When I select the same row using keys, the static column has a non-null value as 
 expected.
 Schema:
 {quote}
 cqlsh:iss> describe table target_state;
 CREATE TABLE iss.target_state (
 cell ascii,
 host ascii,
 service ascii,
 active ascii,
 cell_etag timeuuid static,
 etag timeuuid,
 states map<ascii, ascii>,
 PRIMARY KEY (cell, host, service)
 ) WITH CLUSTERING ORDER BY (host ASC, service ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '\{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = \{'min_threshold': '4', 
 'unchecked_tombstone_compaction': 'true', 'tombstone_compaction_interval': 
 '43200', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = \{'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 86400
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 {quote}
 Whole table query:
 {quote}
 cqlsh:iss> select cell, host, service, cell_etag, etag from target_state 
 LIMIT 5;
 {quote}
 Contains in the middle of results:
 {quote}
 606365955656309747054552016:ya(200,15) |  
 ams1-0003.search.yandex.net |   18265 | null 
 | 737a05e0-5532-11e4-bf0b-efa731c31cd0
 {quote}
 Query of a single row gives non-null cell_etag column:
 {quote}
 cqlsh:iss> select host, service, cell_etag, etag from target_state where cell 
 = '606365955656309747054552016:ya(200,15)' and host = 
 'ams1-0003.search.yandex.net' and service = '18265';
  host| service | cell_etag
 | etag
 -+-+--+--
  ams1-0003.search.yandex.net |   18265 | bc635e60-5ec3-11e4-bdfd-9f65487c7454 
 | 737a05e0-5532-11e4-bf0b-efa731c31cd0
 (1 rows)
 {quote}





[jira] [Created] (CASSANDRA-8212) Archive Commitlog Test Failing

2014-10-29 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8212:
--

 Summary: Archive Commitlog Test Failing
 Key: CASSANDRA-8212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8212
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson


The test snapshot_test.TestArchiveCommitlog.test_archive_commitlog is failing 
on 2.0.11, but not 2.1.1. We attempt to replay 65000 rows, but in 2.0.11 only 
63000 rows succeed. URL for test output:

http://cassci.datastax.com/job/cassandra-2.0_dtest/lastCompletedBuild/testReport/snapshot_test/TestArchiveCommitlog/test_archive_commitlog/





[jira] [Resolved] (CASSANDRA-8210) java.lang.AssertionError: Memory was freed exception in CompactionExecutor

2014-10-29 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8210.

Resolution: Duplicate

I suspect this is related to CASSANDRA-8211 - if we pick an sstable that is 
already compacting, this can happen, and the bug in 8211 is likely that we 
pick bad compaction candidates.
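For illustration, the guard being described could be sketched like this (hypothetical standalone names, not the actual Cassandra code, where this bookkeeping lives elsewhere): candidates already marked as compacting are filtered out before a new task is built.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch only: filter out sstables that are already part of a running compaction.
public class CandidateGuard {
    private final Set<String> compacting = new HashSet<>();

    // Returns only candidates not currently compacting, and marks them as such.
    synchronized List<String> chooseCandidates(List<String> sstables) {
        List<String> chosen = new ArrayList<>();
        for (String s : sstables)
            if (!compacting.contains(s))
                chosen.add(s);
        compacting.addAll(chosen);
        return chosen;
    }

    public static void main(String[] args) {
        CandidateGuard guard = new CandidateGuard();
        System.out.println(guard.chooseCandidates(List.of("a", "b"))); // [a, b]
        System.out.println(guard.chooseCandidates(List.of("a", "c"))); // [c] - "a" is busy
    }
}
```

Skipping busy sstables avoids freeing (or double-compacting) data another task is still reading, which is the failure mode behind the "Memory was freed" assertion.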

 java.lang.AssertionError: Memory was freed exception in CompactionExecutor
 

 Key: CASSANDRA-8210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8210
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.2, Cassandra 2.0.10, OEL 6.5, kernel 
 3.8.13-44.el6uek.x86_64, 128Gb of RAM, swap disabled, JRE 1.7.0_67-b01
Reporter: Nikolai Grigoriev
Priority: Minor
 Attachments: cassandra-env.sh, cassandra.yaml, occurence 
 frequency.png, system.log.gz


 I have just got this problem on multiple nodes. Cassandra 2.0.10 (DSE 4.5.2). 
 After looking through the history I have found that it was actually happening 
 on all nodes since the start of large compaction process (I've loaded tons of 
 data in the system and then turned off all load to let it compact the data).
 {code}
 ERROR [CompactionExecutor:1196] 2014-10-28 17:14:50,124 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1196,1,main]
 java.lang.AssertionError: Memory was freed
 at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
 at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
 at 
 org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:58)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:692)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.estimatedKeysForRanges(SSTableReader.java:663)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:328)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.findDroppableSSTable(LeveledCompactionStrategy.java:354)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getMaximalTask(LeveledCompactionStrategy.java:125)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:113)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:192)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Updated] (CASSANDRA-8206) Deleting columns breaks secondary index on clustering column

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8206:
---
Priority: Critical  (was: Major)
 Summary: Deleting columns breaks secondary index on clustering column  
(was: Removing items from a set breaks secondary index)

It looks like this isn't limited to removing collection items.  Deleting any 
non-pk column will remove the index entry for secondary indexes on clustering 
columns like this:

{noformat}
cqlsh:ks1> CREATE TABLE bar (a int, b int, c int, d int, PRIMARY KEY (a, b));
cqlsh:ks1> CREATE INDEX ON bar (b);
cqlsh:ks1> INSERT INTO bar (a, b, c, d) VALUES (0, 0, 0, 0);
cqlsh:ks1> SELECT * FROM bar WHERE b = 0;

 a | b | c | d
---+---+---+---
 0 | 0 | 0 | 0

(1 rows)
cqlsh:ks1> UPDATE bar SET c = null WHERE a = 0 AND b = 0;
cqlsh:ks1> SELECT * FROM bar WHERE b = 0;

 a | b | c | d
---+---+---+---

(0 rows)
{noformat}

 Deleting columns breaks secondary index on clustering column
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.2


 Removing items from a set breaks the index for field {{id}}:
 {noformat}
 cqlsh:cs> CREATE TABLE buckets (
   ...       tenant int,
   ...       id int,
   ...       items set<text>,
   ...       PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 'bar'});
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+----------------
       1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs> UPDATE buckets SET items = items - {'foo'} WHERE tenant = 1 AND id = 1;
 cqlsh:cs> SELECT * FROM buckets;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs> DROP INDEX buckets_ids;
 cqlsh:cs> CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs> SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+---------
       1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure, only deleting does. Also didn't 
 test whether other collections are affected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8116) HSHA fails with default rpc_max_threads setting

2014-10-29 Thread Peter Haggerty (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188625#comment-14188625
 ] 

Peter Haggerty commented on CASSANDRA-8116:
---

The latest 2.0.x release of Cassandra using hsha with default settings either 
stalls after a few minutes of operation or crashes.

This does not seem like it should have a priority of Minor; it is a major 
problem. The longer 2.0.11 remains the latest version, the bigger the problem 
becomes for new users, and for existing users who have automation and a high 
level of trust in minor-version upgrades.



 HSHA fails with default rpc_max_threads setting
 ---

 Key: CASSANDRA-8116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mike Adamson
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.12, 2.1.2

 Attachments: 8116-throw-exc-2.0.txt, 8116.txt


 The HSHA server fails with 'Out of heap space' error if the rpc_max_threads 
 is left at its default setting (unlimited) in cassandra.yaml.
 I'm not proposing any code change for this but have submitted a patch for a 
 comment change in cassandra.yaml to indicate that rpc_max_threads needs to be 
 changed if you use HSHA.
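For illustration only, a cassandra.yaml fragment that bounds the pool when hsha is in use might look like the sketch below; the thread counts are assumed example values, not project recommendations:

```yaml
# Sketch only -- illustrative values, not tuning advice.
rpc_server_type: hsha
rpc_min_threads: 16
# The default (unlimited) rpc_max_threads leads to the 'Out of heap space'
# failure described above when using hsha; bound it explicitly.
rpc_max_threads: 2048
```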



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188652#comment-14188652
 ] 

Alexander Sterligov commented on CASSANDRA-8203:


The following code reproduces it. It always generates exactly one null, at the 
same column.
{quote}
echo "drop keyspace if exists test;" | ./cqlsh
echo "CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};" | ./cqlsh
echo "CREATE TABLE test.test ( row ascii, col ascii, uuid_static timeuuid static, uuid timeuuid, PRIMARY KEY (row, col) );" | ./cqlsh
for i in `seq 1 10000`; do echo "UPDATE test.test SET uuid_static = now(), uuid = now() WHERE row = '$((i/1000))' AND col = '$((i%1000))';"; done | ./cqlsh
echo "SELECT * FROM test.test WHERE row = '3' AND col = '999';" | ./cqlsh
echo "SELECT * FROM test.test;" | ./cqlsh | grep null
{quote}
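For orientation, the loop above shards i into partitions via $((i/1000)) and clustering keys via $((i%1000)). Assuming the upper bound of the (garbled) `seq` invocation was 10000, the resulting key layout can be sketched in Python:

```python
# Key layout produced by the cqlsh loop above, assuming it ran i over
# 1..10000: row = i // 1000, col = i % 1000 (bash's $((i/1000)) / $((i%1000))).

rows = {}
for i in range(1, 10001):
    rows.setdefault(str(i // 1000), set()).add(str(i % 1000))

print(len(rows))           # 11 partitions: rows '0' through '10'
print(len(rows["3"]))      # 1000 clustering keys in a full partition
print("999" in rows["3"])  # True -- the (row='3', col='999') row queried above
```

Since every UPDATE also rewrites uuid_static, every partition's static cell should end up non-null, which is why the nulls in the full-table read are unexpected.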


 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov

 I'm using a static column.
 When I select the whole table (with a large limit), some rows contain a null 
 static column value, but that cannot happen according to the application logic.
 When I select the same row by its keys, the static column has a non-null value 
 as expected.
 Schema:
 {quote}
 cqlsh:iss> describe table target_state;
 CREATE TABLE iss.target_state (
 cell ascii,
 host ascii,
 service ascii,
 active ascii,
 cell_etag timeuuid static,
 etag timeuuid,
 states map<ascii, ascii>,
 PRIMARY KEY (cell, host, service)
 ) WITH CLUSTERING ORDER BY (host ASC, service ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 
 'unchecked_tombstone_compaction': 'true', 'tombstone_compaction_interval': 
 '43200', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 86400
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 {quote}
 Whole table query:
 {quote}
 cqlsh:iss> select cell, host, service, cell_etag, etag from target_state LIMIT 5;
 {quote}
 Contains in the middle of results:
 {quote}
 606365955656309747054552016:ya(200,15) | ams1-0003.search.yandex.net | 18265 | null | 737a05e0-5532-11e4-bf0b-efa731c31cd0
 {quote}
 Query of a single row gives non-null cell_etag column:
 {quote}
 cqlsh:iss> select host, service, cell_etag, etag from target_state where cell = '606365955656309747054552016:ya(200,15)' and host = 'ams1-0003.search.yandex.net' and service = '18265';

  host                        | service | cell_etag                            | etag
 -----------------------------+---------+--------------------------------------+--------------------------------------
  ams1-0003.search.yandex.net |   18265 | bc635e60-5ec3-11e4-bdfd-9f65487c7454 | 737a05e0-5532-11e4-bf0b-efa731c31cd0

 (1 rows)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8213:
--

 Summary: Grant Permission fails if permission had been revoked 
previously
 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2


The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 

{code}
cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
cathy.execute("ALTER TABLE ks.cf ADD val int")

cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
self.assertUnauthorized("User cathy has no ALTER permission on table ks.cf or any of its parents",
                        cathy, "CREATE INDEX ON ks.cf(val)")

cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
cathy.execute("CREATE INDEX ON ks.cf(val)")
{code}

In this section of code, the user cathy is granted ALTER permission on 'ks.cf'; 
it is then revoked, then granted again. Monitoring system_auth.permissions 
during this sequence shows that the permission is added by the initial grant 
and revoked properly, but the table remains empty after the second grant.

When the cathy user attempts to create an index, the following exception is 
thrown:

{code}
Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER permission on table ks.cf or any of its parents"
{code}
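One plausible mechanism (an assumption, not a confirmed diagnosis) is a timestamp tie: if the second GRANT's write carries the same timestamp as the REVOKE's tombstone, last-write-wins reconciliation lets the tombstone win the tie, leaving the permissions row invisible. A toy sketch of that reconciliation rule:

```python
# Toy last-write-wins reconciliation for a single cell, where a value of
# None stands for a tombstone. On a timestamp tie, the delete wins --
# which would make a re-grant at the tombstone's timestamp disappear.

def reconcile(a, b):
    """Pick the winning (timestamp, value) cell of the two."""
    if a[0] != b[0]:
        return max(a, b, key=lambda cell: cell[0])
    return a if a[1] is None else b  # tie: tombstone beats live cell

grant   = (100, "ALTER")
revoke  = (100, None)    # tombstone written at the same timestamp
regrant = (100, "ALTER")

state = reconcile(grant, revoke)    # tombstone wins the tie
state = reconcile(state, regrant)   # ...and keeps winning
print(state)  # (100, None) -- the permission still looks revoked
```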



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188709#comment-14188709
 ] 

Philip Thompson commented on CASSANDRA-8213:


grant_revoke_cleanup_test is having the exact same issue, but with creating and 
dropping a user: after dropping a user, recreating the user shows no changes in 
the system_auth.users table.

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188711#comment-14188711
 ] 

Philip Thompson commented on CASSANDRA-8213:


We are only using one node, so this is not a replication problem.

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Alexander Sterligov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Sterligov updated CASSANDRA-8203:
---
Reproduced In: 2.1.1, 2.1.0  (was: 2.1.0)

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8213:
---
Reproduced In: 2.1.2  (was: 2.1.1)

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188717#comment-14188717
 ] 

Alexander Sterligov commented on CASSANDRA-8203:


On 2.1.1 there are even more null static columns.

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188652#comment-14188652
 ] 

Alexander Sterligov edited comment on CASSANDRA-8203 at 10/29/14 6:13 PM:
--

The following code reproduces it. It always generates exactly one null, at the 
same column.
{quote}
echo "drop keyspace if exists test;" | ./cqlsh
echo "CREATE KEYSPACE test WITH replication = \{'class': 'SimpleStrategy', 'replication_factor': 1};" | ./cqlsh
echo "CREATE TABLE test.test ( row ascii, col ascii, uuid_static timeuuid static, uuid timeuuid, PRIMARY KEY (row, col) );" | ./cqlsh
for i in `seq 1 10000`; do echo "UPDATE test.test SET uuid_static = now(), uuid = now() WHERE row = '$((i/1000))' AND col = '$((i%1000))';"; done | ./cqlsh
echo "SELECT * FROM test.test WHERE row = '3' AND col = '999';" | ./cqlsh
echo "SELECT * FROM test.test;" | ./cqlsh | grep null
{quote}



was (Author: sterligovak):
Following code reproduces it. It generates exactly one null always at the same 
column.
{quote}
echo "drop keyspace if exists test;" | ./cqlsh
echo "CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};" | ./cqlsh
echo "CREATE TABLE test.test ( row ascii, col ascii, uuid_static timeuuid static, uuid timeuuid, PRIMARY KEY (row, col) );" | ./cqlsh
for i in `seq 1 10000`; do echo "UPDATE test.test SET uuid_static = now(), uuid = now() WHERE row = '$((i/1000))' AND col = '$((i%1000))';"; done | ./cqlsh
echo "SELECT * FROM test.test WHERE row = '3' AND col = '999';" | ./cqlsh
echo "SELECT * FROM test.test;" | ./cqlsh | grep null
{quote}


 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188720#comment-14188720
 ] 

Michael Shuler commented on CASSANDRA-8203:
---

Thanks for the reproduction! I'm testing on the cassandra-2.1 branch HEAD 
(commit fee7137 currently)

{noformat}
mshuler@hana:~$ echo "SELECT * FROM test.test WHERE row = '3' AND col = '999';" | cqlsh

 row | col | uuid_static                          | uuid
-----+-----+--------------------------------------+--------------------------------------
   3 | 999 | 7da87cb0-5f94-11e4-ad04-5ddd8099f62f | 7da87cb1-5f94-11e4-ad04-5ddd8099f62f

(1 rows)
mshuler@hana:~$ echo "SELECT * FROM test.test;" | cqlsh | egrep 'row \| col|3 \| 999'
 row | col | uuid_static                          | uuid
   3 | 999 | null                                 | 7da87cb1-5f94-11e4-ad04-5ddd8099f62f
mshuler@hana:~$ echo "SELECT * FROM test.test;" | cqlsh | grep null | wc -l
9003
{noformat}

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8203) Unexpected null static column

2014-10-29 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8203:
--
Assignee: Sylvain Lebresne

 Unexpected null static column
 -

 Key: CASSANDRA-8203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8203
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov
Assignee: Sylvain Lebresne




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188737#comment-14188737
 ] 

Philip Thompson commented on CASSANDRA-8213:


git bisect is currently pointing to the following commit as the culprit:
{code}
commit 1d285eadadc14251da271cc03b4cf8a1f8f33516
Merge: c937657 748b01d
Author: Sylvain Lebresne sylv...@datastax.com
Date:   Wed Oct 29 10:40:51 2014 +0100

    Merge branch 'cassandra-2.0' into cassandra-2.1

    Conflicts:
        CHANGES.txt
        src/java/org/apache/cassandra/cql3/UpdateParameters.java
{code}

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188738#comment-14188738
 ] 

Philip Thompson commented on CASSANDRA-8213:


The test is not failing against 2.0-HEAD but it is failing against 2.1-HEAD and 
trunk HEAD.

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 2.1.2





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188745#comment-14188745
 ] 

Philip Thompson commented on CASSANDRA-8213:


I performed the bisect using test_auth.py:TestAuth.grant_revoke_cleanup_test

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2


 The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
 {code}
 cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
 cathy.execute("ALTER TABLE ks.cf ADD val int")
 cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
 self.assertUnauthorized("User cathy has no ALTER permission on table 
 ks.cf or any of its parents",
 cathy, "CREATE INDEX ON ks.cf(val)")
 cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
 cathy.execute("CREATE INDEX ON ks.cf(val)")
 {code}
 In this section of code, the user cathy is granted ALTER permissions on 
 'ks.cf', then they are revoked, then granted again. Monitoring 
 system_auth.permissions during this section of code shows that the permission 
 is added with the initial grant and revoked properly, but the table remains 
 empty after the second grant.
 When the cathy user attempts to create an index, the following exception is 
 thrown:
 {code}
 Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
 permission on table ks.cf or any of its parents"
 {code}





[jira] [Updated] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8213:
---
Assignee: Aleksey Yeschenko

 Grant Permission fails if permission had been revoked previously
 

 Key: CASSANDRA-8213
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2


 The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
 {code}
 cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
 cathy.execute("ALTER TABLE ks.cf ADD val int")
 cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
 self.assertUnauthorized("User cathy has no ALTER permission on table 
 ks.cf or any of its parents",
 cathy, "CREATE INDEX ON ks.cf(val)")
 cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
 cathy.execute("CREATE INDEX ON ks.cf(val)")
 {code}
 In this section of code, the user cathy is granted ALTER permissions on 
 'ks.cf', then they are revoked, then granted again. Monitoring 
 system_auth.permissions during this section of code shows that the permission 
 is added with the initial grant and revoked properly, but the table remains 
 empty after the second grant.
 When the cathy user attempts to create an index, the following exception is 
 thrown:
 {code}
 Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
 permission on table ks.cf or any of its parents"
 {code}





[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-10-29 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188766#comment-14188766
 ] 

Joshua McKenzie commented on CASSANDRA-5483:


Think that was just me not refreshing the tab I had open in Chrome w/this 
ticket in it - missed the update concerning local build problems.

Re: isDone: miscommunication in ticket.  Changing from bit-magic to named 
variables w/flow changed isDone into a self-commenting method and addressed my 
concerns with it.  I appreciate the javadoc @ the top to give just general 
overview as well.  The method is simple enough that switching to BlockingQueue 
isn't really necessary at this time imo.

Re: unfiltered traces: are these extraneous messages or just more verbosity 
surrounding the activity of the repair?  It does generate a lot of output but 
this is for tracing and I'd be comfortable with us filtering down as a future 
effort if we decide it's necessary.

Last nit: isDone should be renamed since it does more than just check if things 
are done.  We could either split out the isDone check from waitAndDouble() or 
just rename to waitWithDoubleTimeout() or something like that?
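A minimal sketch of the timeout-doubling idea discussed above (class, method, and constant names here are hypothetical illustrations, not the patch's actual API):

```java
// Minimal sketch of doubling a poll timeout with an upper bound, as discussed
// for repair-trace polling. Constant names and values are hypothetical.
public class BackoffPoller
{
    static final long INITIAL_TIMEOUT_MS = 100;
    static final long MAX_TIMEOUT_MS = 8000;

    // Double the wait each time no new trace events are seen, capped at the max.
    static long nextTimeout(long currentMs)
    {
        return Math.min(currentMs * 2, MAX_TIMEOUT_MS);
    }
}
```

Splitting the "done" check out of a method like this keeps each piece self-commenting, which is the renaming concern raised above.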

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Fix For: 3.0

 Attachments: 5483-full-trunk.txt, 
 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
  5483-v07-08-Fix-brace-style.patch, 
 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 
 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 
 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 
 5483-v08-14-Poll-system_traces.events.patch, 
 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch, 
 5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, 
 5483-v10-17-minor-bugfixes-and-changes.patch, 
 5483-v10-rebased-and-squashed-471f5cc.patch, 5483-v11-01-squashed.patch, 
 5483-v11-squashed-nits.patch, 5483-v12-02-cassandra-yaml-ttl-doc.patch, 
 5483-v13-608fb03-May-14-trace-formatting-changes.patch, 
 5483-v14-01-squashed.patch, 
 5483-v15-02-Hook-up-exponential-backoff-functionality.patch, 
 5483-v15-03-Exact-doubling-for-exponential-backoff.patch, 
 5483-v15-04-Re-add-old-StorageService-JMX-signatures.patch, 
 5483-v15-05-Move-command-column-to-system_traces.sessions.patch, 
 5483-v15.patch, 5483-v17-00.patch, 5483-v17-01.patch, 5483-v17.patch, 
 ccm-repair-test, cqlsh-left-justify-text-columns.patch, 
 prerepair-vs-postbuggedrepair.diff, test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results like query tracing 
 stores traces to system keyspace. With it, you don't have to lookup each log 
 file to see what was the status and how it performed the repair you invoked. 
 Instead, you can query the repair log with session ID to see the state and 
 stats of all nodes involved in that repair session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8188) don't block SocketThread for MessagingService

2014-10-29 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8188:
-
Reviewer: Brandon Williams  (was: Aleksey Yeschenko)

 don't block SocketThread for MessagingService
 -

 Key: CASSANDRA-8188
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8188
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: yangwei
Assignee: yangwei
 Attachments: 0001-don-t-block-SocketThread-for-MessagingService.patch


 We have two datacenters A and B.
 The node in A cannot handshake its version with nodes in B; logs in A are as follows:
 {noformat}
   INFO [HANDSHAKE-/B] 2014-10-24 04:29:49,075 OutboundTcpConnection.java 
 (line 395) Cannot handshake version with B
 TRACE [WRITE-/B] 2014-10-24 11:02:49,044 OutboundTcpConnection.java (line 
 368) unable to connect to /B
   java.net.ConnectException: Connection refused
 at sun.nio.ch.Net.connect0(Native Method)
 at sun.nio.ch.Net.connect(Net.java:364)
 at sun.nio.ch.Net.connect(Net.java:356)
 at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
 at java.nio.channels.SocketChannel.open(SocketChannel.java:184)
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:134)
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:119)
 at 
 org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:299)
 at 
 org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:150)
 {noformat}
 
 The jstack output of nodes in B shows it blocks in inputStream.readInt, 
 resulting in the SocketThread not accepting sockets any more; logs as follows:
 {noformat}
  java.lang.Thread.State: RUNNABLE
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
 at sun.nio.ch.IOUtil.read(IOUtil.java:197)
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
 - locked 0x0007963747e8 (a java.lang.Object)
 at 
 sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:203)
 - locked 0x000796374848 (a java.lang.Object)
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
 - locked 0x0007a5c7ca88 (a 
 sun.nio.ch.SocketAdaptor$SocketInputStream)
 at java.io.InputStream.read(InputStream.java:101)
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:81)
 - locked 0x0007a5c7ca88 (a 
 sun.nio.ch.SocketAdaptor$SocketInputStream)
 at java.io.DataInputStream.readInt(DataInputStream.java:387)
 at 
 org.apache.cassandra.net.MessagingService$SocketThread.run(MessagingService.java:879)
 {noformat}

 On nodes in B, tcpdump shows retransmission of SYN,ACK during the TCP 
 three-way handshake because the TCP implementation drops the final ACK when 
 the backlog queue is full.
 On nodes in B, ss -tl shows Recv-Q 51 Send-Q 50.
 
 On nodes in B, netstat -s shows “SYNs to LISTEN sockets dropped” and “times 
 the listen queue of a socket overflowed” are both increasing.
 This patch sets read timeout to 2 * 
 OutboundTcpConnection.WAIT_FOR_VERSION_MAX_TIME for the accepted socket.
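A minimal sketch of the proposed fix, assuming a hypothetical helper around the accepted socket; only the constant's name comes from the ticket, its value and the class are illustrative:

```java
// Sketch: bound the version-handshake read with a socket read timeout so a
// peer that never completes the handshake cannot block the accept loop.
// The constant name mirrors OutboundTcpConnection.WAIT_FOR_VERSION_MAX_TIME;
// its value here is assumed, and the helper itself is illustrative.
import java.io.DataInputStream;
import java.net.Socket;

class HandshakeReader
{
    static final int WAIT_FOR_VERSION_MAX_TIME = 10000; // ms, assumed value

    static int readVersion(Socket socket) throws Exception
    {
        // readInt() now fails with SocketTimeoutException instead of
        // blocking the SocketThread forever if the peer stays silent.
        socket.setSoTimeout(2 * WAIT_FOR_VERSION_MAX_TIME);
        DataInputStream in = new DataInputStream(socket.getInputStream());
        return in.readInt();
    }
}
```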



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8214:
--

 Summary: Select Count(*) on empty table does not return 0 in trunk
 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
{code}select count (*){code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) behaves on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Updated] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8214:
---
Description: 
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a {code} 
select count (*) {code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) behaves on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/

  was:
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a {code] 
select count (*) {code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) changes on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/


 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(*) behaves on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Updated] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8214:
---
Description: 
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a {code} 
select count (*) {code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) behaves on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/

  was:
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
{code]select count (*){code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) changes on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/

 Tester: Philip Thompson

 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(*) behaves on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Updated] (CASSANDRA-8206) Deleting columns breaks secondary index on clustering column

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8206:
---
  Component/s: Core
Since Version: 2.0.6
Fix Version/s: 2.0.12

 Deleting columns breaks secondary index on clustering column
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.0.12, 2.1.2


 Removing items from a set breaks index for field {{id}}:
 {noformat}
 cqlsh:cs CREATE TABLE buckets (
   ...   tenant int,
   ...   id int,
   ...   items set<text>,
   ...   PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 
 'bar'});
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 --------+----+----------------
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+----------------
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs UPDATE buckets SET items=items-{'foo'} WHERE tenant=1 AND id=1;
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 --------+----+---------
   1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs DROP INDEX buckets_ids;
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 --------+----+---------
   1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure; only deletes do. I also didn't 
 test whether other collections are affected.





[jira] [Commented] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-29 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188794#comment-14188794
 ] 

Tyler Hobbs commented on CASSANDRA-8147:


Git bisect shows that CASSANDRA-6782 is the cause for this.

 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
Priority: Minor
 Attachments: CASSANDRA-8147-V2.txt, CASSANDRA-8147.txt


 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
 createTable("CREATE TABLE %s (account text, id int, categories 
 map<text,text>, PRIMARY KEY (account, id))");
 createIndex("CREATE INDEX ON %s(keys(categories))");
 execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", 
 "test", 5, map("lmn", "foo"));
 assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND 
 categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", 
 "test", 5, "lmn", "foo"), row("test", 5, map("lmn", "foo")));  
 {code}





[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-29 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f1b7496
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f1b7496
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f1b7496

Branch: refs/heads/cassandra-2.1
Commit: 6f1b74963b6297881a83d1cbbfa81efde465f7f0
Parents: fee7137 f13ce55
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 14:01:45 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 14:01:45 2014 -0500

--

--




[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-10-29 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f1b7496
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f1b7496
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f1b7496

Branch: refs/heads/trunk
Commit: 6f1b74963b6297881a83d1cbbfa81efde465f7f0
Parents: fee7137 f13ce55
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 14:01:45 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 14:01:45 2014 -0500

--

--




[2/6] git commit: Update CQLSSTableWriter to allow parallel writing of SSTables on the same table within the same JVM

2014-10-29 Thread brandonwilliams
Update CQLSSTableWriter to allow parallel writing of SSTables on the same table 
within the same JVM

Patch by Carl Yeksigian, reviewed by Benjamin Lerer for CASSANDRA-7463


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f13ce558
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f13ce558
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f13ce558

Branch: refs/heads/cassandra-2.1
Commit: f13ce558cc410f959634a6f0d31fcf7bd69be85d
Parents: 748b01d
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 13:57:25 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 13:57:25 2014 -0500

--
 CHANGES.txt |  2 +
 .../io/sstable/AbstractSSTableSimpleWriter.java | 12 ++-
 .../cassandra/io/sstable/CQLSSTableWriter.java  | 38 +---
 .../io/sstable/CQLSSTableWriterTest.java| 93 
 4 files changed, 131 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d2cb003..9051b34 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,8 @@
  * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
  * Throw ConfigurationException when hsha is used with the default
rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)
+ * Allow concurrent writing of the same table in the same JVM using
+   CQLSSTableWriter (CASSANDRA-7463)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java 
b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
index 2c6f82a..af1c43c 100644
--- a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.HashSet;
 import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
@@ -43,6 +44,7 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 protected ColumnFamily columnFamily;
 protected ByteBuffer currentSuperColumn;
 protected final CounterId counterid = CounterId.generate();
+protected static AtomicInteger generation = new AtomicInteger(0);
 
 public AbstractSSTableSimpleWriter(File directory, CFMetaData metadata, 
IPartitioner partitioner)
 {
@@ -80,9 +82,15 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 return false;
 }
 });
-int maxGen = 0;
+int maxGen = generation.getAndIncrement();
 for (Descriptor desc : existing)
-maxGen = Math.max(maxGen, desc.generation);
+{
+while (desc.generation > maxGen)
+{
+maxGen = generation.getAndIncrement();
+}
+}
+
 return new Descriptor(directory, keyspace, columnFamily, maxGen + 1, 
true).filenameFor(Component.DATA);
 }
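The generation-selection logic in the hunk above can be sketched in isolation as follows (the class name is hypothetical, and `existingGenerations` stands in for the generations of the scanned Descriptors):

```java
// Isolated sketch of the patch's generation selection: a JVM-wide
// AtomicInteger guarantees concurrent writers get distinct generations,
// while the inner loop also advances past generations already used by
// sstables found on disk. `existingGenerations` stands in for the
// generations of the scanned Descriptors; the class name is hypothetical.
import java.util.concurrent.atomic.AtomicInteger;

class GenerationPicker
{
    static final AtomicInteger generation = new AtomicInteger(0);

    static int nextGeneration(int[] existingGenerations)
    {
        int maxGen = generation.getAndIncrement();
        for (int existing : existingGenerations)
        {
            while (existing > maxGen)
                maxGen = generation.getAndIncrement();
        }
        return maxGen + 1;
    }
}
```

Because every claim on a generation goes through getAndIncrement, two writers in the same JVM can never pick the same value, which is what allows parallel writes to the same table.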
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 49a1259..93d3dcf 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -28,6 +28,7 @@ import java.util.List;
 import java.util.Map;
 
 import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
 
 import org.apache.cassandra.cql3.statements.*;
 import org.apache.cassandra.cql3.*;
@@ -335,18 +336,31 @@ public class CQLSSTableWriter implements Closeable
 {
 try
 {
-this.schema = getStatement(schema, CreateTableStatement.class, 
CREATE TABLE).left.getCFMetaData().rebuild();
-
-// We need to register the keyspace/table metadata through 
Schema, otherwise we won't be able to properly
-// build the insert statement in using().
-KSMetaData ksm = KSMetaData.newKeyspace(this.schema.ksName,
-

[3/6] git commit: Update CQLSSTableWriter to allow parallel writing of SSTables on the same table within the same JVM

2014-10-29 Thread brandonwilliams
Update CQLSSTableWriter to allow parallel writing of SSTables on the same table 
within the same JVM

Patch by Carl Yeksigian, reviewed by Benjamin Lerer for CASSANDRA-7463


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f13ce558
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f13ce558
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f13ce558

Branch: refs/heads/trunk
Commit: f13ce558cc410f959634a6f0d31fcf7bd69be85d
Parents: 748b01d
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 13:57:25 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 13:57:25 2014 -0500

--
 CHANGES.txt |  2 +
 .../io/sstable/AbstractSSTableSimpleWriter.java | 12 ++-
 .../cassandra/io/sstable/CQLSSTableWriter.java  | 38 +---
 .../io/sstable/CQLSSTableWriterTest.java| 93 
 4 files changed, 131 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d2cb003..9051b34 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,8 @@
  * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
  * Throw ConfigurationException when hsha is used with the default
rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)
+ * Allow concurrent writing of the same table in the same JVM using
+   CQLSSTableWriter (CASSANDRA-7463)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java 
b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
index 2c6f82a..af1c43c 100644
--- a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.HashSet;
 import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
@@ -43,6 +44,7 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 protected ColumnFamily columnFamily;
 protected ByteBuffer currentSuperColumn;
 protected final CounterId counterid = CounterId.generate();
+protected static AtomicInteger generation = new AtomicInteger(0);
 
 public AbstractSSTableSimpleWriter(File directory, CFMetaData metadata, 
IPartitioner partitioner)
 {
@@ -80,9 +82,15 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 return false;
 }
 });
-int maxGen = 0;
+int maxGen = generation.getAndIncrement();
 for (Descriptor desc : existing)
-maxGen = Math.max(maxGen, desc.generation);
+{
+while (desc.generation > maxGen)
+{
+maxGen = generation.getAndIncrement();
+}
+}
+
 return new Descriptor(directory, keyspace, columnFamily, maxGen + 1, 
true).filenameFor(Component.DATA);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 49a1259..93d3dcf 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -28,6 +28,7 @@ import java.util.List;
 import java.util.Map;
 
 import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
 
 import org.apache.cassandra.cql3.statements.*;
 import org.apache.cassandra.cql3.*;
@@ -335,18 +336,31 @@ public class CQLSSTableWriter implements Closeable
 {
 try
 {
-this.schema = getStatement(schema, CreateTableStatement.class, 
CREATE TABLE).left.getCFMetaData().rebuild();
-
-// We need to register the keyspace/table metadata through 
Schema, otherwise we won't be able to properly
-// build the insert statement in using().
-KSMetaData ksm = KSMetaData.newKeyspace(this.schema.ksName,
-

[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-29 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/999a448d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/999a448d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/999a448d

Branch: refs/heads/trunk
Commit: 999a448d671381978cefe6ff27370a7ae5050374
Parents: 242d54b 6f1b749
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 14:01:57 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 14:01:57 2014 -0500

--

--




[1/6] git commit: Update CQLSSTableWriter to allow parallel writing of SSTables on the same table within the same JVM

2014-10-29 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 748b01d1d - f13ce558c
  refs/heads/cassandra-2.1 fee713775 - 6f1b74963
  refs/heads/trunk 242d54b49 - 999a448d6


Update CQLSSTableWriter to allow parallel writing of SSTables on the same table 
within the same JVM

Patch by Carl Yeksigian, reviewed by Benjamin Lerer for CASSANDRA-7463


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f13ce558
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f13ce558
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f13ce558

Branch: refs/heads/cassandra-2.0
Commit: f13ce558cc410f959634a6f0d31fcf7bd69be85d
Parents: 748b01d
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 29 13:57:25 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 29 13:57:25 2014 -0500

--
 CHANGES.txt |  2 +
 .../io/sstable/AbstractSSTableSimpleWriter.java | 12 ++-
 .../cassandra/io/sstable/CQLSSTableWriter.java  | 38 +---
 .../io/sstable/CQLSSTableWriterTest.java| 93 
 4 files changed, 131 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d2cb003..9051b34 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,8 @@
  * Pig: Remove errant LIMIT clause in CqlNativeStorage (CASSANDRA-8166)
  * Throw ConfigurationException when hsha is used with the default
rpc_max_threads setting of 'unlimited' (CASSANDRA-8116)
+ * Allow concurrent writing of the same table in the same JVM using
+   CQLSSTableWriter (CASSANDRA-7463)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java 
b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
index 2c6f82a..af1c43c 100644
--- a/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/AbstractSSTableSimpleWriter.java
@@ -24,6 +24,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.HashSet;
 import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
@@ -43,6 +44,7 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 protected ColumnFamily columnFamily;
 protected ByteBuffer currentSuperColumn;
 protected final CounterId counterid = CounterId.generate();
+protected static AtomicInteger generation = new AtomicInteger(0);
 
 public AbstractSSTableSimpleWriter(File directory, CFMetaData metadata, 
IPartitioner partitioner)
 {
@@ -80,9 +82,15 @@ public abstract class AbstractSSTableSimpleWriter implements 
Closeable
 return false;
 }
 });
-int maxGen = 0;
+int maxGen = generation.getAndIncrement();
 for (Descriptor desc : existing)
-maxGen = Math.max(maxGen, desc.generation);
+{
+while (desc.generation > maxGen)
+{
+maxGen = generation.getAndIncrement();
+}
+}
+
 return new Descriptor(directory, keyspace, columnFamily, maxGen + 1, 
true).filenameFor(Component.DATA);
 }
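The generation-allocation scheme in the hunk above can be sketched in isolation. The class and method names below are illustrative only (not the Cassandra source); the on-disk descriptor scan is stood in for by an int array:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Standalone sketch of the patched generation allocation: a shared
// AtomicInteger hands out candidate generations, and each caller advances
// it past every generation already present on disk, so two concurrent
// writers in the same JVM can never claim the same generation number.
public class GenerationAllocator {
    private static final AtomicInteger generation = new AtomicInteger(0);

    // existingGenerations stands in for the generations of the SSTable
    // descriptors scanned from the target directory.
    public static int nextGeneration(int[] existingGenerations) {
        int maxGen = generation.getAndIncrement();
        for (int existing : existingGenerations) {
            // Consume counter values until we pass this on-disk generation.
            while (existing > maxGen) {
                maxGen = generation.getAndIncrement();
            }
        }
        return maxGen + 1; // same "maxGen + 1" step as the Descriptor above
    }

    public static void main(String[] args) {
        // Two successive callers that see the same on-disk generations now
        // get distinct results, unlike the old Math.max-only computation.
        System.out.println(nextGeneration(new int[]{1, 2, 5}));
        System.out.println(nextGeneration(new int[]{1, 2, 5}));
    }
}
```

The old code derived the next generation purely from the directory listing, so two writers scanning at the same moment could both compute the same `maxGen + 1`; routing every allocation through the JVM-wide counter removes that race.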
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f13ce558/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 49a1259..93d3dcf 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -28,6 +28,7 @@ import java.util.List;
 import java.util.Map;
 
 import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
 
 import org.apache.cassandra.cql3.statements.*;
 import org.apache.cassandra.cql3.*;
@@ -335,18 +336,31 @@ public class CQLSSTableWriter implements Closeable
 {
 try
 {
-this.schema = getStatement(schema, CreateTableStatement.class, "CREATE TABLE").left.getCFMetaData().rebuild();
-
-// We need to register the keyspace/table metadata through 
Schema, otherwise we won't be able to properly
-// build the insert 

[jira] [Commented] (CASSANDRA-7463) Update CQLSSTableWriter to allow parallel writing of SSTables on the same table within the same JVM

2014-10-29 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188797#comment-14188797
 ] 

Brandon Williams commented on CASSANDRA-7463:
-

Committed to 2.0, but lots of conflicts when merging, can you post a 2.1 patch 
too?

 Update CQLSSTableWriter to allow parallel writing of SSTables on the same 
 table within the same JVM
 ---

 Key: CASSANDRA-7463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7463
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Assignee: Carl Yeksigian
 Fix For: 2.0.12

 Attachments: 7463-2.0-v3.txt, 7463-v2.txt, 7463.patch


 Currently it is not possible to programmatically write multiple SSTables for 
 the same table in parallel using the CQLSSTableWriter. This is quite a 
 limitation and the workaround of attempting to do this in a separate JVM is 
 not a great solution.
 See: 
 http://stackoverflow.com/questions/24396902/using-cqlsstablewriter-concurrently



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8206) Deleting columns breaks secondary index on clustering column

2014-10-29 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188800#comment-14188800
 ] 

Tyler Hobbs commented on CASSANDRA-8206:


Git bisect shows that CASSANDRA-6782 is the cause for this.

 Deleting columns breaks secondary index on clustering column
 

 Key: CASSANDRA-8206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tuukka Mustonen
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.0.12, 2.1.2


 Removing items from a set breaks index for field {{id}}:
 {noformat}
 cqlsh:cs CREATE TABLE buckets (
   ...   tenant int,
   ...   id int,
   ...   items set<text>,
   ...   PRIMARY KEY (tenant, id)
   ... );
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs INSERT INTO buckets (tenant, id, items) VALUES (1, 1, {'foo', 
 'bar'});
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 ++
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 ++
   1 |  1 | {'bar', 'foo'}
 (1 rows)
 cqlsh:cs UPDATE buckets SET items=items-{'foo'} WHERE tenant=1 AND id=1;
 cqlsh:cs SELECT * FROM buckets;
  tenant | id | items
 ++-
   1 |  1 | {'bar'}
 (1 rows)
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
 (0 rows)
 {noformat}
 Re-building the index fixes the issue:
 {noformat}
 cqlsh:cs DROP INDEX buckets_ids;
 cqlsh:cs CREATE INDEX buckets_ids ON buckets(id);
 cqlsh:cs SELECT * FROM buckets WHERE id = 1;
  tenant | id | items
 ++-
   1 |  1 | {'bar'}
 (1 rows)
 {noformat}
 Adding items does not cause a similar failure, only deletes. Also didn't test 
 whether other collections are affected(?)





[jira] [Updated] (CASSANDRA-8096) Make cache serializers pluggable

2014-10-29 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-8096:
---
Attachment: CASSANDRA-8096-v3.patch

attached v3 patch just uses a stream factory which can be switched out at 
runtime.
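The "switched out at runtime" pattern the v3 patch describes can be sketched generically. Everything below (the property key, interface, and class names) is illustrative, not Cassandra's actual API:

```java
import java.nio.charset.StandardCharsets;

// Generic sketch of a serializer factory resolved from a system property,
// so the implementation can be swapped at runtime without code changes.
interface CacheSerializer {
    byte[] serialize(String value);
}

class DefaultSerializer implements CacheSerializer {
    public byte[] serialize(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }
}

public class SerializerFactory {
    // Hypothetical property key, chosen for this sketch only.
    static final String PROPERTY = "cache.serializer.class";

    public static CacheSerializer create() {
        // Fall back to the default implementation when no override is set.
        String impl = System.getProperty(PROPERTY, DefaultSerializer.class.getName());
        try {
            // Instantiate whatever implementation the property names.
            return (CacheSerializer) Class.forName(impl)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot load cache serializer: " + impl, e);
        }
    }

    public static void main(String[] args) {
        CacheSerializer s = create();
        System.out.println(new String(s.serialize("ok"), StandardCharsets.UTF_8));
    }
}
```

Operators would select an alternative serializer with `-Dcache.serializer.class=com.example.MySerializer` on the JVM command line; the factory never needs to know the concrete class at compile time.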

 Make cache serializers pluggable
 

 Key: CASSANDRA-8096
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8096
 Project: Cassandra
  Issue Type: Improvement
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Minor
 Fix For: 2.1.2

 Attachments: CASSANDRA-8096-v2.patch, CASSANDRA-8096-v3.patch, 
 CASSANDRA-8096.patch


 Make cache serializers configurable via system properties.





[jira] [Created] (CASSANDRA-8215) Empty IN Clause still returns data

2014-10-29 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8215:
--

 Summary: Empty IN Clause still returns data
 Key: CASSANDRA-8215
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8215
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson


The dtest cql_tests.py:TestCQL.empty_in_test is failing on trunk HEAD but not 
on 2.1-HEAD.

The test uses the following table: {code} CREATE TABLE test (k1 int, k2 int, v 
int, PRIMARY KEY (k1, k2)) {code} then performs a number of inserts.

The test then asserts that {code} SELECT v FROM test WHERE k1 = 0 AND k2 IN () 
{code} returns no data; however, it is returning every row where k1 = 0. 





[jira] [Updated] (CASSANDRA-8215) Empty IN Clause still returns data

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8215:
---
Assignee: Tyler Hobbs

 Empty IN Clause still returns data
 --

 Key: CASSANDRA-8215
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8215
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 The dtest cql_tests.py:TestCQL.empty_in_test is failing on trunk HEAD but not 
 on 2.1-HEAD.
 The test uses the following table: {code} CREATE TABLE test (k1 int, k2 int, 
 v int, PRIMARY KEY (k1, k2)) {code} then performs a number of inserts.
 The test then asserts that {code} SELECT v FROM test WHERE k1 = 0 AND k2 IN 
 () {code} returns no data; however, it is returning every row where k1 = 0. 





[jira] [Updated] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8214:
---
Since Version: 3.0

 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(*) changes on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Updated] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8214:
---
Assignee: Benjamin Lerer  (was: Tyler Hobbs)

 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(*) changes on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Updated] (CASSANDRA-8215) Empty IN Clause still returns data

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8215:
---
Fix Version/s: 3.0

 Empty IN Clause still returns data
 --

 Key: CASSANDRA-8215
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8215
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 The dtest cql_tests.py:TestCQL.empty_in_test is failing on trunk HEAD but not 
 on 2.1-HEAD.
 The test uses the following table: {code} CREATE TABLE test (k1 int, k2 int, 
 v int, PRIMARY KEY (k1, k2)) {code} then performs a number of inserts.
 The test then asserts that {code} SELECT v FROM test WHERE k1 = 0 AND k2 IN 
 () {code} returns no data; however, it is returning every row where k1 = 0. 





[jira] [Updated] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8214:
---
Description: 
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a {code} 
select count (*) {code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(\*) changes on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/

  was:
The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a {code} 
select count (*) {code} on an empty table. It is expecting to have [0] 
returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
trunk HEAD deviating.

With cqlsh I can see a change in how select count(*) changes on an empty table 
as well:
http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/


 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(\*) changes on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/





[jira] [Created] (CASSANDRA-8216) Select Count with Limit returns wrong value

2014-10-29 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8216:
--

 Summary: Select Count with Limit returns wrong value
 Key: CASSANDRA-8216
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
HEAD but not 2.1-HEAD.

The query {code} select count(*) from test where field3 = false limit 1; {code} 
is returning 2, where obviously it should only return 1 because of the limit. 
This may end up having the same root cause as #8214; I will be bisecting them 
both soon.





[jira] [Commented] (CASSANDRA-8216) Select Count with Limit returns wrong value

2014-10-29 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188844#comment-14188844
 ] 

Philip Thompson commented on CASSANDRA-8216:


{code} 0cad81aeb9ddaf5dad8b2ab9c6ff6955402c9310 is the first bad commit
commit 0cad81aeb9ddaf5dad8b2ab9c6ff6955402c9310
Author: Benjamin Lerer benjamin.le...@datastax.com
Date:   Tue Oct 7 12:18:52 2014 +0200

Add support for aggregation functions

patch by blerer; reviewed by slebresne for CASSANDRA-4914

:100644 100644 d10747c9f0c9db471ab2fa06350739f584941da8 
132396ec68ddb17f954e11868ed6d25f41ce3abb M  CHANGES.txt
:04 04 001fd67ee56589270d57bc74e45beba29f02959a 
0f5ed97b546ba4d0d26b2050667e3b70aee619c1 M  src
:04 04 54f62d2c82dea2ae2f50faaf3917ea306e245f92 
1868af191ae31a38beae7d06a47af0377c810439 M  test
{code} is what I get from git bisect

 Select Count with Limit returns wrong value
 ---

 Key: CASSANDRA-8216
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
 HEAD but not 2.1-HEAD.
 The query {code} select count(*) from test where field3 = false limit 1; 
 {code} is returning 2, where obviously it should only return 1 because of the 
 limit. This may end up having the same root cause as #8214; I will be 
 bisecting them both soon.





[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-10-29 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188858#comment-14188858
 ] 

Joshua McKenzie commented on CASSANDRA-8192:


The portion of your system.log I'm looking for is specifically right after 
initialization where the JVM params are listed - something similar to:
{noformat}
INFO  [main] 2014-10-29 13:47:13,766 CassandraDaemon.java:102 - Hostname: 
WIN-UGJOT801054
INFO  [main] 2014-10-29 13:47:13,782 YamlConfigurationLoader.java:80 - Loading 
settings from file:/C:/Users/jmckenzie/Desktop/cassandra/conf/cassandra.yaml
INFO  [main] 2014-10-29 13:47:13,860 YamlConfigurationLoader.java:123 - Node 
configuration:[authenticator=AllowAllAuthenticator; 
authorizer=AllowAllAuthorizer; auto_snapshot=true; 
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; 
cas_contention_timeout_in_ms=1000; client_encryption_options=REDACTED; 
cluster_name=Test Cluster; column_index_size_in_kb=64; 
commit_failure_policy=stop; commitlog_segment_size_in_mb=32; 
commitlog_sync=periodic; commitlog_sync_period_in_ms=1; 
compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; 
concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; 
counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
cross_node_timeout=false; disk_failure_policy=stop; 
dynamic_snitch_badness_threshold=0.1; 
dynamic_snitch_reset_interval_in_ms=60; 
dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=SimpleSnitch; 
hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
incremental_backups=false; index_summary_capacity_in_mb=null; 
index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; 
internode_compression=all; key_cache_save_period=14400; 
key_cache_size_in_mb=null; listen_address=localhost; 
max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
memtable_allocation_type=heap_buffers; native_transport_port=9042; 
num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; 
permissions_validity_in_ms=2000; range_request_timeout_in_ms=1; 
read_request_timeout_in_ms=5000; 
request_scheduler=org.apache.cassandra.scheduler.NoScheduler; 
request_timeout_in_ms=1; row_cache_save_period=0; row_cache_size_in_mb=0; 
rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; 
seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, 
parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=REDACTED; 
snapshot_before_compaction=false; ssl_storage_port=7001; 
sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; 
start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; 
tombstone_failure_threshold=10; tombstone_warn_threshold=1000; 
trickle_fsync=false; trickle_fsync_interval_in_kb=10240; 
truncate_request_timeout_in_ms=6; write_request_timeout_in_ms=2000]
INFO  [main] 2014-10-29 13:47:14,016 DatabaseDescriptor.java:198 - 
DiskAccessMode 'auto' determined to be standard, indexAccessMode is standard
INFO  [main] 2014-10-29 13:47:14,031 DatabaseDescriptor.java:286 - Global 
memtable on-heap threshold is enabled at 249MB
INFO  [main] 2014-10-29 13:47:14,031 DatabaseDescriptor.java:290 - Global 
memtable off-heap threshold is enabled at 249MB
INFO  [main] 2014-10-29 13:47:14,328 YamlConfigurationLoader.java:80 - Loading 
settings from file:/C:/Users/jmckenzie/Desktop/cassandra/conf/cassandra.yaml
INFO  [main] 2014-10-29 13:47:14,343 YamlConfigurationLoader.java:123 - Node 
configuration:[authenticator=AllowAllAuthenticator; 
authorizer=AllowAllAuthorizer; auto_snapshot=true; 
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; 
cas_contention_timeout_in_ms=1000; client_encryption_options=REDACTED; 
cluster_name=Test Cluster; column_index_size_in_kb=64; 
commit_failure_policy=stop; commitlog_segment_size_in_mb=32; 
commitlog_sync=periodic; commitlog_sync_period_in_ms=1; 
compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; 
concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; 
counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
cross_node_timeout=false; disk_failure_policy=stop; 
dynamic_snitch_badness_threshold=0.1; 
dynamic_snitch_reset_interval_in_ms=60; 
dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=SimpleSnitch; 
hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
incremental_backups=false; index_summary_capacity_in_mb=null; 
index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; 
internode_compression=all; key_cache_save_period=14400; 
key_cache_size_in_mb=null; listen_address=localhost; 
max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
memtable_allocation_type=heap_buffers; native_transport_port=9042; 
num_tokens=256; 

[jira] [Resolved] (CASSANDRA-8214) Select Count(*) on empty table does not return 0 in trunk

2014-10-29 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8214.

Resolution: Fixed

 Select Count(*) on empty table does not return 0 in trunk
 -

 Key: CASSANDRA-8214
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8214
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The test cql_tests.py:TestCQL.bug_6612_test is failing when performing a 
 {code} select count (*) {code} on an empty table. It is expecting to have [0] 
 returned, but instead [] is returned. 2.1.1 shows the expected behavior, with 
 trunk HEAD deviating.
 With cqlsh I can see a change in how select count(\*) changes on an empty 
 table as well:
 http://aep.appspot.com/display/xnJyEP7iOBMY6YvFNQdk8vofvsE/




