[jira] [Commented] (CASSANDRA-3907) Support compression using BulkWriter

2012-02-15 Thread Erik Forsberg (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208267#comment-13208267
 ] 

Erik Forsberg commented on CASSANDRA-3907:
--

So, with this patch, if I enable compression with BulkWriter, sstables will be 
compressed on the Hadoop side and streamed in compressed form, and the Cassandra 
daemon does not have to compress them before writing them to disk?

Or am I misunderstanding how this works?
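For illustration only, setting this up from the Hadoop job side might look roughly 
like the sketch below; the property name and helper class here are hypothetical 
illustrations, not the option names added by the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;

public class BulkLoadJobSetup
{
    // Hypothetical property name; the actual option introduced by
    // 0001-Add-compression-support-to-BulkWriter.patch may differ.
    public static void configureCompression(Configuration conf)
    {
        // Build compressed sstables on the Hadoop side so they are streamed
        // and written as-is, without the Cassandra daemon re-compressing them.
        conf.set("cassandra.output.compression.class",
                 "org.apache.cassandra.io.compress.SnappyCompressor");
    }
}
{code}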

 Support compression using BulkWriter
 

 Key: CASSANDRA-3907
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3907
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Chris Goffinet
Assignee: Chris Goffinet
 Fix For: 1.1.0

 Attachments: 0001-Add-compression-support-to-BulkWriter.patch


 Currently there is no way to enable compression using BulkWriter. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3721) Staggering repair

2012-02-15 Thread Sylvain Lebresne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208268#comment-13208268
 ] 

Sylvain Lebresne commented on CASSANDRA-3721:
-

I think there has been a misunderstanding. I attached 3721.patch to address my 
concerns with the initial 0001-staggering-repair-with-snapshot.patch. I'm 
personally good with 3721.patch (it may be that my poor wording suggested 
otherwise, sorry if that's the case), except that it needs review of course.

Vijay did commit 3721.patch, so I think that was right (though maybe, Vijay, you 
could have made it clearer in your comment that you did review the last 
version, +1'd it and thus committed it). 

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-add-snapshot-command.patch, 
 0001-staggering-repair-with-snapshot.patch, 3721.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to be hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold references to 
 the SSTables (the point at which repair was initiated)
 2) Send the validation request to one node at a time (once completed, the 
 references will be released).
 3) Hold the reference to the tree on the requesting node and, once everything 
 is complete, start the diff.
 We can also serialize the streaming part so that no more than 1 node is 
 involved in the streaming.
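 For illustration, a minimal sketch of that sequencing (the method names below 
 are placeholders, not the API added by the attached patches):

{code}
import java.net.InetAddress;
import java.util.List;

public class StaggeredRepairSketch
{
    // Illustrative placeholder API: the real implementation lives in
    // AntiEntropyService; these method names are assumptions.
    void repair(String keyspace, String cf, List<InetAddress> endpoints)
    {
        // 1) snapshot on every endpoint so all nodes hold references to the
        //    same set of sstables (the point at which repair was initiated)
        for (InetAddress ep : endpoints)
            sendSnapshotRequest(ep, keyspace, cf);

        // 2) validate one node at a time; each node releases its references
        //    once its validation completes
        for (InetAddress ep : endpoints)
            waitFor(sendValidationRequest(ep, keyspace, cf));

        // 3) once all trees are in, diff them and stream only the differences,
        //    serializing streaming so at most one node streams at a time
        diffAndStreamSequentially(keyspace, cf, endpoints);
    }

    void sendSnapshotRequest(InetAddress ep, String ks, String cf) {}
    Object sendValidationRequest(InetAddress ep, String ks, String cf) { return null; }
    void waitFor(Object validationFuture) {}
    void diffAndStreamSequentially(String ks, String cf, List<InetAddress> eps) {}
}
{code}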

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3862) RowCache misses Updates

2012-02-15 Thread Sylvain Lebresne (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3862:


Attachment: 3862-v4.patch

Actually, the handling of the copying cache by the preceding patches is wrong. 
When a put arrives and there is a sentinel, the patch does not add the put to 
the sentinel correctly. But thinking about it, for the copying cache we should 
avoid having writes check the current value in the cache, because that has a 
non-negligible performance impact. What we should do instead is let invalidate 
actually invalidate sentinels. The only problem we're faced with if we do that 
is that when a read-for-caching returns, it must make sure its own sentinel 
hasn't been invalidated. In particular, it must be careful of the case where 
the sentinel has been invalidated and another read has set another sentinel.

Anyway, attaching a v4 (that includes the comment cleanups) that chooses that 
strategy instead (and thus is hopefully not buggy even in the copying cache 
case). Note that this means reads must be able to identify sentinels uniquely 
(not based on their content), so the code assigns a unique ID to each sentinel 
and uses that for comparison.
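A minimal sketch of that sentinel-identity idea, assuming a simple concurrent 
map; the class and method names are illustrative, not the ones used in 
3862-v4.patch:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class SentinelCacheSketch<K, V>
{
    // Each sentinel gets a unique id so a returning read can tell whether the
    // entry it finds is *its own* sentinel, or a new one set by another read
    // after invalidate() removed the first.
    static final class Sentinel
    {
        private static final AtomicLong ids = new AtomicLong();
        final long id = ids.incrementAndGet();
    }

    private final ConcurrentMap<K, Object> map = new ConcurrentHashMap<K, Object>();

    // called by a read-for-caching before it starts reading
    public Sentinel markReading(K key)
    {
        Sentinel s = new Sentinel();
        return map.putIfAbsent(key, s) == null ? s : null;
    }

    // writes never inspect the cached value; they simply drop whatever is
    // there, sentinel included
    public void invalidate(K key)
    {
        map.remove(key);
    }

    // only publish the read result if our own sentinel is still in place
    public void completeRead(K key, Sentinel ours, V value)
    {
        if (ours != null)
            map.replace(key, ours, value);
    }
}
{code}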


 RowCache misses Updates
 ---

 Key: CASSANDRA-3862
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3862
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6
Reporter: Daniel Doubleday
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 3862-cleanup.txt, 3862-v2.patch, 3862-v4.patch, 
 3862.patch, 3862_v3.patch, include_memtables_in_rowcache_read.patch


 While performing stress tests to find any race problems for CASSANDRA-2864, I 
 guess I (re-)found one for the standard on-heap row cache.
 During my stress test I have lots of threads running, with some of them only 
 reading and others writing and re-reading the value.
 This seems to happen:
 - Reader tries to read row A for the first time, doing a getTopLevelColumns
 - Row A, which is not in the cache yet, is updated by Writer. The row is not 
 eagerly read during the write (because we want fast writes), so the writer 
 cannot perform a cache update
 - Reader puts the row in the cache, which is now missing the update
 I already asked about this some time ago on the mailing list but unfortunately 
 didn't dig further after I got no answer, since I assumed that I had just 
 missed something. In a way I still do, but I haven't found any locking 
 mechanism that makes sure this cannot happen.
 The problem can be reproduced with every run of my stress test. When I 
 restart the server the expected column is there; it's just missing from the 
 cache.
 To test, I have created a patch that merges memtables with the row cache. With 
 the patch the problem is gone.
 I can also reproduce it in 0.8. I haven't checked 1.1, but I haven't found any 
 relevant change there either, so I assume the same applies there.
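 The workaround patch mentioned above boils down to merging memtable contents 
 into the row before caching it; a rough sketch of that idea, with illustrative 
 names only (nothing here is from include_memtables_in_rowcache_read.patch):

{code}
public class CachePopulateSketch
{
    // Instead of caching only what the sstable read returned, merge in any
    // columns the memtables received while the read was in flight, so a
    // concurrent write cannot be missing from the cached copy.
    Object readAndCache(Object key)
    {
        Object fromSSTables = readFromSSTables(key);
        Object merged = mergeWithMemtables(key, fromSSTables);
        cachePut(key, merged);
        return merged;
    }

    Object readFromSSTables(Object key) { return null; }
    Object mergeWithMemtables(Object key, Object partial) { return partial; }
    void cachePut(Object key, Object value) {}
}
{code}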

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




git commit: Staggering repair patch by Vijay and Sylvain Lebresne; reviewed by Sylvain Lebresne for CASSANDRA-3721

2012-02-15 Thread vijay
Updated Branches:
  refs/heads/trunk 6642d0f95 -> ddee43e84


Staggering repair
patch by Vijay and Sylvain Lebresne; reviewed by Sylvain Lebresne for 
CASSANDRA-3721


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ddee43e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ddee43e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ddee43e8

Branch: refs/heads/trunk
Commit: ddee43e8463777b0419fac2423a59511202c8fab
Parents: 6642d0f
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Wed Feb 15 01:48:32 2012 -0800
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Wed Feb 15 01:48:32 2012 -0800

--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |8 +
 src/java/org/apache/cassandra/db/Directories.java  |   15 +
 .../cassandra/db/compaction/CompactionManager.java |   37 ++-
 .../cassandra/service/AntiEntropyService.java  |  223 --
 .../apache/cassandra/service/StorageService.java   |   13 +-
 .../cassandra/service/StorageServiceMBean.java |4 +-
 src/java/org/apache/cassandra/tools/NodeCmd.java   |7 +-
 src/java/org/apache/cassandra/tools/NodeProbe.java |8 +-
 .../apache/cassandra/io/CompactSerializerTest.java |1 +
 9 files changed, 257 insertions(+), 59 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddee43e8/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index e4e3204..218fadf 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1423,6 +1423,14 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 }
 
+public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
+{
+    List<SSTableReader> readers = new ArrayList<SSTableReader>();
+    for (Map.Entry<Descriptor, Set<Component>> entries : directories.sstableLister().snapshots(tag).list().entrySet())
+        readers.add(SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner));
+    return readers;
+}
+
 /**
  * Take a snap shot of this columnfamily store.
  *

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddee43e8/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java 
b/src/java/org/apache/cassandra/db/Directories.java
index 2afefd2..7c51830 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -185,6 +185,7 @@ public class Directories
 private int nbFiles;
 private final Map<Descriptor, Set<Component>> components = new HashMap<Descriptor, Set<Component>>();
 private boolean filtered;
+private String snapshotName;
 
 public SSTableLister skipCompacted(boolean b)
 {
@@ -219,6 +220,14 @@ public class Directories
 return this;
 }
 
+public SSTableLister snapshots(String sn)
+{
+    if (filtered)
+        throw new IllegalStateException("list() has already been called");
+    snapshotName = sn;
+    return this;
+}
+
 public Map<Descriptor, Set<Component>> list()
 {
 filter();
@@ -246,6 +255,12 @@ public class Directories
 
 for (File location : sstableDirectories)
 {
+if (snapshotName != null)
+{
+    new File(location, join(SNAPSHOT_SUBDIR, snapshotName)).listFiles(getFilter());
+    continue;
+}
+
 if (!onlyBackups)
 location.listFiles(getFilter());
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddee43e8/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index f30510b..ed699f2 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -807,23 +807,33 @@ public class CompactionManager implements 
CompactionManagerMBean
 if (!cfs.isValid())
 return;
 
-// flush first so everyone is validating data that is as similar as possible
-try
-{
-

[jira] [Resolved] (CASSANDRA-3721) Staggering repair

2012-02-15 Thread Vijay (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay resolved CASSANDRA-3721.
--

Resolution: Fixed

Sorry for the confusion, +1 from me and I committed it again... I did test it 
and the unit tests pass. Thanks!

 Staggering repair
 -

 Key: CASSANDRA-3721
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3721
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.1.1

 Attachments: 0001-add-snapshot-command.patch, 
 0001-staggering-repair-with-snapshot.patch, 3721.patch


 Currently repair runs on all the nodes at once, causing the range of data 
 to be hot (higher latency on reads).
 Sequence:
 1) Send a repair request to all of the nodes so we can hold references to 
 the SSTables (the point at which repair was initiated)
 2) Send the validation request to one node at a time (once completed, the 
 references will be released).
 3) Hold the reference to the tree on the requesting node and, once everything 
 is complete, start the diff.
 We can also serialize the streaming part so that no more than 1 node is 
 involved in the streaming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3912) support incremental repair controlled by external agent

2012-02-15 Thread Sylvain Lebresne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208387#comment-13208387
 ] 

Sylvain Lebresne commented on CASSANDRA-3912:
-

I don't think that on the server side we should do more than expose the 
forceTableRepair method taking the Range as an argument (and I do think there 
is value in doing that). The user-friendly transformation of steps to ranges 
could probably be left to external scripts imo, but at least it should be 
moved to nodetool itself if we want it in.

As a side note, I'll remark that every repair of a range triggers a flush, so 
one should probably be careful not to repair incrementally on too small a range.
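
For illustration, the step-to-range transformation such a script (or nodetool) 
would perform for RandomPartitioner could look roughly like this; it is a sketch 
assuming a non-wrapping token interval, not code from the attached patch:

{code}
import java.math.BigInteger;

public class IncrementalRepairSteps
{
    // Split the token interval (left, right] into n equal slices and return
    // the i-th one (0-based). Assumes a non-wrapping RandomPartitioner range.
    public static BigInteger[] slice(BigInteger left, BigInteger right, int i, int n)
    {
        BigInteger width = right.subtract(left);
        BigInteger start = left.add(width.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(n)));
        BigInteger end = left.add(width.multiply(BigInteger.valueOf(i + 1)).divide(BigInteger.valueOf(n)));
        return new BigInteger[]{ start, end };
    }
}
{code}

The external script would then invoke forceTableRepair (or the equivalent 
nodetool command) once per slice, in order.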

 support incremental repair controlled by external agent
 ---

 Key: CASSANDRA-3912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3912
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
 Attachments: CASSANDRA-3912-trunk-v1.txt


 As a poor man's precursor to CASSANDRA-2699, exposing the ability to repair 
 small parts of a range is extremely useful because it allows you (with 
 external scripting logic) to slowly repair a node's content over time. Other 
 than avoiding the bulkiness of complete repairs, it means that you can safely 
 do repairs even if you absolutely cannot afford e.g. disk space spikes (see 
 CASSANDRA-2699 for what the issues are).
 Attaching a patch that exposes a 'repairincremental' command to nodetool, 
 where you specify a step and the number of total steps. Incrementally 
 performing a repair in 100 steps, for example, would be done by:
 {code}
 nodetool repairincremental 0 100
 nodetool repairincremental 1 100
 ...
 nodetool repairincremental 99 100
 {code}
 An external script can be used to keep track of what has been repaired and 
 when. This should (1) allow incremental repair to happen now/soon, and 
 (2) allow experimentation and evaluation for an implementation of 
 CASSANDRA-2699, which I still think is a good idea. This patch does nothing to 
 help the average deployment, but at least makes incremental repair possible 
 given sufficient effort spent on external scripting.
 The big no-no about the patch is that it is entirely specific to 
 RandomPartitioner and BigIntegerToken. If someone can suggest a way to 
 implement this command generically using the Range/Token abstractions, I'd be 
 happy to hear suggestions.
 An alternative would be to provide a nodetool command that allows you to 
 simply specify the specific token ranges on the command line. It makes using 
 it a bit more difficult, but would mean that it works for any partitioner and 
 token type.
 Unless someone can suggest a better way to do this, I think I'll provide a 
 patch that does this. I'm still leaning towards supporting the simple step N 
 out of M form though.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3913) Incorrect InetAddress equality test

2012-02-15 Thread Brandon Williams (Created) (JIRA)
Incorrect InetAddress equality test
---

 Key: CASSANDRA-3913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3913
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.6, 0.8.9
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8
 Attachments: 3913.txt

CASSANDRA-3485 introduced some InetAddress checks using == instead of .equals.
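
The distinction matters because two InetAddress instances for the same address 
are generally different objects, so reference equality can fail where .equals 
succeeds; a quick standalone illustration (not part of 3913.txt):

{code}
import java.net.InetAddress;

public class InetAddressEqualityDemo
{
    public static void main(String[] args) throws Exception
    {
        InetAddress a = InetAddress.getByName("127.0.0.1");
        InetAddress b = InetAddress.getByAddress(new byte[]{ 127, 0, 0, 1 });

        System.out.println(a == b);      // false: different object instances
        System.out.println(a.equals(b)); // true: same IP address
    }
}
{code}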

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3913) Incorrect InetAddress equality test

2012-02-15 Thread Brandon Williams (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-3913:


Attachment: 3913.txt

 Incorrect InetAddress equality test
 ---

 Key: CASSANDRA-3913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3913
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.9, 1.0.6
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3913.txt


 CASSANDRA-3485 introduced some InetAddress checks using == instead of .equals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3907) Support compression using BulkWriter

2012-02-15 Thread Brandon Williams (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208419#comment-13208419
 ] 

Brandon Williams commented on CASSANDRA-3907:
-

Your understanding is correct.

 Support compression using BulkWriter
 

 Key: CASSANDRA-3907
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3907
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Chris Goffinet
Assignee: Chris Goffinet
 Fix For: 1.1.0

 Attachments: 0001-Add-compression-support-to-BulkWriter.patch


 Currently there is no way to enable compression using BulkWriter. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




git commit: Add missing serializer in CompactSerializerTest

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1 212fd417f -> 10b3dcc96


Add missing serializer in CompactSerializerTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/10b3dcc9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/10b3dcc9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/10b3dcc9

Branch: refs/heads/cassandra-1.1
Commit: 10b3dcc9672c9e748469eb5df5826639d8c88c8d
Parents: 212fd41
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 13:50:22 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 13:50:22 2012 +0100

--
 .../apache/cassandra/io/CompactSerializerTest.java |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/10b3dcc9/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/CompactSerializerTest.java 
b/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
index e8e2068..befb0d1 100644
--- a/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
+++ b/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
@@ -70,6 +70,7 @@ public class CompactSerializerTest extends CleanupHelper
 expectedClassNames.add("HashableSerializer");
 expectedClassNames.add("StreamingRepairTaskSerializer");
 expectedClassNames.add("AbstractBoundsSerializer");
+expectedClassNames.add("SnapshotCommandSerializer");
 
 discoveredClassNames = new ArrayList<String>();
 String cp = System.getProperty("java.class.path");



[jira] [Created] (CASSANDRA-3914) Remove py_stress

2012-02-15 Thread Brandon Williams (Created) (JIRA)
Remove py_stress


 Key: CASSANDRA-3914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3914
 Project: Cassandra
  Issue Type: Task
  Components: Tools
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.1.0


I don't even know if this works anymore.  Patch is 'git rm -rf tools/py_stress'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2699) continuous incremental anti-entropy

2012-02-15 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208467#comment-13208467
 ] 

Jonathan Ellis commented on CASSANDRA-2699:
---

I assume you meant to link a different issue?

 continuous incremental anti-entropy
 ---

 Key: CASSANDRA-2699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2699
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Peter Schuller

 Currently, repair works by periodically running bulk jobs that (1)
 performs a validating compaction building up an in-memory merkle tree,
 and (2) streaming ring segments as needed according to differences
 indicated by the merkle tree.
 There are some disadvantages to this approach:
 * There is a trade-off between memory usage and the precision of the
   merkle tree. Less precision means more data streamed relative to
   what is strictly required.
 * Repair is a periodic bulk process that runs for a significant
   period and, although possibly rate limited as compaction (if 0.8 or
   backported throttling patch applied), is a divergence in terms of
   performance characteristics from normal operation of the cluster.
 * The impact of imprecision can be huge on a workload dominated by I/O
   and with cache locality being critical, since you will suddenly
   transfer lots of data to the target node.
 I propose a more incremental process whereby anti-entropy is
 incremental and continuous over time. In order to avoid being
 seek-bound one still wants to do work in some form of bursty fashion,
 but the amount of data processed at a time could be sufficiently small
 that the impact on the cluster feels a lot more continuous, and that
 the page cache allows us to avoid re-reading differing data twice.
 Consider a process whereby a node is constantly performing a per-CF
 repair operation for each CF. The current state of the repair process
 is defined by:
 * A starting timestamp of the current iteration through the token
   range the node is responsible for.
 * A finger indicating the current position along the token ring to
   which iteration has completed.
 This information, other than being in-memory, could periodically (every
 few minutes or something) be stored persistently on disk.
 The finger advances by the node selecting the next small bit of the
 ring and doing whatever merkling/hashing/checksumming is necessary on
 that small part, and then asking neighbors to do the same, and
 arranging for neighbors to send the node data for mismatching
 ranges. The data would be sent either by way of mutations like with
 read repair, or by streaming sstables. But it would be small amounts
 of data that will act roughly the same as regular writes from the
 perspective of compaction.
 Some nice properties of this approach:
 * It's always on; no periodic sudden effects on cluster performance.
 * Restarting nodes never cancels or breaks anti-entropy.
 * Huge compactions of entire CF:s never clog up the compaction queue
   (not necessarily a non-issue even with concurrent compactions in
   0.8).
 * Because we're always operating on small chunks, there is never the
   same kind of trade-off for memory use. A merkle tree or similar
   could be calculated at a very detailed level potentially. Although
   the precision from the perspective of reading from disk would likely
   not matter much if we are in page cache anyway, very high precision
   could be *very* useful when doing anti-entropy across data centers
   on slow links.
 There are devils in details, like how to select an appropriate ring
 segment given that you don't have knowledge of the data density on
 other nodes. But I feel that the overall idea/process seems very
 promising.
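 A minimal sketch of the per-CF state and advance loop described above; all 
 names are illustrative and nothing here comes from an actual patch:

{code}
import java.math.BigInteger;

public class ContinuousRepairState
{
    // persisted every few minutes so restarts never lose progress
    private long iterationStartedAt = System.currentTimeMillis();
    private BigInteger finger = BigInteger.ZERO; // position reached on the ring

    public void advance(BigInteger ringEnd, BigInteger chunkSize)
    {
        BigInteger next = finger.add(chunkSize).min(ringEnd);

        // merkle/hash the small slice (finger, next], ask the neighbours to do
        // the same, and have them send data for any mismatching sub-range
        repairSlice(finger, next);

        finger = next;
        if (finger.equals(ringEnd))
        {
            // wrap around and start a new pass over the token range
            finger = BigInteger.ZERO;
            iterationStartedAt = System.currentTimeMillis();
        }
        persist();
    }

    void repairSlice(BigInteger from, BigInteger to) {}
    void persist() {}
}
{code}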

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3913) Incorrect InetAddress equality test

2012-02-15 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208468#comment-13208468
 ] 

Jonathan Ellis commented on CASSANDRA-3913:
---

+1

I'd suggest committing to 0.8 as well just in case we do another release there

 Incorrect InetAddress equality test
 ---

 Key: CASSANDRA-3913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3913
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.9, 1.0.6
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3913.txt


 CASSANDRA-3485 introduced some InetAddress checks using == instead of .equals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-1123) Allow tracing query details

2012-02-15 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208503#comment-13208503
 ] 

Jonathan Ellis commented on CASSANDRA-1123:
---

Another useful event would be username, if authenticated.  Since authentication 
happens once per connection we'd want a separate CF? for per-conn information.

 Allow tracing query details
 ---

 Key: CASSANDRA-1123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1123
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Aaron Morton
 Fix For: 1.2

 Attachments: 1123-3.patch.gz


 In the spirit of CASSANDRA-511, it would be useful to have tracing on queries 
 to see where latency is coming from: how long did the row cache lookup take? 
 the key search in the index? merging the data from the sstables? etc.
 The main difference vs setting debug logging is that debug logging is too big 
 of a hammer; by turning on the flood of logging for everyone, you actually 
 distort the information you're looking for.  This would be something you 
 could set per-query (or more likely per connection).
 We don't need to be as sophisticated as the techniques discussed in the 
 following papers but they are interesting reading:
 http://research.google.com/pubs/pub36356.html
 http://www.usenix.org/events/osdi04/tech/full_papers/barham/barham_html/
 http://www.usenix.org/event/nsdi07/tech/fonseca.html
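 A minimal sketch of the kind of per-query stage timing being discussed; this 
 is a hypothetical helper, not an API from the attached patch:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryTraceSketch
{
    private final Map<String, Long> stageNanos = new LinkedHashMap<String, Long>();
    private long last = System.nanoTime();

    // record how long the stage that just finished took, e.g.
    // trace.mark("row cache lookup"); ... trace.mark("index key search");
    public void mark(String stage)
    {
        long now = System.nanoTime();
        stageNanos.put(stage, now - last);
        last = now;
    }

    public Map<String, Long> elapsedByStage()
    {
        return stageNanos;
    }
}
{code}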

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-3864) Unit tests failures in 1.1

2012-02-15 Thread Sylvain Lebresne (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-3864.
-

   Resolution: Fixed
Fix Version/s: 1.1.0

Ok, I think the problems of that ticket are now all solved, closing

 Unit tests failures in 1.1
 --

 Key: CASSANDRA-3864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3864
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Brandon Williams
 Fix For: 1.1.0

 Attachments: 0001-Fix-DefsTest.patch, 
 0002-Fix-SSTableImportTest.patch, 0003-Fix-CompositeTypeTest.patch


 On the current 1.1 branch I get the following errors:
 # SSTableImportTest:
 {noformat}
 [junit] Testcase: 
 testImportSimpleCf(org.apache.cassandra.tools.SSTableImportTest):   Caused an 
 ERROR
 [junit] java.lang.Integer cannot be cast to java.lang.Long
 [junit] java.lang.ClassCastException: java.lang.Integer cannot be cast to 
 java.lang.Long
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport$JsonColumn.<init>(SSTableImport.java:132)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.addColumnsToCF(SSTableImport.java:191)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.addToStandardCF(SSTableImport.java:174)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:255)
 [junit]   at 
 org.apache.cassandra.tools.SSTableImportTest.testImportSimpleCf(SSTableImportTest.java:60)
 {noformat}
 # CompositeTypeTest:
 {noformat}
 [junit] Testcase: 
 testCompatibility(org.apache.cassandra.db.marshal.CompositeTypeTest):   
 Caused an ERROR
 [junit] Invalid comparator class 
 org.apache.cassandra.db.marshal.CompositeType: must define a public static 
 instance field or a public static method getInstance(TypeParser).
 [junit] org.apache.cassandra.config.ConfigurationException: Invalid 
 comparator class org.apache.cassandra.db.marshal.CompositeType: must define a 
 public static instance field or a public static method 
 getInstance(TypeParser).
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.getRawAbstractType(TypeParser.java:294)
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.getAbstractType(TypeParser.java:268)
 [junit]   at 
 org.apache.cassandra.db.marshal.TypeParser.parse(TypeParser.java:81)
 [junit]   at 
 org.apache.cassandra.db.marshal.CompositeTypeTest.testCompatibility(CompositeTypeTest.java:216)
 {noformat}
 # DefsTest:
 {noformat}
 [junit] Testcase: 
 testUpdateColumnFamilyNoIndexes(org.apache.cassandra.db.DefsTest):  FAILED
 [junit] Should have blown up when you used a different comparator.
 [junit] junit.framework.AssertionFailedError: Should have blown up when you 
 used a different comparator.
 [junit]   at 
 org.apache.cassandra.db.DefsTest.testUpdateColumnFamilyNoIndexes(DefsTest.java:539)
 {noformat}
 # CompactSerializerTest:
 {noformat}
 [junit] null
 [junit] java.lang.ExceptionInInitializerError
 [junit]   at 
 org.apache.cassandra.db.SystemTable.getCurrentLocalNodeId(SystemTable.java:437)
 [junit]   at 
 org.apache.cassandra.utils.NodeId$LocalNodeIdHistory.<init>(NodeId.java:195)
 [junit]   at 
 org.apache.cassandra.utils.NodeId$LocalIds.<clinit>(NodeId.java:43)
 [junit]   at java.lang.Class.forName0(Native Method)
 [junit]   at java.lang.Class.forName(Class.java:169)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:96)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest$1DirScanner.scan(CompactSerializerTest.java:87)
 [junit]   at 
 org.apache.cassandra.io.CompactSerializerTest.scanClasspath(CompactSerializerTest.java:129)
 [junit] Caused by: java.lang.NullPointerException
 [junit]   at 
 org.apache.cassandra.config.DatabaseDescriptor.createAllDirectories(DatabaseDescriptor.java:574)
 [junit]   at org.apache.cassandra.db.Table.<clinit>(Table.java:82)
 {noformat}
 There are also some errors in RemoveSubColumnTest, but I'll open a separate 
 ticket for those as they may require a bit more discussion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

git commit: Correct InetAddress equality comparisons. Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913

2012-02-15 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-0.8 038b8f212 -> c0a342bc8


Correct InetAddress equality comparisons.
Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c0a342bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c0a342bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c0a342bc

Branch: refs/heads/cassandra-0.8
Commit: c0a342bc85482eb2ae14c15a349188ef27ed98f5
Parents: 038b8f2
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 15 09:05:45 2012 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 15 09:05:45 2012 -0600

--
 src/java/org/apache/cassandra/db/SystemTable.java |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c0a342bc/src/java/org/apache/cassandra/db/SystemTable.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemTable.java 
b/src/java/org/apache/cassandra/db/SystemTable.java
index 870955f..b319f61 100644
--- a/src/java/org/apache/cassandra/db/SystemTable.java
+++ b/src/java/org/apache/cassandra/db/SystemTable.java
@@ -100,7 +100,7 @@ public class SystemTable
  */
 public static synchronized void updateToken(InetAddress ep, Token token)
 {
-if (ep == FBUtilities.getLocalAddress())
+if (ep.equals(FBUtilities.getLocalAddress()))
 return;
 IPartitioner p = StorageService.getPartitioner();
 ColumnFamily cf = ColumnFamily.create(Table.SYSTEM_TABLE, STATUS_CF);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c0a342bc/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 0f47259..be94f31 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -996,7 +996,7 @@ public class Gossiper implements 
IFailureDetectionEventListener
  */
 public void addSavedEndpoint(InetAddress ep)
 {
-if (ep == FBUtilities.getLocalAddress())
+if (ep.equals(FBUtilities.getLocalAddress()))
 {
 logger.debug("Attempt to add self as saved endpoint");
 return;



git commit: Correct InetAddress equality comparisons. Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913

2012-02-15 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.0 b3bc28bdb -> 984194d0c


Correct InetAddress equality comparisons.
Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/984194d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/984194d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/984194d0

Branch: refs/heads/cassandra-1.0
Commit: 984194d0c62e922d4600046456df7aa3a85ca4f1
Parents: b3bc28b
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 15 09:06:52 2012 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 15 09:07:33 2012 -0600

--
 src/java/org/apache/cassandra/db/SystemTable.java |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/db/SystemTable.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemTable.java 
b/src/java/org/apache/cassandra/db/SystemTable.java
index 7e13fc4..628719b 100644
--- a/src/java/org/apache/cassandra/db/SystemTable.java
+++ b/src/java/org/apache/cassandra/db/SystemTable.java
@@ -138,7 +138,7 @@ public class SystemTable
  */
 public static synchronized void updateToken(InetAddress ep, Token token)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 removeToken(token);
 return;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 96576fb..9dd73ac 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1095,7 +1095,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
  */
 public void addSavedEndpoint(InetAddress ep)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Attempt to add self as saved endpoint");
 return;



[jira] [Updated] (CASSANDRA-3913) Incorrect InetAddress equality test

2012-02-15 Thread Brandon Williams (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-3913:


Fix Version/s: 0.8.11

 Incorrect InetAddress equality test
 ---

 Key: CASSANDRA-3913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3913
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.9, 1.0.6
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 0.8.11, 1.0.8

 Attachments: 3913.txt


 CASSANDRA-3485 introduced some InetAddress checks using == instead of .equals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3915) Fix LazilyCompactedRowTest

2012-02-15 Thread Sylvain Lebresne (Created) (JIRA)
Fix LazilyCompactedRowTest
--

 Key: CASSANDRA-3915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3915
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor


LazilyCompactedRowTest.testTwoRowSuperColumn has never really worked. It uses 
LazilyCompactedRowTest.assertBytes(), which assumes standard columns, even though 
that test is for super columns. For some reason, the deserialization of the 
super columns as columns was not breaking anything and so the test was passing, 
but CASSANDRA-3872 changed that, and 
LazilyCompactedRowTest.testTwoRowSuperColumn fails on the current cassandra-1.1 
branch (but it's not CASSANDRA-3872's fault, just the test that is buggy).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3915) Fix LazilyCompactedRowTest

2012-02-15 Thread Sylvain Lebresne (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3915:


Attachment: 3915.patch

Attached patch to fix the tests. I believe the patch applies cleanly to both 1.0 
(for which the test doesn't fail but is still broken) and 1.1 (where it does fix 
the test failure).

 Fix LazilyCompactedRowTest
 --

 Key: CASSANDRA-3915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3915
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.8, 1.1.0

 Attachments: 3915.patch


 LazilyCompactedRowTest.testTwoRowSuperColumn has never really worked. It uses 
 LazilyCompactedRowTest.assertBytes(), which assumes standard columns, even 
 though that test is for super columns. For some reason, the deserialization 
 of the super columns as columns was not breaking anything and so the test was 
 passing, but CASSANDRA-3872 changed that, and 
 LazilyCompactedRowTest.testTwoRowSuperColumn fails on the current cassandra-1.1 
 branch (but it's not CASSANDRA-3872's fault, just the test that is buggy).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3915) Fix LazilyCompactedRowTest

2012-02-15 Thread Brandon Williams (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208543#comment-13208543
 ] 

Brandon Williams commented on CASSANDRA-3915:
-

+1

 Fix LazilyCompactedRowTest
 --

 Key: CASSANDRA-3915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3915
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.0.8, 1.1.0

 Attachments: 3915.patch


 LazilyCompactedRowTest.testTwoRowSuperColumn has never really worked. It uses 
 LazilyCompactedRowTest.assertBytes(), which assumes standard columns, even 
 though that test is for super columns. For some reason, the deserialization 
 of the super columns as columns was not breaking anything and so the test was 
 passing, but CASSANDRA-3872 changed that, and 
 LazilyCompactedRowTest.testTwoRowSuperColumn fails on the current cassandra-1.1 
 branch (but it's not CASSANDRA-3872's fault, just the test that is buggy).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




git commit: Fix LazilyCompactedRowTest

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.0 984194d0c -> 636e41dc3


Fix LazilyCompactedRowTest

patch by slebresne; reviewed by driftx for CASSANDRA-3915


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/636e41dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/636e41dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/636e41dc

Branch: refs/heads/cassandra-1.0
Commit: 636e41dc3c1827ce438c98fb7a45c2a511083842
Parents: 984194d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:41:46 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:41:46 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 2 files changed, 34 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/src/java/org/apache/cassandra/db/SuperColumn.java
--
diff --git a/src/java/org/apache/cassandra/db/SuperColumn.java 
b/src/java/org/apache/cassandra/db/SuperColumn.java
index 75f166c..4358e99 100644
--- a/src/java/org/apache/cassandra/db/SuperColumn.java
+++ b/src/java/org/apache/cassandra/db/SuperColumn.java
@@ -26,6 +26,9 @@ import java.security.MessageDigest;
 import java.util.Collection;
 import java.util.Comparator;
 
+import com.google.common.base.Objects;
+import com.google.common.collect.Iterables;
+
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.MarshalException;
@@ -301,6 +304,31 @@ public class SuperColumn extends AbstractColumnContainer 
implements IColumn
 column.validateFields(metadata);
 }
 }
+
+@Override
+public boolean equals(Object o)
+{
+if (this == o)
+return true;
+if (o == null || getClass() != o.getClass())
+return false;
+
+SuperColumn sc = (SuperColumn)o;
+
+if (!name.equals(sc.name))
+return false;
+if (getMarkedForDeleteAt() != sc.getMarkedForDeleteAt())
+return false;
+if (getLocalDeletionTime() != sc.getLocalDeletionTime())
+return false;
+return Iterables.elementsEqual(columns, sc.columns);
+}
+
+@Override
+public int hashCode()
+{
+return Objects.hashCode(name, getMarkedForDeleteAt(), 
getLocalDeletionTime(), columns);
+}
 }
 
 class SuperColumnSerializer implements IColumnSerializer

http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java 
b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
index 94c71f5..65b2e7e 100644
--- a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
+++ b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
@@ -66,7 +66,7 @@ public class LazilyCompactedRowTest extends CleanupHelper
 AbstractCompactionIterable lazy = new 
CompactionIterable(OperationType.UNKNOWN,
  sstables,
  new 
LazilyCompactingController(cfs, sstables, gcBefore, false));
-assertBytes(sstables, eager, lazy);
+assertBytes(cfs, sstables, eager, lazy);
 
 // compare eager and parallel-lazy compactions
 eager = new CompactionIterable(OperationType.UNKNOWN,
@@ -76,10 +76,10 @@ public class LazilyCompactedRowTest extends CleanupHelper
  
sstables,
  
new CompactionController(cfs, sstables, gcBefore, false),
  
0);
-assertBytes(sstables, eager, parallel);
+assertBytes(cfs, sstables, eager, parallel);
 }
 
-private static void assertBytes(Collection<SSTableReader> sstables, AbstractCompactionIterable ci1, AbstractCompactionIterable ci2) throws IOException
+private static void assertBytes(ColumnFamilyStore cfs, Collection<SSTableReader> sstables, AbstractCompactionIterable ci1, AbstractCompactionIterable ci2) throws IOException
 {
 CloseableIterator<AbstractCompactedRow> iter1 = ci1.iterator();
 CloseableIterator<AbstractCompactedRow> iter2 = ci2.iterator();
@@ -132,8 +132,8 @@ public class LazilyCompactedRowTest extends CleanupHelper
 assert bytes1.equals(bytes2);
 

[3/4] git commit: Correct InetAddress equality comparisons. Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913

2012-02-15 Thread slebresne
Correct InetAddress equality comparisons.
Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/984194d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/984194d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/984194d0

Branch: refs/heads/cassandra-1.1
Commit: 984194d0c62e922d4600046456df7aa3a85ca4f1
Parents: b3bc28b
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 15 09:06:52 2012 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 15 09:07:33 2012 -0600

--
 src/java/org/apache/cassandra/db/SystemTable.java |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/db/SystemTable.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemTable.java 
b/src/java/org/apache/cassandra/db/SystemTable.java
index 7e13fc4..628719b 100644
--- a/src/java/org/apache/cassandra/db/SystemTable.java
+++ b/src/java/org/apache/cassandra/db/SystemTable.java
@@ -138,7 +138,7 @@ public class SystemTable
  */
 public static synchronized void updateToken(InetAddress ep, Token token)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 removeToken(token);
 return;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 96576fb..9dd73ac 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1095,7 +1095,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
  */
 public void addSavedEndpoint(InetAddress ep)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Attempt to add self as saved endpoint");
 return;



[4/4] git commit: add method signature for processing a already parsed CQL statement

2012-02-15 Thread slebresne
add method signature for processing a already parsed CQL statement


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3bc28bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3bc28bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3bc28bd

Branch: refs/heads/cassandra-1.1
Commit: b3bc28bdb49d1c683de61b0ac0d63fead3cf5c3b
Parents: f2a4309
Author: T Jake Luciani jak...@gmail.com
Authored: Tue Feb 14 15:36:43 2012 -0500
Committer: T Jake Luciani jak...@gmail.com
Committed: Tue Feb 14 15:36:43 2012 -0500

--
 .../org/apache/cassandra/cql/QueryProcessor.java   |7 +++
 1 files changed, 7 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3bc28bd/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index b037d17..3008d99 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -498,6 +498,13 @@ public class QueryProcessor
 logger.trace("CQL QUERY: {}", queryString);
 
 CQLStatement statement = getStatement(queryString);
+
+return process(statement, clientState);
+}
+
+public static CqlResult process(CQLStatement statement, ClientState clientState)
+throws RecognitionException, UnavailableException, InvalidRequestException, TimedOutException, SchemaDisagreementException
+{
 String keyspace = null;
 
 // Some statements won't have (or don't need) a keyspace (think USE, or CREATE).



[1/4] git commit: Merge branch 'cassandra-1.0' into cassandra-1.1

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1 10b3dcc96 -> e46753381


Merge branch 'cassandra-1.0' into cassandra-1.1

Conflicts:
src/java/org/apache/cassandra/cql/QueryProcessor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4675338
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4675338
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4675338

Branch: refs/heads/cassandra-1.1
Commit: e467533816409e2446de1197260623461614481d
Parents: 10b3dcc 636e41d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:45:19 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:45:19 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 src/java/org/apache/cassandra/db/SystemTable.java  |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java|2 +-
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 4 files changed, 36 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/db/SuperColumn.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/db/SystemTable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/gms/Gossiper.java
--



[2/4] git commit: Fix LazilyCompactedRowTest

2012-02-15 Thread slebresne
Fix LazilyCompactedRowTest

patch by slebresne; reviewed by driftx for CASSANDRA-3915


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/636e41dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/636e41dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/636e41dc

Branch: refs/heads/cassandra-1.1
Commit: 636e41dc3c1827ce438c98fb7a45c2a511083842
Parents: 984194d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:41:46 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:41:46 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 2 files changed, 34 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/src/java/org/apache/cassandra/db/SuperColumn.java
--
diff --git a/src/java/org/apache/cassandra/db/SuperColumn.java 
b/src/java/org/apache/cassandra/db/SuperColumn.java
index 75f166c..4358e99 100644
--- a/src/java/org/apache/cassandra/db/SuperColumn.java
+++ b/src/java/org/apache/cassandra/db/SuperColumn.java
@@ -26,6 +26,9 @@ import java.security.MessageDigest;
 import java.util.Collection;
 import java.util.Comparator;
 
+import com.google.common.base.Objects;
+import com.google.common.collect.Iterables;
+
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.MarshalException;
@@ -301,6 +304,31 @@ public class SuperColumn extends AbstractColumnContainer 
implements IColumn
 column.validateFields(metadata);
 }
 }
+
+@Override
+public boolean equals(Object o)
+{
+if (this == o)
+return true;
+if (o == null || getClass() != o.getClass())
+return false;
+
+SuperColumn sc = (SuperColumn)o;
+
+if (!name.equals(sc.name))
+return false;
+if (getMarkedForDeleteAt() != sc.getMarkedForDeleteAt())
+return false;
+if (getLocalDeletionTime() != sc.getLocalDeletionTime())
+return false;
+return Iterables.elementsEqual(columns, sc.columns);
+}
+
+@Override
+public int hashCode()
+{
+return Objects.hashCode(name, getMarkedForDeleteAt(), 
getLocalDeletionTime(), columns);
+}
 }
 
 class SuperColumnSerializer implements IColumnSerializer

http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java 
b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
index 94c71f5..65b2e7e 100644
--- a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
+++ b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
@@ -66,7 +66,7 @@ public class LazilyCompactedRowTest extends CleanupHelper
 AbstractCompactionIterable lazy = new 
CompactionIterable(OperationType.UNKNOWN,
  sstables,
  new 
LazilyCompactingController(cfs, sstables, gcBefore, false));
-assertBytes(sstables, eager, lazy);
+assertBytes(cfs, sstables, eager, lazy);
 
 // compare eager and parallel-lazy compactions
 eager = new CompactionIterable(OperationType.UNKNOWN,
@@ -76,10 +76,10 @@ public class LazilyCompactedRowTest extends CleanupHelper
  
sstables,
  
new CompactionController(cfs, sstables, gcBefore, false),
  
0);
-assertBytes(sstables, eager, parallel);
+assertBytes(cfs, sstables, eager, parallel);
 }
 
-private static void assertBytes(Collection<SSTableReader> sstables, AbstractCompactionIterable ci1, AbstractCompactionIterable ci2) throws IOException
+private static void assertBytes(ColumnFamilyStore cfs, Collection<SSTableReader> sstables, AbstractCompactionIterable ci1, AbstractCompactionIterable ci2) throws IOException
 {
 CloseableIterator<AbstractCompactedRow> iter1 = ci1.iterator();
 CloseableIterator<AbstractCompactedRow> iter2 = ci2.iterator();
@@ -132,8 +132,8 @@ public class LazilyCompactedRowTest extends CleanupHelper
 assert bytes1.equals(bytes2);
 
 // cf metadata
-ColumnFamily cf1 = 

[3/6] git commit: Fix LazilyCompactedRowTest

2012-02-15 Thread slebresne
Fix LazilyCompactedRowTest

patch by slebresne; reviewed by driftx for CASSANDRA-3915


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/636e41dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/636e41dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/636e41dc

Branch: refs/heads/trunk
Commit: 636e41dc3c1827ce438c98fb7a45c2a511083842
Parents: 984194d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:41:46 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:41:46 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 2 files changed, 34 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/src/java/org/apache/cassandra/db/SuperColumn.java
--
diff --git a/src/java/org/apache/cassandra/db/SuperColumn.java 
b/src/java/org/apache/cassandra/db/SuperColumn.java
index 75f166c..4358e99 100644
--- a/src/java/org/apache/cassandra/db/SuperColumn.java
+++ b/src/java/org/apache/cassandra/db/SuperColumn.java
@@ -26,6 +26,9 @@ import java.security.MessageDigest;
 import java.util.Collection;
 import java.util.Comparator;
 
+import com.google.common.base.Objects;
+import com.google.common.collect.Iterables;
+
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.MarshalException;
@@ -301,6 +304,31 @@ public class SuperColumn extends AbstractColumnContainer 
implements IColumn
 column.validateFields(metadata);
 }
 }
+
+@Override
+public boolean equals(Object o)
+{
+if (this == o)
+return true;
+if (o == null || getClass() != o.getClass())
+return false;
+
+SuperColumn sc = (SuperColumn)o;
+
+if (!name.equals(sc.name))
+return false;
+if (getMarkedForDeleteAt() != sc.getMarkedForDeleteAt())
+return false;
+if (getLocalDeletionTime() != sc.getLocalDeletionTime())
+return false;
+return Iterables.elementsEqual(columns, sc.columns);
+}
+
+@Override
+public int hashCode()
+{
+return Objects.hashCode(name, getMarkedForDeleteAt(), 
getLocalDeletionTime(), columns);
+}
 }
 
 class SuperColumnSerializer implements IColumnSerializer

http://git-wip-us.apache.org/repos/asf/cassandra/blob/636e41dc/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java 
b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
index 94c71f5..65b2e7e 100644
--- a/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
+++ b/test/unit/org/apache/cassandra/io/LazilyCompactedRowTest.java
@@ -66,7 +66,7 @@ public class LazilyCompactedRowTest extends CleanupHelper
 AbstractCompactionIterable lazy = new 
CompactionIterable(OperationType.UNKNOWN,
  sstables,
  new 
LazilyCompactingController(cfs, sstables, gcBefore, false));
-assertBytes(sstables, eager, lazy);
+assertBytes(cfs, sstables, eager, lazy);
 
 // compare eager and parallel-lazy compactions
 eager = new CompactionIterable(OperationType.UNKNOWN,
@@ -76,10 +76,10 @@ public class LazilyCompactedRowTest extends CleanupHelper
  
sstables,
  
new CompactionController(cfs, sstables, gcBefore, false),
  
0);
-assertBytes(sstables, eager, parallel);
+assertBytes(cfs, sstables, eager, parallel);
 }
 
-private static void assertBytes(Collection<SSTableReader> sstables, 
AbstractCompactionIterable ci1, AbstractCompactionIterable ci2) throws 
IOException
+private static void assertBytes(ColumnFamilyStore cfs, 
Collection<SSTableReader> sstables, AbstractCompactionIterable ci1, 
AbstractCompactionIterable ci2) throws IOException
 {
 CloseableIterator<AbstractCompactedRow> iter1 = ci1.iterator();
 CloseableIterator<AbstractCompactedRow> iter2 = ci2.iterator();
@@ -132,8 +132,8 @@ public class LazilyCompactedRowTest extends CleanupHelper
 assert bytes1.equals(bytes2);
 
 // cf metadata
-ColumnFamily cf1 = 

[4/6] git commit: Correct InetAddress equality comparisons. Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913

2012-02-15 Thread slebresne
Correct InetAddress equality comparisons.
Patch by brandonwilliams reviewed by jbellis for CASSANDRA-3913


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/984194d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/984194d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/984194d0

Branch: refs/heads/trunk
Commit: 984194d0c62e922d4600046456df7aa3a85ca4f1
Parents: b3bc28b
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 15 09:06:52 2012 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 15 09:07:33 2012 -0600

--
 src/java/org/apache/cassandra/db/SystemTable.java |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/db/SystemTable.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemTable.java 
b/src/java/org/apache/cassandra/db/SystemTable.java
index 7e13fc4..628719b 100644
--- a/src/java/org/apache/cassandra/db/SystemTable.java
+++ b/src/java/org/apache/cassandra/db/SystemTable.java
@@ -138,7 +138,7 @@ public class SystemTable
  */
 public static synchronized void updateToken(InetAddress ep, Token token)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 removeToken(token);
 return;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/984194d0/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index 96576fb..9dd73ac 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -1095,7 +1095,7 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
  */
 public void addSavedEndpoint(InetAddress ep)
 {
-if (ep == FBUtilities.getBroadcastAddress())
+if (ep.equals(FBUtilities.getBroadcastAddress()))
 {
 logger.debug("Attempt to add self as saved endpoint");
 return;
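
The reason == is wrong here: it compares object identity, while
InetAddress.equals() compares the underlying address, so two instances that
represent the same IP can still fail an == check. A minimal standalone
illustration, assuming nothing beyond the JDK (this is not Cassandra code):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class InetAddressEqualityDemo
{
    public static void main(String[] args) throws UnknownHostException
    {
        // getByAddress always builds a fresh instance, so a and b are two
        // distinct objects that represent the same IP address.
        InetAddress a = InetAddress.getByAddress(new byte[] { 127, 0, 0, 1 });
        InetAddress b = InetAddress.getByAddress(new byte[] { 127, 0, 0, 1 });

        System.out.println("a == b      : " + (a == b));       // false: identity comparison
        System.out.println("a.equals(b) : " + a.equals(b));    // true: compares the address bytes
    }
}
{code}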



[1/6] git commit: Merge branch 'cassandra-1.1' into trunk

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/trunk ddee43e84 -> 875e05aa9


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/875e05aa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/875e05aa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/875e05aa

Branch: refs/heads/trunk
Commit: 875e05aa99ebe683df0841cbbd2adb5f36072f42
Parents: ddee43e e467533
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:46:21 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:46:21 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 src/java/org/apache/cassandra/db/SystemTable.java  |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java|2 +-
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 4 files changed, 36 insertions(+), 8 deletions(-)
--




[2/6] git commit: Merge branch 'cassandra-1.0' into cassandra-1.1

2012-02-15 Thread slebresne
Merge branch 'cassandra-1.0' into cassandra-1.1

Conflicts:
src/java/org/apache/cassandra/cql/QueryProcessor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4675338
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4675338
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4675338

Branch: refs/heads/trunk
Commit: e467533816409e2446de1197260623461614481d
Parents: 10b3dcc 636e41d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:45:19 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:45:19 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java  |   28 +++
 src/java/org/apache/cassandra/db/SystemTable.java  |2 +-
 src/java/org/apache/cassandra/gms/Gossiper.java|2 +-
 .../cassandra/io/LazilyCompactedRowTest.java   |   12 +++---
 4 files changed, 36 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/db/SuperColumn.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/db/SystemTable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4675338/src/java/org/apache/cassandra/gms/Gossiper.java
--



[5/6] git commit: Add missing serializer in CompactSerializerTest

2012-02-15 Thread slebresne
Add missing serializer in CompactSerializerTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/10b3dcc9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/10b3dcc9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/10b3dcc9

Branch: refs/heads/trunk
Commit: 10b3dcc9672c9e748469eb5df5826639d8c88c8d
Parents: 212fd41
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 13:50:22 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 13:50:22 2012 +0100

--
 .../apache/cassandra/io/CompactSerializerTest.java |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/10b3dcc9/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/CompactSerializerTest.java 
b/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
index e8e2068..befb0d1 100644
--- a/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
+++ b/test/unit/org/apache/cassandra/io/CompactSerializerTest.java
@@ -70,6 +70,7 @@ public class CompactSerializerTest extends CleanupHelper
 expectedClassNames.add("HashableSerializer");
 expectedClassNames.add("StreamingRepairTaskSerializer");
 expectedClassNames.add("AbstractBoundsSerializer");
+expectedClassNames.add("SnapshotCommandSerializer");
 
 discoveredClassNames = new ArrayList<String>();
 String cp = System.getProperty("java.class.path");



[6/6] git commit: add method signature for processing a already parsed CQL statement

2012-02-15 Thread slebresne
add method signature for processing a already parsed CQL statement


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3bc28bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3bc28bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3bc28bd

Branch: refs/heads/trunk
Commit: b3bc28bdb49d1c683de61b0ac0d63fead3cf5c3b
Parents: f2a4309
Author: T Jake Luciani jak...@gmail.com
Authored: Tue Feb 14 15:36:43 2012 -0500
Committer: T Jake Luciani jak...@gmail.com
Committed: Tue Feb 14 15:36:43 2012 -0500

--
 .../org/apache/cassandra/cql/QueryProcessor.java   |7 +++
 1 files changed, 7 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3bc28bd/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index b037d17..3008d99 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -498,6 +498,13 @@ public class QueryProcessor
 logger.trace("CQL QUERY: {}", queryString);
 
 CQLStatement statement = getStatement(queryString);
+
+return process(statement, clientState);
+}
+
+public static CqlResult process(CQLStatement statement, ClientState 
clientState)
+throws RecognitionException, UnavailableException, 
InvalidRequestException, TimedOutException, SchemaDisagreementException
+{  
 String keyspace = null;
 
 // Some statements won't have (or don't need) a keyspace (think USE, 
or CREATE).
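
A hedged sketch of how a caller might use the new overload (the query text and
the clientState variable are illustrative assumptions, not code from the
Cassandra tree; only getStatement() and the new process(CQLStatement,
ClientState) signature come from the diff above):

{code}
// Parse once with the existing getStatement() helper, then execute the
// already-parsed statement directly, skipping the parse step on reuse.
CQLStatement statement = getStatement("UPDATE users SET 'age' = 36 WHERE KEY = 'jsmith'");
CqlResult result = process(statement, clientState);  // clientState assumed to be in scope
{code}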



[jira] [Created] (CASSANDRA-3917) System test failures in 1.1

2012-02-15 Thread Sylvain Lebresne (Created) (JIRA)
System test failures in 1.1
---

 Key: CASSANDRA-3917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3917
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne


On branch 1.1, I currently see two system test failures:
{noformat}
==
FAIL: 
system.test_thrift_server.TestMutations.test_get_range_slice_after_deletion
--
Traceback (most recent call last):
  File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
self.test(*self.arg)
  File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
1937, in test_get_range_slice_after_deletion
assert len(result[0].columns) == 1
AssertionError
{noformat}
and
{noformat}
==
FAIL: Test that column ttled expires from KEYS index
--
Traceback (most recent call last):
  File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
self.test(*self.arg)
  File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
1908, in test_index_scan_expiring
assert len(result) == 1, result
AssertionError: []

--
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3916) Do not bind the storage_port if internode_encryption = all

2012-02-15 Thread Wade Poziombka (Created) (JIRA)
Do not bind the storage_port if internode_encryption = all
--

 Key: CASSANDRA-3916
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3916
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.7
 Environment: Any
Reporter: Wade Poziombka


We are highly security conscious, and having additional clear-text ports open 
is undesirable.

I have modified this locally to work around it, but it seems that this is a very 
trivial fix: only bind the clear-text storage_port if internode_encryption is not 
all. If all is selected, then no clear-text communication should be permitted.
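
A minimal sketch of the requested behaviour, written against plain JDK sockets
rather than Cassandra's actual listener code (the enum and method names are
illustrative assumptions):

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class StoragePortBindingSketch
{
    enum InternodeEncryption { NONE, DC, RACK, ALL }

    // Only open the unencrypted storage_port when internode encryption is not
    // set to "all"; otherwise skip the clear-text listener entirely.
    public static ServerSocket maybeBindClearTextPort(InternodeEncryption mode,
                                                      String listenAddress,
                                                      int storagePort) throws Exception
    {
        if (mode == InternodeEncryption.ALL)
            return null;  // all traffic must use the encrypted port

        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(listenAddress, storagePort));
        return socket;
    }
}
{code}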

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3918) Add mutual auth option(s) to internode_encryption options

2012-02-15 Thread Wade Poziombka (Created) (JIRA)
Add mutual auth option(s) to internode_encryption options


 Key: CASSANDRA-3918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3918
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.7
 Environment: Any
Reporter: Wade Poziombka


To prevent rogue nodes from being added to the cluster, it is desirable to 
provide a mutual authentication option for internode_encryption.

Seems like a trivial matter of providing an option to enable "need client 
auth".
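
For reference, the standard JSSE API already exposes that switch; a hedged
sketch of what requiring client certificates on a server socket can look like,
independent of Cassandra's actual EncryptionOptions handling (keystore and
truststore configuration is assumed to happen elsewhere):

{code}
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

public class MutualAuthSocketSketch
{
    public static SSLServerSocket create(int port) throws Exception
    {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        SSLServerSocket socket = (SSLServerSocket) factory.createServerSocket(port);

        // The "need client auth" flag: the TLS handshake fails unless the
        // connecting node authenticates with its own trusted certificate.
        socket.setNeedClientAuth(true);
        return socket;
    }
}
{code}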

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3677) NPE during HH delivery when gossip turned off on target

2012-02-15 Thread Brandon Williams (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208580#comment-13208580
 ] 

Brandon Williams commented on CASSANDRA-3677:
-

It doesn't make a lot of sense.  a) we have no way to quickly find such hints, 
and b) if you removetoken the node, the data from existing replicas will be 
copied to restore the RF, so the hint isn't necessary (unless you wrote at ANY, 
in which case you've already lived dangerously.)

 NPE during HH delivery when gossip turned off on target
 ---

 Key: CASSANDRA-3677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3677
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.7
Reporter: Radim Kolar
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3677-v1.patch, 3677.txt


 probably not important bug
 ERROR [OptionalTasks:1] 2011-12-27 21:44:25,342 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.NullPointerException
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.hash(NonBlockingHashMap.java:113)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:553)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:348)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfAbsent(NonBlockingHashMap.java:319)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashSet.add(NonBlockingHashSet.java:32)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleHintDelivery(HintedHandOffManager.java:371)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:356)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:84)
 at 
 org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:119)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:679)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3677) NPE during HH delivery when gossip turned off on target

2012-02-15 Thread Sam Overton (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208583#comment-13208583
 ] 

Sam Overton commented on CASSANDRA-3677:


When a token is removed, hints intended for that endpoint are removed (see 
StorageService.excise(Token token, InetAddress endpoint)) so yes, they are lost 
for good. 

The removetoken process involves streaming from replicas to the new endpoint, 
so it should be up to date assuming writes were at CL > ANY. I can't think of a 
case where we would gain anything by delivering the hints for the removed 
endpoint (except where writes were at CL.ANY).


 NPE during HH delivery when gossip turned off on target
 ---

 Key: CASSANDRA-3677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3677
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.7
Reporter: Radim Kolar
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3677-v1.patch, 3677.txt


 probably not important bug
 ERROR [OptionalTasks:1] 2011-12-27 21:44:25,342 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.NullPointerException
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.hash(NonBlockingHashMap.java:113)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:553)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:348)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfAbsent(NonBlockingHashMap.java:319)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashSet.add(NonBlockingHashSet.java:32)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleHintDelivery(HintedHandOffManager.java:371)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:356)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:84)
 at 
 org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:119)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:679)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3917) System test failures in 1.1

2012-02-15 Thread Sylvain Lebresne (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3917:


Attachment: 3917.txt

The test_get_range_slice_after_deletion failure is actually due to a typo 
introduced during a merge (my fault). Attached a trivial patch to fix it.

Somehow I'm not able to reproduce the second failure so far; I'm not sure why.

 System test failures in 1.1
 ---

 Key: CASSANDRA-3917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3917
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 3917.txt


 On branch 1.1, I currently see two system test failures:
 {noformat}
 ==
 FAIL: 
 system.test_thrift_server.TestMutations.test_get_range_slice_after_deletion
 --
 Traceback (most recent call last):
   File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
 self.test(*self.arg)
   File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
 1937, in test_get_range_slice_after_deletion
 assert len(result[0].columns) == 1
 AssertionError
 {noformat}
 and
 {noformat}
 ==
 FAIL: Test that column ttled expires from KEYS index
 --
 Traceback (most recent call last):
   File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
 self.test(*self.arg)
   File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
 1908, in test_index_scan_expiring
 assert len(result) == 1, result
 AssertionError: []
 --
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-2699) continuous incremental anti-entropy

2012-02-15 Thread Peter Schuller (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208593#comment-13208593
 ] 

Peter Schuller commented on CASSANDRA-2699:
---

Sorry yes - CASSANDRA-3912.


 continuous incremental anti-entropy
 ---

 Key: CASSANDRA-2699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2699
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Peter Schuller

 Currently, repair works by periodically running bulk jobs that (1)
 perform a validating compaction, building up an in-memory merkle tree,
 and (2) stream ring segments as needed according to differences
 indicated by the merkle tree.
 There are some disadvantages to this approach:
 * There is a trade-off between memory usage and the precision of the
   merkle tree. Less precision means more data streamed relative to
   what is strictly required.
 * Repair is a periodic bulk process that runs for a significant
   period and, although possibly rate limited as compaction (if 0.8 or
   backported throttling patch applied), is a divergence in terms of
   performance characteristics from normal operation of the cluster.
 * The impact of imprecision can be huge on a workload dominated by I/O
   and with cache locality being critical, since you will suddenly
   transfer lots of data to the target node.
 I propose a more incremental process whereby anti-entropy is
 incremental and continuous over time. In order to avoid being
 seek-bound one still wants to do work in some form of bursty fashion,
 but the amount of data processed at a time could be sufficiently small
 that the impact on the cluster feels a lot more continuous, and that
 the page cache allows us to avoid re-reading differing data twice.
 Consider a process whereby a node is constantly performing a per-CF
 repair operation for each CF. The current state of the repair process
 is defined by:
 * A starting timestamp of the current iteration through the token
   range the node is responsible for.
 * A finger indicating the current position along the token ring to
   which iteration has completed.
 This information, other than being in-memory, could periodically (every
 few minutes or something) be stored persistently on disk.
 The finger advances by the node selecting the next small bit of the
 ring and doing whatever merkling/hashing/checksumming is necessary on
 that small part, and then asking neighbors to do the same, and
 arranging for neighbors to send the node data for mismatching
 ranges. The data would be sent either by way of mutations like with
 read repair, or by streaming sstables. But it would be small amounts
 of data that will act roughly the same as regular writes from the
 perspective of compaction.
 Some nice properties of this approach:
 * It's always on; no periodic sudden effects on cluster performance.
 * Restarting nodes never cancels or breaks anti-entropy.
 * Huge compactions of entire CF:s never clog up the compaction queue
   (not necessarily a non-issue even with concurrent compactions in
   0.8).
 * Because we're always operating on small chunks, there is never the
   same kind of trade-off for memory use. A merkle tree or similar
   could be calculated at a very detailed level potentially. Although
   the precision from the perspective of reading from disk would likely
   not matter much if we are in page cache anyway, very high precision
   could be *very* useful when doing anti-entropy across data centers
   on slow links.
 There are devils in the details, like how to select an appropriate ring
 segment given that you don't have knowledge of the data density on
 other nodes. But I feel that the overall idea/process seems very
 promising.
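
 To make the proposed process concrete, here is a heavily simplified sketch of
 the driving loop: a persisted finger advances over small slices of the node's
 range, each slice is hashed and compared against the neighbours, and only
 mismatching slices cause any data transfer. Every type and method below is an
 illustrative assumption, not an existing Cassandra API.

{code}
public class ContinuousRepairSketch
{
    interface Range {}

    interface RepairState
    {
        Range nextSlice();               // small token slice just past the finger
        void advanceFinger(Range done);  // persisted to disk every few minutes
    }

    interface Node
    {
        byte[] hashSlice(Range slice);      // merkle/checksum work for one small slice
        void streamMismatch(Range slice);   // send data for a differing slice only
    }

    // One step of the continuous loop; repeated forever at a gentle rate so the
    // load looks like ordinary write traffic rather than a periodic bulk job.
    static void repairStep(RepairState state, Node self, Iterable<Node> neighbors)
    {
        Range slice = state.nextSlice();
        byte[] localHash = self.hashSlice(slice);

        for (Node peer : neighbors)
        {
            if (!java.util.Arrays.equals(localHash, peer.hashSlice(slice)))
                peer.streamMismatch(slice);
        }
        state.advanceFinger(slice);
    }
}
{code}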

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3919) Dropping a column should do more than just remove the definition

2012-02-15 Thread Jonathan Ellis (Created) (JIRA)
Dropping a column should do more than just remove the definition


 Key: CASSANDRA-3919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3919
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.1.1


Dropping a column should:

- immediately make it unavailable for {{SELECT}}, including {{SELECT *}}
- eventually (i.e., post-compaction) reclaim the space formerly used by that 
column
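
A hedged sketch of how those two requirements could fit together on the read
and compaction paths; every name here is an illustrative assumption, not the
eventual implementation:

{code}
import java.util.Set;

public class DroppedColumnSketch
{
    static class DroppedColumn
    {
        final String name;
        final long droppedAtMicros;   // time the DROP was issued

        DroppedColumn(String name, long droppedAtMicros)
        {
            this.name = name;
            this.droppedAtMicros = droppedAtMicros;
        }
    }

    // Read path: SELECT (including SELECT *) hides the column immediately.
    static boolean visibleToReads(String column, Set<String> droppedNames)
    {
        return !droppedNames.contains(column);
    }

    // Compaction path: cells written before the drop are discarded, which is
    // what eventually reclaims the space they used.
    static boolean keepDuringCompaction(String column, long cellTimestamp, DroppedColumn dropped)
    {
        return dropped == null
            || !dropped.name.equals(column)
            || cellTimestamp > dropped.droppedAtMicros;
    }
}
{code}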


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3677) NPE during HH delivery when gossip turned off on target

2012-02-15 Thread Edward Capriolo (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208600#comment-13208600
 ] 

Edward Capriolo commented on CASSANDRA-3677:


It seems wrong to abandon hints. Arguments such as (unless you wrote at 
ANY, in which case you've already lived dangerously.) are a slippery slope, 
and they say something about the contract of ANY.

According to Cassandra.thrift.
{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node.
{quote}

It does not say :

{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node, which probably won't get 
lost, unless .
{quote}


Maybe there is some other, harder-to-contrive scenario out there: RC3, write at ONE, 
two hints and one node failure, then a move also causes an issue with lost hints.

It is an edge case, but I think it is important. Since writes are idempotent, I 
would rather a hint gets delivered, causing an extra write, than have it get lost. 

1.0 made HH way more reliable; I would like to see Cassandra push that high 
standard and not have caveats about how ANY works.

 





 NPE during HH delivery when gossip turned off on target
 ---

 Key: CASSANDRA-3677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3677
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.7
Reporter: Radim Kolar
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3677-v1.patch, 3677.txt


 probably not important bug
 ERROR [OptionalTasks:1] 2011-12-27 21:44:25,342 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.NullPointerException
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.hash(NonBlockingHashMap.java:113)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:553)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:348)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfAbsent(NonBlockingHashMap.java:319)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashSet.add(NonBlockingHashSet.java:32)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleHintDelivery(HintedHandOffManager.java:371)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:356)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:84)
 at 
 org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:119)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:679)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-3677) NPE during HH delivery when gossip turned off on target

2012-02-15 Thread Edward Capriolo (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208600#comment-13208600
 ] 

Edward Capriolo edited comment on CASSANDRA-3677 at 2/15/12 5:32 PM:
-

It seems wrong to abandon hints. Arguments such as (unless you wrote at 
ANY, in which case you've already lived dangerously.) are a slippery slope, 
and they say something about the contract of ANY.

According to Cassandra.thrift.
{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node.
{quote}

It does not say :

{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node, which probably won't get 
lost, unless .
{quote}


Maybe there is some other, harder-to-contrive scenario out there: RF3, write at ONE, 
two hints and one node failure, then a move also causes an issue with lost hints.

It is an edge case, but I think it is important. Since writes are idempotent, I 
would rather a hint gets delivered, causing an extra write, than have it get lost. 

1.0 made HH way more reliable; I would like to see Cassandra push that high 
standard and not have caveats about how ANY works.

 





  was (Author: appodictic):
It seems wrong to abandon hints. Arguments like such as (unless you wrote 
at ANY, in which case you've already lived dangerously.) are  a slippery 
slope, and it says something about the contract of ANY.

According to Cassandra.thrift.
{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node.
{quote}

It does not say :

{quote}
 *   ANY  Ensure that the write has been written once somewhere, 
including possibly being hinted in a non-target node, which probably wont get 
lost, unless .
{quote}


Maybe there is some other harder to contrive scenario out there RC3, write ONE, 
two hints and one node failure then a move also causes an issue with lost hints.

It is an edge case, but I think it is important. Since writes are idempotent I 
would rather a hint gets delivered causing an extra write then it gets lost. 

1.0's made HH way more reliable, I would like to see Cassandra push that high 
standard and not have caveats associated around how ANY works.

 




  
 NPE during HH delivery when gossip turned off on target
 ---

 Key: CASSANDRA-3677
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3677
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.7
Reporter: Radim Kolar
Assignee: Brandon Williams
Priority: Trivial
 Fix For: 1.0.8

 Attachments: 3677-v1.patch, 3677.txt


 probably not important bug
 ERROR [OptionalTasks:1] 2011-12-27 21:44:25,342 AbstractCassandraDaemon.java 
 (line 138) Fatal exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.NullPointerException
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.hash(NonBlockingHashMap.java:113)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:553)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfMatch(NonBlockingHashMap.java:348)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashMap.putIfAbsent(NonBlockingHashMap.java:319)
 at 
 org.cliffc.high_scale_lib.NonBlockingHashSet.add(NonBlockingHashSet.java:32)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleHintDelivery(HintedHandOffManager.java:371)
 at 
 org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:356)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:84)
 at 
 org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:119)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:679)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 

git commit: Fix licenses and version for 1.1.0-beta1

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1 e46753381 -> 7f4693dab


Fix licenses and version for 1.1.0-beta1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f4693da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f4693da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f4693da

Branch: refs/heads/cassandra-1.1
Commit: 7f4693dab9a4331584109a0561eadffa02c39600
Parents: e467533
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:53:43 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:53:43 2012 +0100

--
 .rat-excludes  |2 +-
 CHANGES.txt|2 +-
 build.xml  |4 +-
 debian/changelog   |6 +++
 .../org/apache/cassandra/db/SnapshotCommand.java   |   21 +
 .../compaction/CompactionInterruptedException.java |   21 +
 .../apache/cassandra/db/filter/ExtendedFilter.java |   21 +
 .../org/apache/cassandra/thrift/RequestType.java   |   35 ---
 .../apache/cassandra/cql/jdbc/ClientUtilsTest.java |   21 +
 9 files changed, 122 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/.rat-excludes
--
diff --git a/.rat-excludes b/.rat-excludes
index 1ab279b..9dc6311 100644
--- a/.rat-excludes
+++ b/.rat-excludes
@@ -13,7 +13,6 @@ src/gen-java/**
 build/**
 lib/licenses/*.txt
 .settings/**
-contrib/pig/example-script.pig
 **/cassandra.yaml
 **/*.db
 redhat/apache-cassandra.spec
@@ -30,3 +29,4 @@ drivers/py/cql/cassandra/*
 doc/cql/CQL*
 build.properties.default
 test/data/legacy-sstables/**
+examples/pig/**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e3207ea..613c14e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,4 @@
-1.1-dev
+1.1-beta1
  * add nodetool rebuild_index (CASSANDRA-3583)
  * add nodetool rangekeysample (CASSANDRA-2917)
  * Fix streaming too much data during move operations (CASSANDRA-3639)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/build.xml
--
diff --git a/build.xml b/build.xml
index 5d3b32b..74ed5c7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,8 +25,8 @@
 <property name="debuglevel" value="source,lines,vars"/>
 
 <!-- default version and SCM information (we need the default SCM info as 
people may checkout with git-svn) -->
-<property name="base.version" value="1.1-dev"/>
-<property name="scm.default.path" 
value="cassandra/branches/cassandra-1.0.0"/>
+<property name="base.version" value="1.1.0-beta1"/>
+<property name="scm.default.path" 
value="cassandra/branches/cassandra-1.1"/>
 <property name="scm.default.connection" 
value="scm:svn:http://svn.apache.org/repos/asf/${scm.default.path}"/>
 <property name="scm.default.developerConnection" 
value="scm:svn:https://svn.apache.org/repos/asf/${scm.default.path}"/>
 <property name="scm.default.url" 
value="http://svn.apache.org/viewvc/${scm.default.path}"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 70578c8..8f0ca37 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (1.1.0~beta1) unstable; urgency=low
+
+  * New beta release 
+
+ -- Sylvain Lebresne slebre...@apache.org  Wed, 15 Feb 2012 16:49:11 +0100
+
 cassandra (1.0.7) unstable; urgency=low
 
   * New release

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/src/java/org/apache/cassandra/db/SnapshotCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SnapshotCommand.java 
b/src/java/org/apache/cassandra/db/SnapshotCommand.java
index 2b49874..dee0d8c 100644
--- a/src/java/org/apache/cassandra/db/SnapshotCommand.java
+++ b/src/java/org/apache/cassandra/db/SnapshotCommand.java
@@ -1,4 +1,25 @@
 package org.apache.cassandra.db;
+/*
+ * 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0

[jira] [Commented] (CASSANDRA-3917) System test failures in 1.1

2012-02-15 Thread Brandon Williams (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208604#comment-13208604
 ] 

Brandon Williams commented on CASSANDRA-3917:
-

+1

 System test failures in 1.1
 ---

 Key: CASSANDRA-3917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3917
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.1.0

 Attachments: 3917.txt


 On branch 1.1, I currently see two system test failures:
 {noformat}
 ==
 FAIL: 
 system.test_thrift_server.TestMutations.test_get_range_slice_after_deletion
 --
 Traceback (most recent call last):
   File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
 self.test(*self.arg)
   File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
 1937, in test_get_range_slice_after_deletion
 assert len(result[0].columns) == 1
 AssertionError
 {noformat}
 and
 {noformat}
 ==
 FAIL: Test that column ttled expires from KEYS index
 --
 Traceback (most recent call last):
   File /usr/lib/pymodules/python2.7/nose/case.py, line 187, in runTest
 self.test(*self.arg)
   File /home/mcmanus/Git/cassandra/test/system/test_thrift_server.py, line 
 1908, in test_index_scan_expiring
 assert len(result) == 1, result
 AssertionError: []
 --
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




git commit: Fix typo in SuperColumn.mostRecentLiveChangeAt()

2012-02-15 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.1 7f4693dab -> b0c0faeed


Fix typo in SuperColumn.mostRecentLiveChangeAt()

patch by slebresne; reviewed by driftx for CASSANDRA-3917


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b0c0faee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b0c0faee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b0c0faee

Branch: refs/heads/cassandra-1.1
Commit: b0c0faeed5f22261eea26f2a66dccc921d6a2b83
Parents: 7f4693d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 18:38:07 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 18:38:07 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0c0faee/src/java/org/apache/cassandra/db/SuperColumn.java
--
diff --git a/src/java/org/apache/cassandra/db/SuperColumn.java 
b/src/java/org/apache/cassandra/db/SuperColumn.java
index 8089d59..67d1d71 100644
--- a/src/java/org/apache/cassandra/db/SuperColumn.java
+++ b/src/java/org/apache/cassandra/db/SuperColumn.java
@@ -144,7 +144,7 @@ public class SuperColumn extends AbstractColumnContainer 
implements IColumn
 long max = Long.MIN_VALUE;
 for (IColumn column : getSubColumns())
 {
-if (column.isMarkedForDelete() && column.timestamp() > max)
+if (!column.isMarkedForDelete() && column.timestamp() > max)
 {
 max = column.timestamp();
 }



Git Push Summary

2012-02-15 Thread slebresne
Updated Tags:  refs/tags/1.1.0-beta1-tentative [created] b0c0faeed


[jira] [Created] (CASSANDRA-3920) tests for cqlsh

2012-02-15 Thread paul cannon (Created) (JIRA)
tests for cqlsh
---

 Key: CASSANDRA-3920
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3920
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
 Fix For: 1.0.8


Cqlsh has become big enough and tries to cover enough situations that it's time 
to start acting like a responsible adult and make this bugger some unit tests 
to guard against regressions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread paul cannon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208787#comment-13208787
 ] 

paul cannon commented on CASSANDRA-3874:


Changes made in my 3874-1.0 branch at github: 
https://github.com/thepaul/cassandra/tree/3874-1.0

Since the merge forward is not completely trivial, my 3874-1.1 branch has those 
commits already merged to 1.1: 
https://github.com/thepaul/cassandra/tree/3874-1.1 , and it will be an easy 
merge from that to your updated cassandra-1.1, whatever it is.

I'll attach patch versions too, in case you still prefer those.

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.
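
The general pattern being asked for (attempt the typed decode, and on failure
fall back to the raw bytes plus a visible marker) can be sketched without any
cqlsh internals; the names below are illustrative assumptions only:

{code}
public class SafeDeserializeSketch
{
    interface Deserializer { Object deserialize(byte[] raw); }

    // Render one value: never let a single bad cell abort the whole result
    // display; fall back to hex output with an explicit failure marker.
    static String render(byte[] raw, Deserializer expectedType)
    {
        try
        {
            return String.valueOf(expectedType.deserialize(raw));
        }
        catch (RuntimeException e)
        {
            StringBuilder hex = new StringBuilder("0x");
            for (byte b : raw)
                hex.append(String.format("%02x", b));
            return hex + " (failed to decode as expected type)";
        }
    }
}
{code}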

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread paul cannon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

paul cannon updated CASSANDRA-3874:
---

Attachment: 3874-1.1.patch.txt
3874-1.0.patch.txt

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8

 Attachments: 3874-1.0.patch.txt, 3874-1.1.patch.txt


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208790#comment-13208790
 ] 

Jonathan Ellis commented on CASSANDRA-3874:
---

Is this a big enough change that we should keep it to 1.1 only?

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8

 Attachments: 3874-1.0.patch.txt, 3874-1.1.patch.txt


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread paul cannon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208800#comment-13208800
 ] 

paul cannon commented on CASSANDRA-3874:


I think it's worth having in 1.0, especially if it's going to live for a few 
more releases. It's technically only a bugfix. But I don't mind much, whichever.

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8

 Attachments: 3874-1.0.patch.txt, 3874-1.1.patch.txt


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3903) Intermittent unexpected errors: possibly race condition around CQL parser?

2012-02-15 Thread paul cannon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

paul cannon updated CASSANDRA-3903:
---

Reviewer: thepaul
Assignee: Sylvain Lebresne

I can't reproduce any of the failure modes above with these patches applied, 
not even the "no viable alternative" one, so they must help.

In reviewing the patches, though, I don't think I understand how 
0001-Fix-CFS.all-thread-safety.patch helps anything. Isn't that array wholly 
made and manipulated on the stack?

 Intermittent unexpected errors: possibly race condition around CQL parser?
 --

 Key: CASSANDRA-3903
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3903
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Mac OS X 10.7 with Sun/Oracle Java 1.6.0_29
 Debian GNU/Linux 6.0.3 (squeeze) with Sun/Oracle Java 1.6.0_26
 several recent commits on cassandra-1.1 branch. at least:
 0183dc0b36e684082832de43a21b3dc0a9716d48, 
 3eefbac133c838db46faa6a91ba1f114192557ae, 
 9a842c7b317e6f1e6e156ccb531e34bb769c979f
 Running cassandra under ccm with one node
Reporter: paul cannon
Assignee: Sylvain Lebresne
 Attachments: 0001-Fix-CFS.all-thread-safety.patch, 
 0002-Fix-fixCFMaxId.patch


 When running multiple simultaneous instances of the test_cql.py piece of the 
 python-cql test suite, I can reliably reproduce intermittent and 
 unpredictable errors in the tests.
 The failures often occur at the point of keyspace creation during test setup, 
 with a CQL statement of the form:
 {code}
 CREATE KEYSPACE 'asnvzpot' WITH strategy_class = SimpleStrategy
 AND strategy_options:replication_factor = 1
 
 {code}
 An InvalidRequestException is returned to the cql driver, which re-raises it 
 as a cql.ProgrammingError. The message:
 {code}
 ProgrammingError: Bad Request: line 2:24 no viable alternative at input 
 'asnvzpot'
 {code}
 In a few cases, Cassandra threw an ArrayIndexOutOfBoundsException and this 
 traceback, closing the thrift connection:
 {code}
 ERROR [Thrift:244] 2012-02-10 15:51:46,815 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException: 7
 at 
 org.apache.cassandra.db.ColumnFamilyStore.all(ColumnFamilyStore.java:1520)
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateCfDef(ThriftValidation.java:634)
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:744)
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:898)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1245)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3458)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3446)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:680)
 {code}
 Sometimes I see an ArrayIndexOutOfBoundsException with no traceback:
 {code}
 ERROR [Thrift:858] 2012-02-13 12:04:01,537 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException
 {code}
 Sometimes I get this:
 {code}
 ERROR [MigrationStage:1] 2012-02-13 12:04:46,077 AbstractCassandraDaemon.java 
 (line 134) Fatal exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IllegalArgumentException: value already present: 1558
 at 
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
 at 
 com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:111)
 at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
 at com.google.common.collect.HashBiMap.put(HashBiMap.java:84)
 at org.apache.cassandra.config.Schema.load(Schema.java:392)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:284)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:209)
 at 
 org.apache.cassandra.db.migration.AddColumnFamily.applyImpl(AddColumnFamily.java:49)
 at 
 org.apache.cassandra.db.migration.Migration.apply(Migration.java:66)
 at 
 

[jira] [Updated] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread Jonathan Ellis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3874:
--

Reviewer: brandon.williams

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8

 Attachments: 3874-1.0.patch.txt, 3874-1.1.patch.txt


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.
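 A rough Java illustration of the fallback idea (cqlsh itself is Python); the 
 {{Deserializer}} interface and {{formatValue}} helper below are hypothetical 
 and not part of the attached patches:
 {code}
 import java.nio.ByteBuffer;

 public final class SafeFormatter
 {
     /** Hypothetical stand-in for a column validator / deserializer. */
     interface Deserializer
     {
         String deserialize(ByteBuffer raw);
     }

     /**
      * Try to deserialize; on failure fall back to a hex dump plus a marker
      * instead of aborting the whole result set.
      */
     static String formatValue(ByteBuffer raw, Deserializer type)
     {
         try
         {
             return type.deserialize(raw.duplicate());
         }
         catch (Exception e)
         {
             return "0x" + toHex(raw.duplicate()) + " (failed to deserialize)";
         }
     }

     private static String toHex(ByteBuffer bytes)
     {
         StringBuilder sb = new StringBuilder();
         while (bytes.hasRemaining())
             sb.append(String.format("%02x", bytes.get() & 0xff));
         return sb.toString();
     }
 }
 {code}
 The point is simply that a per-value try/catch lets the rest of the result set 
 render normally while the bad value is shown as raw bytes with a marker.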

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[1/5] git commit: Merge branch 'cassandra-1.1' into trunk

2012-02-15 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 b0c0faeed - 33199c6ca
  refs/heads/trunk 875e05aa9 - edd97d4e1


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edd97d4e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edd97d4e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edd97d4e

Branch: refs/heads/trunk
Commit: edd97d4e1b78376453cd04ab4062cf61a24d9c4f
Parents: 875e05a 33199c6
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 15 17:34:20 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 15 17:34:20 2012 -0600

--
 .rat-excludes  |2 +-
 CHANGES.txt|2 +-
 build.xml  |4 +-
 debian/changelog   |6 +++
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 ++---
 .../org/apache/cassandra/db/SnapshotCommand.java   |   21 +
 src/java/org/apache/cassandra/db/SuperColumn.java  |2 +-
 .../compaction/CompactionInterruptedException.java |   21 +
 .../apache/cassandra/db/filter/ExtendedFilter.java |   21 +
 .../org/apache/cassandra/thrift/RequestType.java   |   35 ---
 .../apache/cassandra/cql/jdbc/ClientUtilsTest.java |   21 +
 11 files changed, 127 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd97d4e/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[2/5] git commit: use metadata.cfId instead of re-looking it up for each cache update

2012-02-15 Thread jbellis
use metadata.cfId instead of re-looking it up for each cache update


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33199c6c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33199c6c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33199c6c

Branch: refs/heads/cassandra-1.1
Commit: 33199c6ca21e7cceb358a971bfe047af22354768
Parents: b0c0fae
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 15 17:33:20 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 15 17:34:10 2012 -0600

--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 --
 1 files changed, 4 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33199c6c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index e4e3204..92ae676 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -694,11 +694,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public void updateRowCache(DecoratedKey key, ColumnFamily columnFamily)
 {
-Integer cfId = Schema.instance.getId(table.name, this.columnFamily);
-if (cfId == null)
+if (metadata.cfId == null)
 return; // secondary index
 
-RowCacheKey cacheKey = new RowCacheKey(cfId, key);
+RowCacheKey cacheKey = new RowCacheKey(metadata.cfId, key);
 
 if (CacheService.instance.rowCache.isPutCopying())
 {
@@ -1480,11 +1479,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public ColumnFamily getRawCachedRow(DecoratedKey key)
 {
-Integer cfId = Schema.instance.getId(table.name, this.columnFamily);
-if (cfId == null)
+if (metadata.cfId == null)
 return null; // secondary index
 
-return getRawCachedRow(new RowCacheKey(cfId, key));
+return getRawCachedRow(new RowCacheKey(metadata.cfId, key));
 }
 
 public ColumnFamily getRawCachedRow(RowCacheKey key)



[3/5] git commit: use metadata.cfId instead of re-looking it up for each cache update

2012-02-15 Thread jbellis
use metadata.cfId instead of re-looking it up for each cache update


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33199c6c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33199c6c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33199c6c

Branch: refs/heads/trunk
Commit: 33199c6ca21e7cceb358a971bfe047af22354768
Parents: b0c0fae
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 15 17:33:20 2012 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 15 17:34:10 2012 -0600

--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   10 --
 1 files changed, 4 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33199c6c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index e4e3204..92ae676 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -694,11 +694,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public void updateRowCache(DecoratedKey key, ColumnFamily columnFamily)
 {
-Integer cfId = Schema.instance.getId(table.name, this.columnFamily);
-if (cfId == null)
+if (metadata.cfId == null)
 return; // secondary index
 
-RowCacheKey cacheKey = new RowCacheKey(cfId, key);
+RowCacheKey cacheKey = new RowCacheKey(metadata.cfId, key);
 
 if (CacheService.instance.rowCache.isPutCopying())
 {
@@ -1480,11 +1479,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public ColumnFamily getRawCachedRow(DecoratedKey key)
 {
-Integer cfId = Schema.instance.getId(table.name, this.columnFamily);
-if (cfId == null)
+if (metadata.cfId == null)
 return null; // secondary index
 
-return getRawCachedRow(new RowCacheKey(cfId, key));
+return getRawCachedRow(new RowCacheKey(metadata.cfId, key));
 }
 
 public ColumnFamily getRawCachedRow(RowCacheKey key)



[4/5] git commit: Fix typo in SuperColumn.mostRecentLiveChangeAt()

2012-02-15 Thread jbellis
Fix typo in SuperColumn.mostRecentLiveChangeAt()

patch by slebresne; reviewed by driftx for CASSANDRA-3917


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b0c0faee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b0c0faee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b0c0faee

Branch: refs/heads/trunk
Commit: b0c0faeed5f22261eea26f2a66dccc921d6a2b83
Parents: 7f4693d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 18:38:07 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 18:38:07 2012 +0100

--
 src/java/org/apache/cassandra/db/SuperColumn.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0c0faee/src/java/org/apache/cassandra/db/SuperColumn.java
--
diff --git a/src/java/org/apache/cassandra/db/SuperColumn.java 
b/src/java/org/apache/cassandra/db/SuperColumn.java
index 8089d59..67d1d71 100644
--- a/src/java/org/apache/cassandra/db/SuperColumn.java
+++ b/src/java/org/apache/cassandra/db/SuperColumn.java
@@ -144,7 +144,7 @@ public class SuperColumn extends AbstractColumnContainer 
implements IColumn
 long max = Long.MIN_VALUE;
 for (IColumn column : getSubColumns())
 {
-if (column.isMarkedForDelete() && column.timestamp() > max)
+if (!column.isMarkedForDelete() && column.timestamp() > max)
 {
 max = column.timestamp();
 }



[5/5] git commit: Fix licenses and version for 1.1.0-beta1

2012-02-15 Thread jbellis
Fix licenses and version for 1.1.0-beta1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f4693da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f4693da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f4693da

Branch: refs/heads/trunk
Commit: 7f4693dab9a4331584109a0561eadffa02c39600
Parents: e467533
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 15 16:53:43 2012 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 15 16:53:43 2012 +0100

--
 .rat-excludes  |2 +-
 CHANGES.txt|2 +-
 build.xml  |4 +-
 debian/changelog   |6 +++
 .../org/apache/cassandra/db/SnapshotCommand.java   |   21 +
 .../compaction/CompactionInterruptedException.java |   21 +
 .../apache/cassandra/db/filter/ExtendedFilter.java |   21 +
 .../org/apache/cassandra/thrift/RequestType.java   |   35 ---
 .../apache/cassandra/cql/jdbc/ClientUtilsTest.java |   21 +
 9 files changed, 122 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/.rat-excludes
--
diff --git a/.rat-excludes b/.rat-excludes
index 1ab279b..9dc6311 100644
--- a/.rat-excludes
+++ b/.rat-excludes
@@ -13,7 +13,6 @@ src/gen-java/**
 build/**
 lib/licenses/*.txt
 .settings/**
-contrib/pig/example-script.pig
 **/cassandra.yaml
 **/*.db
 redhat/apache-cassandra.spec
@@ -30,3 +29,4 @@ drivers/py/cql/cassandra/*
 doc/cql/CQL*
 build.properties.default
 test/data/legacy-sstables/**
+examples/pig/**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e3207ea..613c14e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,4 @@
-1.1-dev
+1.1-beta1
  * add nodetool rebuild_index (CASSANDRA-3583)
  * add nodetool rangekeysample (CASSANDRA-2917)
  * Fix streaming too much data during move operations (CASSANDRA-3639)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/build.xml
--
diff --git a/build.xml b/build.xml
index 5d3b32b..74ed5c7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,8 +25,8 @@
 <property name="debuglevel" value="source,lines,vars"/>
 
 <!-- default version and SCM information (we need the default SCM info as people may checkout with git-svn) -->
-<property name="base.version" value="1.1-dev"/>
-<property name="scm.default.path" value="cassandra/branches/cassandra-1.0.0"/>
+<property name="base.version" value="1.1.0-beta1"/>
+<property name="scm.default.path" value="cassandra/branches/cassandra-1.1"/>
 <property name="scm.default.connection" value="scm:svn:http://svn.apache.org/repos/asf/${scm.default.path}"/>
 <property name="scm.default.developerConnection" value="scm:svn:https://svn.apache.org/repos/asf/${scm.default.path}"/>
 <property name="scm.default.url" value="http://svn.apache.org/viewvc/${scm.default.path}"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 70578c8..8f0ca37 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (1.1.0~beta1) unstable; urgency=low
+
+  * New beta release 
+
+ -- Sylvain Lebresne slebre...@apache.org  Wed, 15 Feb 2012 16:49:11 +0100
+
 cassandra (1.0.7) unstable; urgency=low
 
   * New release

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f4693da/src/java/org/apache/cassandra/db/SnapshotCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SnapshotCommand.java 
b/src/java/org/apache/cassandra/db/SnapshotCommand.java
index 2b49874..dee0d8c 100644
--- a/src/java/org/apache/cassandra/db/SnapshotCommand.java
+++ b/src/java/org/apache/cassandra/db/SnapshotCommand.java
@@ -1,4 +1,25 @@
 package org.apache.cassandra.db;
+/*
+ * 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * 

[jira] [Commented] (CASSANDRA-3874) cqlsh: handle situation where data can't be deserialized as expected

2012-02-15 Thread paul cannon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208983#comment-13208983
 ] 

paul cannon commented on CASSANDRA-3874:


updated the 3874-1.1 branch just now to add the 1.0.9 version of the python-cql 
library, since that is required for this fix to work. 1.0 doesn't have an 
embedded python-cql lib, so it's fine.

 cqlsh: handle situation where data can't be deserialized as expected
 

 Key: CASSANDRA-3874
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3874
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: paul cannon
Assignee: paul cannon
Priority: Minor
  Labels: cqlsh
 Fix For: 1.0.8

 Attachments: 3874-1.0.patch.txt, 3874-1.1.patch.txt


 When cqlsh tries to deserialize data which doesn't match the expected type 
 (either because the validation type for the column/key alias was changed, or 
 ASSUME has been used), it just fails completely and in most cases won't show 
 any results at all. When there is only one misbehaving value out of a large 
 number, this can be frustrating.
 cqlsh should either show some failure marker in place of the bad value, or 
 simply show the bytes along with some indicator of a failed deserialization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3921) Compaction doesn't clear out expired tombstones from SerializingCache

2012-02-15 Thread Jonathan Ellis (Created) (JIRA)
Compaction doesn't clear out expired tombstones from SerializingCache
-

 Key: CASSANDRA-3921
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0


Compaction calls removeDeletedInCache, which looks like this:

{code}
    public void removeDeletedInCache(DecoratedKey key)
    {
        ColumnFamily cachedRow = cfs.getRawCachedRow(key);
        if (cachedRow != null)
            ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
    }
{code}

For the SerializingCache, this means it calls removeDeleted on a temporary, 
deserialized copy, which leaves the cache contents unaffected.
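A minimal standalone sketch of why this is a no-op for a serializing cache; the 
{{SerializingCacheSketch}} class below is hypothetical and not the real 
implementation. The cache holds serialized bytes, every read deserializes a 
fresh copy, so removing tombstones from that copy never reaches the cached 
bytes:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SerializingCacheSketch
{
    // "serialized" row contents keyed by row key
    private final Map<String, String> serialized = new HashMap<>();

    void put(String key, List<String> columns)
    {
        serialized.put(key, String.join(",", columns));
    }

    // deserializes into a brand new object on every read
    List<String> get(String key)
    {
        String raw = serialized.get(key);
        return raw == null ? null : new ArrayList<>(Arrays.asList(raw.split(",")));
    }

    public static void main(String[] args)
    {
        SerializingCacheSketch cache = new SerializingCacheSketch();
        cache.put("row1", Arrays.asList("col1", "tombstone"));

        List<String> copy = cache.get("row1");
        copy.remove("tombstone");               // analogous to removeDeleted on the copy

        System.out.println(cache.get("row1"));  // still [col1, tombstone]: cache unaffected
    }
}
{code}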

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3921) Compaction doesn't clear out expired tombstones from SerializingCache

2012-02-15 Thread Jonathan Ellis (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208986#comment-13208986
 ] 

Jonathan Ellis commented on CASSANDRA-3921:
---

I'm not sure what the right fix is.  We could invalidate the row, but that 
means a big compaction could wipe out the row cache entirely, which is worse.  
Should we just wontfix this?

 Compaction doesn't clear out expired tombstones from SerializingCache
 -

 Key: CASSANDRA-3921
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0


 Compaction calls removeDeletedInCache, which looks like this:
 {code}
     public void removeDeletedInCache(DecoratedKey key)
     {
         ColumnFamily cachedRow = cfs.getRawCachedRow(key);
         if (cachedRow != null)
             ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
     }
 {code}
 For the SerializingCache, this means it calls removeDeleted on a temporary, 
 deserialized copy, which leaves the cache contents unaffected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3921) Compaction doesn't clear out expired tombstones from SerializingCache

2012-02-15 Thread Peter Schuller (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209048#comment-13209048
 ] 

Peter Schuller commented on CASSANDRA-3921:
---

With a typical workload, not evicting it would just fail to optimize - you 
have to wait for the data to get dropped out of cache.

But if you do have a use case with a lot of requests for non-existent data, 
this could silently render the row cache almost useless in a way that is very 
hard to detect. Suppose you have rows with a hot set where you want caching, 
but the data is made up of churning columns (insertions/deletions; the set of 
columns constantly changes). But the damage is limited to the ratio of 
requests for empty rows vs. other requests, and is less prominent the bigger 
the average requested row is in relation to the size of a tombstone.

The above should be true if the rows in cache only contain the tombstones.

But consider a use case where you have hot rows that get constantly updated 
with insertions/deletions (let's say each has 1-5 columns at any point in time, 
with columns constantly churned). Now you have something which is hot in 
cache due to reads (for the rows in the hot set), data being over-written, and 
the per-row average size increasing over time. Cache locality becomes worse and 
worse and no one can see why, and a restart magically fixes it. This should 
hold true as long as row cache entries aren't entirely re-read on writes, which 
would in part defeat the purpose of caching (but I realize now I'm not sure 
what we do here).


 Compaction doesn't clear out expired tombstones from SerializingCache
 -

 Key: CASSANDRA-3921
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.0


 Compaction calls removeDeletedInCache, which looks like this:
 {code}
     public void removeDeletedInCache(DecoratedKey key)
     {
         ColumnFamily cachedRow = cfs.getRawCachedRow(key);
         if (cachedRow != null)
             ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
     }
 {code}
 For the SerializingCache, this means it calls removeDeleted on a temporary, 
 deserialized copy, which leaves the cache contents unaffected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (CASSANDRA-3912) support incremental repair controlled by external agent

2012-02-15 Thread Peter Schuller (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209117#comment-13209117
 ] 

Peter Schuller edited comment on CASSANDRA-3912 at 2/16/12 5:22 AM:


Agreed.

The good news is that the actual commands necessary ({{getprimaryrange}} and 
{{repairrange}}) are easy patches.

The bad news is that it turns out the AntiEntropyService does not support 
arbitrary ranges.

Attaching {{CASSANDRA\-3912\-v2\-001\-add\-nodetool\-commands.txt}} and 
{{CASSANDRA\-3912\-v2\-002\-fix\-antientropyservice.txt}}.

Had it not been for AES I'd want to propose we commit this to 1.1 since it 
would be additive only, but given the AES fix I don't know... I guess probably 
not?

It's a shame because I think it would be a boon to users with large nodes 
struggling with repair (despite the fact that, as you point out, each repair 
implies a flush).



  was (Author: scode):
Agreed.

The good news is that the actual commands necessary ({{getprimaryrange}} and 
{{repairrange}}) are easy patches.

The bad news is that it turns out the AntiEntropyService does not support 
arbitrary ranges.

Attaching {{CASSANDRA\-3912\-v2\-001\-add\-nodetool\-commands.txt}} and 
{{CASSANDRA-3912-v2-002-fix-antientropyservice.txt}}.

Had it not been for AES I'd want to propose we commit this to 1.1 since it 
would be additive only, but given the AES fix I don't know... I guess probably 
not?

It's a shame because I think it would be a boon to users with large nodes 
struggling with repair (despite the fact that, as you point out, each repair 
implies a flush).


  
 support incremental repair controlled by external agent
 ---

 Key: CASSANDRA-3912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3912
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
 Attachments: CASSANDRA-3912-trunk-v1.txt, 
 CASSANDRA-3912-v2-001-add-nodetool-commands.txt, 
 CASSANDRA-3912-v2-002-fix-antientropyservice.txt


 As a poor man's precursor to CASSANDRA-2699, exposing the ability to repair 
 small parts of a range is extremely useful because it allows you (with external 
 scripting logic) to slowly repair a node's content over time. Besides 
 avoiding the bulkiness of complete repairs, it means that you can safely do 
 repairs even if you absolutely cannot afford e.g. disk space spikes (see 
 CASSANDRA-2699 for what the issues are).
 Attaching a patch that exposes a repairincremental command to nodetool, 
 where you specify a step and the number of total steps. Incrementally 
 performing a repair in 100 steps, for example, would be done by:
 {code}
 nodetool repairincremental 0 100
 nodetool repairincremental 1 100
 ...
 nodetool repairincremental 99 100
 {code}
 An external script can be used to keep track of what has been repaired and 
 when. This should (1) allow incremental repair to happen now/soon, and 
 (2) allow experimentation and evaluation of an implementation of 
 CASSANDRA-2699, which I still think is a good idea. This patch does nothing to 
 help the average deployment, but at least makes incremental repair possible 
 given sufficient effort spent on external scripting.
 The big no-no about the patch is that it is entirely specific to 
 RandomPartitioner and BigIntegerToken. If someone can suggest a way to 
 implement this command generically using the Range/Token abstractions, I'd be 
 happy to hear suggestions.
 An alternative would be to provide a nodetool command that allows you to 
 simply specify the specific token ranges on the command line. It makes using 
 it a bit more difficult, but would mean that it works for any partitioner and 
 token type.
 Unless someone can suggest a better way to do this, I think I'll provide a 
 patch that does this. I'm still leaning towards supporting the simple step N 
 out of M form though.
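 A rough sketch of the step-to-range arithmetic described above, assuming a 
 RandomPartitioner token space of [0, 2^127) and ignoring wrap-around; the 
 {{IncrementalRangeSketch}} class is hypothetical and is not the attached patch:
 {code}
 import java.math.BigInteger;

 // Not the attached patch: a rough illustration of how "step N out of M" could
 // map onto equal-width token sub-ranges for RandomPartitioner.
 public final class IncrementalRangeSketch
 {
     // assume RandomPartitioner tokens fall in [0, 2^127)
     private static final BigInteger TOKEN_SPACE = BigInteger.valueOf(2).pow(127);

     /** Returns {start, end}; the repair for this step would cover (start, end]. */
     static BigInteger[] subRange(int step, int totalSteps)
     {
         if (step < 0 || step >= totalSteps)
             throw new IllegalArgumentException("step must be in [0, totalSteps)");
         BigInteger width = TOKEN_SPACE.divide(BigInteger.valueOf(totalSteps));
         BigInteger start = width.multiply(BigInteger.valueOf(step));
         // last step absorbs any rounding remainder
         BigInteger end = (step == totalSteps - 1) ? TOKEN_SPACE : start.add(width);
         return new BigInteger[]{ start, end };
     }

     public static void main(String[] args)
     {
         for (int step : new int[]{ 0, 1, 99 })
         {
             BigInteger[] r = subRange(step, 100);
             System.out.println("step " + step + "/100: (" + r[0] + ", " + r[1] + "]");
         }
     }
 }
 {code}
 Dividing the node's actual primary range instead of the full ring would follow 
 the same arithmetic with different endpoints.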

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3912) support incremental repair controlled by external agent

2012-02-15 Thread Peter Schuller (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209118#comment-13209118
 ] 

Peter Schuller commented on CASSANDRA-3912:
---

For the record the second patch sneaks in a renaming of AES.getNeighbors() to 
AES.getSources(). Since it does filtering, it is in fact not returning all 
neighbors, as the old name implied (well, to me anyway).

 support incremental repair controlled by external agent
 ---

 Key: CASSANDRA-3912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3912
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
 Attachments: CASSANDRA-3912-trunk-v1.txt, 
 CASSANDRA-3912-v2-001-add-nodetool-commands.txt, 
 CASSANDRA-3912-v2-002-fix-antientropyservice.txt


 As a poor man's precursor to CASSANDRA-2699, exposing the ability to repair 
 small parts of a range is extremely useful because it allows you (with external 
 scripting logic) to slowly repair a node's content over time. Besides 
 avoiding the bulkiness of complete repairs, it means that you can safely do 
 repairs even if you absolutely cannot afford e.g. disk space spikes (see 
 CASSANDRA-2699 for what the issues are).
 Attaching a patch that exposes a repairincremental command to nodetool, 
 where you specify a step and the number of total steps. Incrementally 
 performing a repair in 100 steps, for example, would be done by:
 {code}
 nodetool repairincremental 0 100
 nodetool repairincremental 1 100
 ...
 nodetool repairincremental 99 100
 {code}
 An external script can be used to keep track of what has been repaired and 
 when. This should (1) allow incremental repair to happen now/soon, and 
 (2) allow experimentation and evaluation of an implementation of 
 CASSANDRA-2699, which I still think is a good idea. This patch does nothing to 
 help the average deployment, but at least makes incremental repair possible 
 given sufficient effort spent on external scripting.
 The big no-no about the patch is that it is entirely specific to 
 RandomPartitioner and BigIntegerToken. If someone can suggest a way to 
 implement this command generically using the Range/Token abstractions, I'd be 
 happy to hear suggestions.
 An alternative would be to provide a nodetool command that allows you to 
 simply specify the specific token ranges on the command line. It makes using 
 it a bit more difficult, but would mean that it works for any partitioner and 
 token type.
 Unless someone can suggest a better way to do this, I think I'll provide a 
 patch that does this. I'm still leaning towards supporting the simple step N 
 out of M form though.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3922) streaming from all (not one) neighbors during rebuild/bootstrap

2012-02-15 Thread Peter Schuller (Created) (JIRA)
streaming from all (not one) neighbors during rebuild/bootstrap
---

 Key: CASSANDRA-3922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3922
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Blocker
 Attachments: CASSANDRA-3922-1.1.txt

The last round of changes that happened in CASSANDRA-3483 before it went in 
actually changed behavior - we now stream from *ALL* neighbors that have a 
range, rather than just one. This leads to data size explosion.

Attaching patch to revert to intended behavior.
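A toy sketch of the intended behavior (hypothetical names and structure, not 
the actual RangeStreamer code): each range should be requested from a single 
chosen source rather than from every candidate replica that owns it.

{code}
import java.util.List;
import java.util.Map;

final class SourceSelectionSketch
{
    static void requestRanges(Map<String, List<String>> rangeToCandidates)
    {
        for (Map.Entry<String, List<String>> e : rangeToCandidates.entrySet())
        {
            List<String> candidates = e.getValue();
            if (candidates.isEmpty())
                continue;
            // pick one source (e.g. the preferred/closest endpoint) instead of all
            String source = candidates.get(0);
            System.out.println("stream " + e.getKey() + " from " + source
                               + ", skipping " + (candidates.size() - 1) + " other replicas");
        }
    }

    public static void main(String[] args)
    {
        requestRanges(Map.of("(0,100]", List.of("10.0.0.1", "10.0.0.2", "10.0.0.3")));
    }
}
{code}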

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3922) streaming from all (not one) neighbors during rebuild/bootstrap

2012-02-15 Thread Peter Schuller (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schuller updated CASSANDRA-3922:
--

Attachment: CASSANDRA-3922-1.1.txt

 streaming from all (not one) neighbors during rebuild/bootstrap
 ---

 Key: CASSANDRA-3922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3922
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Blocker
 Fix For: 1.1.0

 Attachments: CASSANDRA-3922-1.1.txt


 The last round of changes that happened in CASSANDRA-3483 before it went in 
 actually changed behavior - we now stream from *ALL* neighbors that have a 
 range, rather than just one. This leads to data size explosion.
 Attaching patch to revert to intended behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3922) streaming from all (not one) neighbors during rebuild/bootstrap

2012-02-15 Thread Peter Schuller (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schuller updated CASSANDRA-3922:
--

Fix Version/s: 1.1.0

 streaming from all (not one) neighbors during rebuild/bootstrap
 ---

 Key: CASSANDRA-3922
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3922
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Blocker
 Fix For: 1.1.0

 Attachments: CASSANDRA-3922-1.1.txt


 The last round of changes that happened in CASSANDRA-3483 before it went in 
 actually changed behavior - we now stream from *ALL* neighbors that have a 
 range, rather than just one. This leads to data size explosion.
 Attaching patch to revert to intended behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3923) Cassandra Nodetool Cleanup causing sstable corruption.

2012-02-15 Thread Samarth Gahire (Created) (JIRA)
Cassandra Nodetool Cleanup causing sstable corruption.
--

 Key: CASSANDRA-3923
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3923
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.8.2
Reporter: Samarth Gahire
 Fix For: 1.1.0


We recently doubled the size of our cluster.
After that, we ran cleanup on the old machines in the cluster.
We then loaded data, which triggered a minor compaction that did not complete 
due to corrupt sstables.
This happened only on the machines where we ran cleanup.

As cleanup is unavoidable after node addition, what is the way to avoid this 
problem?
Is this issue fixed in newer versions of Cassandra (we are using 
cassandra-0.8.2)?

Or:

Are there any steps or procedures that would avoid the need for cleanup after 
node addition?
The same issue is reported here: CASSANDRA-3065

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-3903) Intermittent unexpected errors: possibly race condition around CQL parser?

2012-02-15 Thread Sylvain Lebresne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13209195#comment-13209195
 ] 

Sylvain Lebresne commented on CASSANDRA-3903:
-

bq. Isn't that array wholly made and manipulated on the stack?

It is; the thread-safety issue I'm talking about is that a new keyspace can 
be added while CFS.all() is running. If so, there can be, say, 6 keyspaces when 
the array is created, but on the next line, when Table.all() is called, it 
could return 7 entries.
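
A standalone illustration of that race (the {{AllSnapshotRace}} class below is 
hypothetical, not the actual CFS.all() code): the array is sized from one 
snapshot of the collection, but the collection can grow before it is copied, 
which is exactly the kind of mismatch that produces the 
ArrayIndexOutOfBoundsException above.

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class AllSnapshotRace
{
    static final List<String> tables = new CopyOnWriteArrayList<>(List.of("ks1", "ks2"));

    static String[] all()
    {
        String[] result = new String[tables.size()];   // e.g. sized for 6 keyspaces
        int i = 0;
        for (String t : tables)                        // iteration may now see 7 entries ...
            result[i++] = t;                           // ... -> ArrayIndexOutOfBoundsException
        return result;
    }

    public static void main(String[] args) throws Exception
    {
        Thread adder = new Thread(() -> tables.add("ks3"));   // concurrent CREATE KEYSPACE
        adder.start();
        all();                                                // may or may not throw here
        adder.join();
    }
}
{code}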

 Intermittent unexpected errors: possibly race condition around CQL parser?
 --

 Key: CASSANDRA-3903
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3903
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Mac OS X 10.7 with Sun/Oracle Java 1.6.0_29
 Debian GNU/Linux 6.0.3 (squeeze) with Sun/Oracle Java 1.6.0_26
 several recent commits on cassandra-1.1 branch. at least:
 0183dc0b36e684082832de43a21b3dc0a9716d48, 
 3eefbac133c838db46faa6a91ba1f114192557ae, 
 9a842c7b317e6f1e6e156ccb531e34bb769c979f
 Running cassandra under ccm with one node
Reporter: paul cannon
Assignee: Sylvain Lebresne
 Attachments: 0001-Fix-CFS.all-thread-safety.patch, 
 0002-Fix-fixCFMaxId.patch


 When running multiple simultaneous instances of the test_cql.py piece of the 
 python-cql test suite, I can reliably reproduce intermittent and 
 unpredictable errors in the tests.
 The failures often occur at the point of keyspace creation during test setup, 
 with a CQL statement of the form:
 {code}
 CREATE KEYSPACE 'asnvzpot' WITH strategy_class = SimpleStrategy
 AND strategy_options:replication_factor = 1
 
 {code}
 An InvalidRequestException is returned to the cql driver, which re-raises it 
 as a cql.ProgrammingError. The message:
 {code}
 ProgrammingError: Bad Request: line 2:24 no viable alternative at input 
 'asnvzpot'
 {code}
 In a few cases, Cassandra threw an ArrayIndexOutOfBoundsException and this 
 traceback, closing the thrift connection:
 {code}
 ERROR [Thrift:244] 2012-02-10 15:51:46,815 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException: 7
 at 
 org.apache.cassandra.db.ColumnFamilyStore.all(ColumnFamilyStore.java:1520)
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateCfDef(ThriftValidation.java:634)
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:744)
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:898)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1245)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3458)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3446)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:680)
 {code}
 Sometimes I see an ArrayIndexOutOfBoundsException with no traceback:
 {code}
 ERROR [Thrift:858] 2012-02-13 12:04:01,537 CustomTThreadPoolServer.java (line 
 205) Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException
 {code}
 Sometimes I get this:
 {code}
 ERROR [MigrationStage:1] 2012-02-13 12:04:46,077 AbstractCassandraDaemon.java 
 (line 134) Fatal exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IllegalArgumentException: value already present: 1558
 at 
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
 at 
 com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:111)
 at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
 at com.google.common.collect.HashBiMap.put(HashBiMap.java:84)
 at org.apache.cassandra.config.Schema.load(Schema.java:392)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:284)
 at 
 org.apache.cassandra.db.migration.MigrationHelper.addColumnFamily(MigrationHelper.java:209)
 at 
 org.apache.cassandra.db.migration.AddColumnFamily.applyImpl(AddColumnFamily.java:49)
 at 
 org.apache.cassandra.db.migration.Migration.apply(Migration.java:66)