[jira] [Updated] (CASSANDRA-7237) Optimize batchlog manager to avoid full scans

2014-05-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7237:
-

Fix Version/s: (was: 2.1.0)
   2.1.1

 Optimize batchlog manager to avoid full scans
 -

 Key: CASSANDRA-7237
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7237
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1.1


 Now that we use time-UUIDs for batchlog ids, and given that with the local 
 strategy the partitions are ordered by time, we can optimize scanning by 
 limiting the replay range: take the last replayed batch's id as the beginning 
 of the range and uuid(now + timeout) as its end.
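 A minimal sketch of the intended range computation (the field and helper names below are hypothetical, not the actual patch):

{code}
// Sketch only: derive the batchlog replay scan bounds from time-UUIDs.
import java.util.UUID;

final class BatchlogReplayRange
{
    // offset between the UUID epoch (1582-10-15) and the Unix epoch, in milliseconds
    private static final long UUID_EPOCH_OFFSET_MILLIS = 12219292800000L;

    static volatile UUID lastReplayedId; // id of the last batch already replayed

    // [start, end] bounds for the next scan: resume after the last replayed batch
    // and stop at uuid(now + timeout), as described above.
    static UUID[] nextReplayRange(long nowMillis, long timeoutMillis)
    {
        UUID start = lastReplayedId; // null means "scan from the beginning"
        UUID end = upperBoundTimeUUID(nowMillis + timeoutMillis);
        return new UUID[]{ start, end };
    }

    // an upper-bound version-1 UUID for the given wall-clock millisecond
    // (sketch; ignores sub-millisecond precision)
    static UUID upperBoundTimeUUID(long unixMillis)
    {
        long ts = (unixMillis + UUID_EPOCH_OFFSET_MILLIS) * 10000L; // 100ns units
        long msb = (ts << 32)                  // time_low
                 | ((ts >>> 16) & 0xFFFF0000L) // time_mid
                 | ((ts >>> 48) & 0x0FFFL)     // time_hi
                 | 0x1000L;                    // version 1
        long lsb = 0xBFFFFFFFFFFFFFFFL;        // variant 10, all other bits set
        return new UUID(msb, lsb);
    }
}
{code}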



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7241) Pig test fails on 2.1 branch

2014-05-15 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-7241:


Assignee: Sylvain Lebresne

 Pig test fails on 2.1 branch
 

 Key: CASSANDRA-7241
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7241
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Liu
Assignee: Sylvain Lebresne

 Running ant pig-test on the cassandra-2.1 branch fails many tests. I traced it 
 a little and found that the Pig test failures start from commit 
 https://github.com/apache/cassandra/commit/362cc05352ec67e707e0ac790732e96a15e63f6b.
 It looks like the storage changes break the Pig tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-05-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6283:
---

Reproduced In: 2.0.3, 2.0.2, 2.1.0  (was: 2.0.2, 2.0.3, 2.1.0)
   Labels: Windows compaction  (was: compaction)

 Windows 7 data files kept open / can't be deleted after compaction.
 ---

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: Windows, compaction
 Fix For: 3.0

 Attachments: 6283_StreamWriter_patch.txt, leakdetect.patch, 
 neighbor-log.zip, root-log.zip, screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (the Win7 deletion 
 problem) doesn't help on Windows 7 with Cassandra 2.0.2. Even the 2.1 snapshot 
 doesn't run. The cause: open file handles seem to be leaked and never closed 
 properly. Windows 7 complains that another process is still using the file 
 (but it is obviously Cassandra). Only a restart of the server lets the files 
 be deleted. After heavy use (changes) of tables there are about 24K files in 
 the data folder (instead of 35 after every restart) and Cassandra crashes. I 
 experimented and found that a finalizer fixes the problem, so after GC the 
 files are deleted (not optimal, but working). It has now run for 2 days 
 continuously without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/6] git commit: Make StreamSession more thread safe

2014-05-15 Thread yukim
Make StreamSession more thread safe

patch by sankalp kohli; reviewed by yukim for CASSANDRA-7092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7484bd41
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7484bd41
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7484bd41

Branch: refs/heads/trunk
Commit: 7484bd41918cc042642753f1ad1eaf468c6fc3af
Parents: d48c797
Author: Yuki Morishita yu...@apache.org
Authored: Fri May 9 10:40:50 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri May 9 10:40:50 2014 -0500

--
 .../org/apache/cassandra/streaming/StreamSession.java   | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7484bd41/src/java/org/apache/cassandra/streaming/StreamSession.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamSession.java 
b/src/java/org/apache/cassandra/streaming/StreamSession.java
index 0ba41fb..30e3fa2 100644
--- a/src/java/org/apache/cassandra/streaming/StreamSession.java
+++ b/src/java/org/apache/cassandra/streaming/StreamSession.java
@@ -20,11 +20,9 @@ package org.apache.cassandra.streaming;
 import java.io.IOException;
 import java.net.InetAddress;
 import java.util.*;
-import java.util.concurrent.Future;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;
 
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Lists;
+import com.google.common.collect.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -123,11 +121,11 @@ public class StreamSession implements 
IEndpointStateChangeSubscriber, IFailureDe
 private StreamResultFuture streamResult;
 
 // stream requests to send to the peer
-    private final List<StreamRequest> requests = new ArrayList<>();
+    private final Set<StreamRequest> requests = Sets.newConcurrentHashSet();
     // streaming tasks are created and managed per ColumnFamily ID
-    private final Map<UUID, StreamTransferTask> transfers = new HashMap<>();
+    private final Map<UUID, StreamTransferTask> transfers = new ConcurrentHashMap<>();
     // data receivers, filled after receiving prepare message
-    private final Map<UUID, StreamReceiveTask> receivers = new HashMap<>();
+    private final Map<UUID, StreamReceiveTask> receivers = new ConcurrentHashMap<>();
 private final StreamingMetrics metrics;
 
 public final ConnectionHandler handler;



[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a680f721
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a680f721
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a680f721

Branch: refs/heads/trunk
Commit: a680f721b0bbedec1d367d5b6e8c8e8281d31a2f
Parents: c6efd35 2092da0
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 14 13:27:34 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 14 13:27:34 2014 -0500

--
 build.xml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a680f721/build.xml
--
diff --cc build.xml
index 044e3a2,6adb042..c6dd55e
--- a/build.xml
+++ b/build.xml
@@@ -371,18 -366,19 +371,20 @@@
  
          <dependency groupId="junit" artifactId="junit" version="4.6" />
          <dependency groupId="commons-logging" artifactId="commons-logging" version="1.1.1"/>
 -        <dependency groupId="org.apache.rat" artifactId="apache-rat" version="0.6">
 +        <dependency groupId="org.apache.rat" artifactId="apache-rat" version="0.10">
            <exclusion groupId="commons-lang" artifactId="commons-lang"/>
          </dependency>
 -        <dependency groupId="org.apache.hadoop" artifactId="hadoop-core" version="1.0.3"/>
 +        <dependency groupId="org.apache.hadoop" artifactId="hadoop-core" version="1.0.3">
 +          <exclusion groupId="org.mortbay.jetty" artifactId="servlet-api"/>
 +        </dependency>
          <dependency groupId="org.apache.hadoop" artifactId="hadoop-minicluster" version="1.0.3"/>
 -        <dependency groupId="org.apache.pig" artifactId="pig" version="0.10.0"/>
 -        <dependency groupId="net.java.dev.jna" artifactId="jna" version="3.2.7"/>
 +        <dependency groupId="org.apache.pig" artifactId="pig" version="0.11.1"/>
 +        <dependency groupId="net.java.dev.jna" artifactId="jna" version="4.0.0"/>

-         <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura" version="${cobertura.version}"/>
+         <dependency groupId="net.sourceforge.cobertura" artifactId="cobertura" version="${cobertura.version}">
+           <exclusion groupId="xerces" artifactId="xercesImpl"/>
+         </dependency>

 -        <dependency groupId="log4j" artifactId="log4j" version="1.2.16" />
          <dependency groupId="org.apache.cassandra" artifactId="cassandra-all" version="${version}" />
          <dependency groupId="org.apache.cassandra" artifactId="cassandra-thrift" version="${version}" />
          <dependency groupId="com.yammer.metrics" artifactId="metrics-core" version="2.2.0" />



[jira] [Updated] (CASSANDRA-6877) pig tests broken

2014-05-15 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6877:


Attachment: (was: 0002-Fix-failed-pig-test.patch)

 pig tests broken
 

 Key: CASSANDRA-6877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6877
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Sam Tunnicliffe
 Fix For: 2.0.9, 2.1 rc1

 Attachments: 0001-Exclude-cobertura-xerces-dependency.patch, 
 0002-Fix-failed-pig-test.patch


 Not sure what happened here, but I get a smorgasbord of errors running the 
 pig tests now, from xml errors in xerces to NotFoundExceptions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6962) examine shortening path length post-5202

2014-05-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997896#comment-13997896
 ] 

Joshua McKenzie commented on CASSANDRA-6962:


Looks like it was lowered across the board and not on a per-platform basis.  I 
can see a file-path limitation on Linux being a surprise, but it's part of the 
ecosystem people are used to on Windows.

 examine shortening path length post-5202
 

 Key: CASSANDRA-6962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6962
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 2.1 rc1

 Attachments: 6962-2.1.txt


 From CASSANDRA-5202 discussion:
 {quote}
 Did we give up on this?
 Could we clean up the redundancy a little by moving the ID into the directory 
 name? e.g., ks/cf-uuid/version-generation-component.db
 I'm worried about path length, which is limited on Windows.
 Edit: to give a specific example, for KS foo Table bar we now have
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/foo-bar-ka-1-Data.db
 I'm proposing
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/ka-1-Data.db
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-7001) Windows launch feature parity - augment launch process using PowerShell to match capabilities of *nix launching

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7001.
---

Resolution: Fixed

 Windows launch feature parity - augment launch process using PowerShell to 
 match capabilities of *nix launching
 ---

 Key: CASSANDRA-7001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7001
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows, qa-resolved
 Fix For: 2.1 rc1

 Attachments: 7001_v1.txt, 7001_v2.txt, 7001_v3.txt, 7001_v4.txt


 The current .bat-based launching has neither the logic nor robustness of a 
 bash or PowerShell-based solution.  In pursuit of making Windows a 1st-class 
 citizen for C*, we need to augment the launch-process using something like 
 PowerShell to get as close to feature-parity as possible with Linux.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-05-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997968#comment-13997968
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


The problem is that key cache entries stick around after the keyspace is 
dropped.  After it's recreated and read, there are key cache hits that return 
old positions.  I'm not sure why it only seems to be a problem for the 
secondary index tables; my guess is that the key-cache preheating that happens 
after compaction is replacing the old entries in the key cache for the data 
tables.

CASSANDRA-5202 is the correct permanent solution for this, but that's for 2.1.  
For 2.0, perhaps we should do something similar to CASSANDRA-6351 and go 
through the key cache to invalidate all entries for the CF when it's dropped.
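A rough sketch of that kind of invalidation pass (the cache and key types below are stand-ins, not Cassandra's actual key cache API):

{code}
// Sketch only: walk the key cache and drop every entry belonging to a dropped CF.
import java.util.Iterator;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

final class KeyCacheInvalidator
{
    // equals/hashCode omitted for brevity; a real cache key would define both
    static final class CacheKey
    {
        final String keyspace;
        final String columnFamily;
        final byte[] partitionKey;

        CacheKey(String keyspace, String columnFamily, byte[] partitionKey)
        {
            this.keyspace = keyspace;
            this.columnFamily = columnFamily;
            this.partitionKey = partitionKey;
        }
    }

    // stand-in for the real key cache: cache key -> position of the row in the sstable
    static final Map<CacheKey, Long> KEY_CACHE = new ConcurrentHashMap<>();

    // called when a keyspace/column family is dropped
    static void invalidate(String keyspace, String columnFamily)
    {
        Iterator<CacheKey> it = KEY_CACHE.keySet().iterator();
        while (it.hasNext())
        {
            CacheKey k = it.next();
            if (Objects.equals(k.keyspace, keyspace) && Objects.equals(k.columnFamily, columnFamily))
                it.remove(); // stale positions must not survive a drop/recreate cycle
        }
    }
}
{code}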

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Tyler Hobbs
 Fix For: 2.0.8

 Attachments: 6981_test.py


 I am developing a system on a single machine using VMware Player with 1 GB 
 RAM and 1 GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE, and there are just below 10 records, I get this error in the 
 system log:
 {noformat}
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
 at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
 at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
 at 

[jira] [Updated] (CASSANDRA-7232) Enable live replay of commit logs

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7232:
--

 Reviewer: Jonathan Ellis
  Component/s: Tools
Fix Version/s: 2.0.9
 Assignee: Lyuben Todorov

 Enable live replay of commit logs
 -

 Key: CASSANDRA-7232
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7232
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.9


 Replaying commit logs takes a restart, but restoring sstables can be an online 
 operation with refresh. In order to restore to a point in time without a 
 restart, the node needs to live-replay the commit logs, driven from JMX and a 
 nodetool command:
 nodetool refreshcommitlogs <keyspace> <table>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-15 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15e0814c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15e0814c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15e0814c

Branch: refs/heads/trunk
Commit: 15e0814c5b9aad8a5df01f01eddd32055ab0941a
Parents: 0722837 2a77695
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 14 20:45:57 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 14 20:45:57 2014 -0400

--
 CHANGES.txt |  1 +
 .../SimpleAbstractColumnIterator.java   | 29 ---
 .../apache/cassandra/db/context/IContext.java   | 75 
 .../cassandra/gms/IFailureNotification.java | 27 --
 .../service/PendingRangeCalculatorService.java  |  2 +-
 .../PendingRangeCalculatorServiceMBean.java | 23 -
 .../apache/cassandra/thrift/RequestType.java| 24 --
 .../cassandra/utils/AtomicLongArrayUpdater.java | 91 
 .../apache/cassandra/utils/DefaultDouble.java   | 46 --
 .../apache/cassandra/utils/LatencyTracker.java  | 82 --
 .../cassandra/utils/SkipNullRepresenter.java| 40 -
 11 files changed, 2 insertions(+), 438 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/15e0814c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/15e0814c/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--



[jira] [Comment Edited] (CASSANDRA-7218) cassandra-all:2.1.0-beta1 maven dependency failing

2014-05-15 Thread Prasanth Gullapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997260#comment-13997260
 ] 

Prasanth Gullapalli edited comment on CASSANDRA-7218 at 5/14/14 4:26 AM:
-

I tried it with 2.1beta2 as well, but the same dependency issue exists there 
too. In fact it throws even more dependency errors. Here is the 
stack trace:

* What went wrong:  
 
Could not resolve all dependencies for configuration ':mapro-commons:compile'.  
 
 Could not find ch.qos.logback:logback-classic:1.1.12. 
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
com.adaequare.mapro.model:mapro-model:2.0-SNAPSHOT
 Could not find io.netty:netty:4.0.17.Final.   
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
com.datastax.cassandra:cassandra-driver-core:2.0.1
 Could not find ch.qos.logback:logback-core:1.1.12.
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find ch.qos.logback:logback-classic:1.1.12. 
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find com.github.stephenc:jamm:0.2.6.
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find io.netty:netty:4.0.17.Final.   
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2

 


was (Author: prasanthnath):
[~dbros...@apache.org]
I tried it with 2.1beta2 as well. But the same dependency issue exists even 
there. In fact it is throwing more number of dependency issues. Here is the 
stack trace:

* What went wrong:  
 
Could not resolve all dependencies for configuration ':mapro-commons:compile'.  
 
 Could not find ch.qos.logback:logback-classic:1.1.12. 
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
com.adaequare.mapro.model:mapro-model:2.0-SNAPSHOT
 Could not find io.netty:netty:4.0.17.Final.   
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
com.datastax.cassandra:cassandra-driver-core:2.0.1
 Could not find ch.qos.logback:logback-core:1.1.12.
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find ch.qos.logback:logback-classic:1.1.12. 
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find com.github.stephenc:jamm:0.2.6.
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2
 Could not find io.netty:netty:4.0.17.Final.   
  
  Required by:  
 
  MAPro_Transactor:mapro-commons:1.0  
org.apache.cassandra:cassandra-all:2.1.0-beta2

 

 cassandra-all:2.1.0-beta1 maven dependency failing
 --

 Key: CASSANDRA-7218
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7218
 Project: Cassandra
  Issue Type: Bug
  Components: API, 

[jira] [Created] (CASSANDRA-7233) Dropping a keyspace fails to purge the Key Cache resulting in SSTable Corruption during searches

2014-05-15 Thread Vinoo Ganesh (JIRA)
Vinoo Ganesh created CASSANDRA-7233:
---

 Summary: Dropping a keyspace fails to purge the Key Cache 
resulting in SSTable Corruption during searches
 Key: CASSANDRA-7233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7233
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vinoo Ganesh


Dropping a keyspace fails to purge the Key Cache resulting in SSTable 
corruption during searches.

One of our workflows involves dropping a full keyspace (with column families) 
and re-creating it all without restarting Cassandra. When data is dropped from 
Cassandra, it doesn't look like the key cache is invalidated, which causes 
searches to print CorruptSSTable messages.

At an initial glance, it looks like the issue we're seeing has to do with the 
fact that the Descriptor passed into KeyCacheKey's constructor checks 
directory, generation, ksname, cfname, and temp. In our workflow, when the new 
keyspace is created, generation restarts at 1 which creates issues. 

We're not sure if it makes a lot of sense to try and preserve the generation 
during the deletion/recreation process (and we're not sure where Cassandra 
would even save this) but that would be a fix for our workflow. 

Additionally, making the actual Column Family UUIDs unique would be great as 
well. It looks like in RowKeyCache, the UUIDs are just made up of the keyspace 
name and column family. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7233) Dropping a keyspace fails to purge the Key Cache resulting in SSTable Corruption during searches

2014-05-15 Thread Vinoo Ganesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinoo Ganesh updated CASSANDRA-7233:


Fix Version/s: 1.2.17

 Dropping a keyspace fails to purge the Key Cache resulting in SSTable 
 Corruption during searches
 

 Key: CASSANDRA-7233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7233
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vinoo Ganesh
 Fix For: 1.2.17


 Dropping a keyspace fails to purge the Key Cache resulting in SSTable 
 corruption during searches.
 One of our workflows involves dropping a full keyspace (with column families) 
 and re-creating it all without restarting Cassandra. When data is dropped 
 from Cassandra, it doesn't look like the key cache is invalidated, which 
 causes searches to print CorruptSSTable messages.
 At an initial glance, it looks like the issue we're seeing has to do with the 
 fact that the Descriptor passed into KeyCacheKey's constructor checks 
 directory, generation, ksname, cfname, and temp. In our workflow, when the 
 new keyspace is created, generation restarts at 1 which creates issues. 
 We're not sure if it makes a lot of sense to try and preserve the generation 
 during the deletion/recreation process (and we're not sure where Cassandra 
 would even save this) but that would be a fix for our workflow. 
 Additionally, making the actual Column Family UUIDs unique would be great as 
 well. It looks like in RowKeyCache, the UUIDs are just made up of the 
 keyspace name and column family. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7224) GossipingPropertyFileSnitch fails to read cassandra-rackdc.properties

2014-05-15 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-7224:
-

 Summary: GossipingPropertyFileSnitch fails to read 
cassandra-rackdc.properties
 Key: CASSANDRA-7224
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7224
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
Priority: Blocker
 Fix For: 2.1 rc1


A stock cassandra.yaml with {{endpoint_snitch: GossipingPropertyFileSnitch}} 
results in the following (DEBUG logging only gave one additional, unrelated line):

{noformat}
mshuler@hana:/tmp/apache-cassandra-2.1.0-beta2$ ./bin/cassandra -f
INFO  21:24:48 Hostname: hana.12.am
INFO  21:24:48 Loading settings from 
file:/tmp/apache-cassandra-2.1.0-beta2/conf/cassandra.yaml
INFO  21:24:48 Node configuration:[authenticator=AllowAllAuthenticator; 
authorizer=AllowAllAuthorizer; auto_snapshot=true; 
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; 
cas_contention_timeout_in_ms=1000; client_encryption_options=REDACTED; 
cluster_name=Test Cluster; column_index_size_in_kb=64; 
commitlog_directory=/var/lib/cassandra/commitlog; 
commitlog_segment_size_in_mb=32; commitlog_sync=periodic; 
commitlog_sync_period_in_ms=1; compaction_throughput_mb_per_sec=16; 
concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; 
counter_cache_save_period=7200; counter_cache_size_in_mb=null; 
counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; 
data_file_directories=[/var/lib/cassandra/data]; disk_failure_policy=stop; 
dynamic_snitch_badness_threshold=0.1; 
dynamic_snitch_reset_interval_in_ms=60; 
dynamic_snitch_update_interval_in_ms=100; 
endpoint_snitch=GossipingPropertyFileSnitch; hinted_handoff_enabled=true; 
hinted_handoff_throttle_in_kb=1024; in_memory_compaction_limit_in_mb=64; 
incremental_backups=false; index_summary_capacity_in_mb=null; 
index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; 
internode_compression=all; key_cache_save_period=14400; 
key_cache_size_in_mb=null; listen_address=localhost; 
max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=0.4; 
native_transport_port=9042; num_tokens=256; 
partitioner=org.apache.cassandra.dht.Murmur3Partitioner; 
permissions_validity_in_ms=2000; range_request_timeout_in_ms=1; 
read_request_timeout_in_ms=5000; 
request_scheduler=org.apache.cassandra.scheduler.NoScheduler; 
request_timeout_in_ms=1; row_cache_save_period=0; row_cache_size_in_mb=0; 
rpc_address=localhost; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; 
saved_caches_directory=/var/lib/cassandra/saved_caches; 
seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, 
parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=REDACTED; 
snapshot_before_compaction=false; ssl_storage_port=7001; 
sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; 
start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; 
tombstone_failure_threshold=10; tombstone_warn_threshold=1000; 
trickle_fsync=false; trickle_fsync_interval_in_kb=10240; 
truncate_request_timeout_in_ms=6; write_request_timeout_in_ms=2000]
INFO  21:24:48 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is 
mmap
INFO  21:24:48 Global memtable on-heap threshold is enabled at 976MB
INFO  21:24:48 Global memtable off-heap threshold is enabled at 976MB
WARN  21:24:48 Unable to read cassandra-rackdc.properties
ERROR 21:24:48 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Error instantiating 
snitch class 'org.apache.cassandra.locator.GossipingPropertyFileSnitch'.
at 
org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:501) 
~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:576)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:395)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:129)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:109) 
[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) 
[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) 
[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
Caused by: org.apache.cassandra.exceptions.ConfigurationException: DC or rack 
not found in snitch properties, check your configuration in: 
cassandra-rackdc.properties
at 

[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/773b95ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/773b95ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/773b95ef

Branch: refs/heads/cassandra-2.1
Commit: 773b95efb85a381f56cdd537c595f0f9c9b3065f
Parents: d267cf8 fb0a78a
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 21:22:27 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 21:22:27 2014 -0400

--
 CHANGES.txt| 3 ++-
 .../cassandra/service/PendingRangeCalculatorService.java   | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/773b95ef/CHANGES.txt
--
diff --cc CHANGES.txt
index ea3a192,6c8f1fb..4a0548a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,24 +1,17 @@@
 -2.0.9
 - * Warn when 'USING TIMESTAMP' is used on a CAS BATCH (CASSANDRA-7067)
 - * Starting threads in OutboundTcpConnectionPool constructor causes race 
conditions (CASSANDRA-7177)
 - * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183) 
 -
 -2.0.8
 +2.1.0-rc1
 + * Add snapshot manifest describing files included (CASSANDRA-6326)
 + * Parallel streaming for sstableloader (CASSANDRA-3668)
 + * Fix bugs in supercolumns handling (CASSANDRA-7138)
 +  * Fix ClassCastException on composite dense tables (CASSANDRA-7112)
 + * Cleanup and optimize collation and slice iterators (CASSANDRA-7107)
 + * Upgrade NBHM lib (CASSANDRA-7128)
 + * Optimize netty server (CASSANDRA-6861)
 +Merged from 2.0:
   * Correctly delete scheduled range xfers (CASSANDRA-7143)
   * Make batchlog replica selection rack-aware (CASSANDRA-6551)
 - * Allow overriding cassandra-rackdc.properties file (CASSANDRA-7072)
 - * Set JMX RMI port to 7199 (CASSANDRA-7087)
 - * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
 - * Log a warning for large batches (CASSANDRA-6487)
 - * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)
 - * USING TIMESTAMP for batches does not work (CASSANDRA-7053)
 - * Fix performance regression from CASSANDRA-5614 (CASSANDRA-6949)
 - * Merge groupable mutations in TriggerExecutor#execute() (CASSANDRA-7047)
 - * Fix CFMetaData#getColumnDefinitionFromColumnName() (CASSANDRA-7074)
 - * Plug holes in resource release when wiring up StreamSession 
(CASSANDRA-7073)
 - * Re-add parameter columns to tracing session (CASSANDRA-6942)
 - * Fix writetime/ttl functions for static columns (CASSANDRA-7081)
   * Suggest CTRL-C or semicolon after three blank lines in cqlsh 
(CASSANDRA-7142)
-  * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183)  
++ * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183)
++ * reduce garbage creation in calculatePendingRanges (CASSANDRA-7191)
  Merged from 1.2:
   * Add Cloudstack snitch (CASSANDRA-7147)
   * Update system.peers correctly when relocating tokens (CASSANDRA-7126)



[jira] [Created] (CASSANDRA-7203) Flush (and Compact) High Traffic Partitions Separately

2014-05-15 Thread Benedict (JIRA)
Benedict created CASSANDRA-7203:
---

 Summary: Flush (and Compact) High Traffic Partitions Separately
 Key: CASSANDRA-7203
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7203
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict


An idea possibly worth exploring is the use of streaming count-min sketches to 
collect data over the uptime of a server to estimate the velocity of 
different partitions, so that high-volume partitions can be flushed separately, 
on the assumption that they will be much smaller in number, thus reducing write 
amplification by permitting compaction independently of any low-velocity data.
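For illustration only, a minimal (non-streaming) count-min sketch of the kind that could track per-partition write volume; the sizes and hashing below are arbitrary, not part of this proposal:

{code}
// Sketch only: count-min sketch for estimating per-partition write counts.
import java.nio.charset.StandardCharsets;
import java.util.Random;

final class CountMinSketch
{
    private final int depth;      // number of hash rows
    private final int width;      // counters per row
    private final long[][] counts;
    private final long[] seeds;   // one hash seed per row

    CountMinSketch(int depth, int width)
    {
        this.depth = depth;
        this.width = width;
        this.counts = new long[depth][width];
        this.seeds = new long[depth];
        Random r = new Random(0);
        for (int i = 0; i < depth; i++)
            seeds[i] = r.nextLong();
    }

    void add(byte[] partitionKey)
    {
        for (int i = 0; i < depth; i++)
            counts[i][bucket(partitionKey, seeds[i])]++;
    }

    // estimated number of adds for this key; never under-estimates
    long estimate(byte[] partitionKey)
    {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < depth; i++)
            min = Math.min(min, counts[i][bucket(partitionKey, seeds[i])]);
        return min;
    }

    private int bucket(byte[] key, long seed)
    {
        long h = seed;
        for (byte b : key)
            h = h * 31 + b;                       // simple polynomial hash
        return (int) ((h & Long.MAX_VALUE) % width);
    }

    public static void main(String[] args)
    {
        CountMinSketch cms = new CountMinSketch(4, 1024);
        for (int i = 0; i < 1000; i++)
            cms.add("hot-partition".getBytes(StandardCharsets.UTF_8));
        cms.add("cold-partition".getBytes(StandardCharsets.UTF_8));
        // prints 1000 (or slightly more on collisions; CMS never under-counts)
        System.out.println(cms.estimate("hot-partition".getBytes(StandardCharsets.UTF_8)));
    }
}
{code}

A streaming variant would additionally age out old counts so the sketch reflects recent velocity rather than lifetime volume.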

Whilst the idea is reasonably straightforward, it seems that the biggest 
problem here will be defining any success metric. Obviously any workload 
following an exponential/zipf/extreme distribution is likely to benefit from 
such an approach, but whether or not that would translate in real terms is 
another matter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7201) Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit tests on 2.1

2014-05-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7201:
--

Labels: qa-resolved  (was: )

 Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit 
 tests on 2.1
 

 Key: CASSANDRA-7201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7201
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 rc1

 Attachments: 7201.txt


 http://cassci.datastax.com/job/cassandra-2.1_utest/252/testReport/
 {noformat}
 REGRESSION:  
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF
 Error Message:
 null
 Stack Trace:
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF(ColumnFamilyStoreTest.java:992)
 REGRESSION:  org.apache.cassandra.db.NativeCellTest.testCells
 Error Message:
 null
 Stack Trace:
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Buffer.java:236)
 at 
 org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:659)
 at 
 org.apache.cassandra.db.NativeCounterCell.updateDigest(NativeCounterCell.java:139)
 at org.apache.cassandra.db.NativeCellTest.test(NativeCellTest.java:148)
 at 
 org.apache.cassandra.db.NativeCellTest.testCells(NativeCellTest.java:132)
 REGRESSION:  
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking
 Error Message:
 null
 Stack Trace:
 junit.framework.AssertionFailedError: 
 at 
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking(SSTableMetadataTest.java:306)
 {noformat}
 All 3 tests bisect to:
 {noformat}
 commit 1ac72f637cdfc9876d2d121302061e46ac104bf8
 Author: Jonathan Ellis jbel...@apache.org
 Date:   Thu May 8 16:44:35 2014 -0500
 prefer MemoryUtil.getByteBuffer to JNA Native.getDirectByteBuffer; 
 specify native endian on the former
 patch by bes; reviewed by jbellis for CASSANDRA-6575
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
conf/cassandra-env.sh


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c58dd30
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c58dd30
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c58dd30

Branch: refs/heads/cassandra-2.1
Commit: 8c58dd30e1a2ab3ee9c6cb010d9b343d9afb3bc8
Parents: 60dbe8b ea0c399
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 7 16:24:43 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 7 16:24:43 2014 -0500

--
 conf/cassandra-env.sh | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c58dd30/conf/cassandra-env.sh
--
diff --cc conf/cassandra-env.sh
index 200a3b9,fc4fa3d..820f160
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@@ -162,11 -168,7 +168,7 @@@ JMX_PORT=7199
  JVM_OPTS="$JVM_OPTS -ea"
  
  # add the jamm javaagent
- if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
-       || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
- then
-     JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.6.jar"
- fi
 -JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
++JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.6.jar"
  
  # some JVMs will fill up their heap when accessed via JMX, see CASSANDRA-6541
  JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled"



[1/6] git commit: Improve error message when trying >= 2.0 on java < 1.7

2014-05-15 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ea7d0c827 - ea0c39991
  refs/heads/cassandra-2.1 60dbe8b70 - 8c58dd30e
  refs/heads/trunk 56c76bebe - 66429d618


Improve error message when trying >= 2.0 on java < 1.7

Patch by brandonwilliams reviewed by thobbs for CASSANDRA-7137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea0c3999
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea0c3999
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea0c3999

Branch: refs/heads/cassandra-2.0
Commit: ea0c399912841821e0f604512808b0a3ce92ace9
Parents: ea7d0c8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 7 16:23:11 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 7 16:23:11 2014 -0500

--
 conf/cassandra-env.sh | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea0c3999/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 3b15517..fc4fa3d 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -94,6 +94,12 @@ jvmver=`echo "$java_ver_output" | awk -F'"' 'NR==1 {print $2}'`
 JVM_VERSION=${jvmver%_*}
 JVM_PATCH_VERSION=${jvmver#*_}
 
+if [ "$JVM_VERSION" \< "1.7" ] ; then
+    echo "Cassandra 2.0 and later require Java 7 or later."
+    exit 1;
+fi
+
+
 jvm=`echo "$java_ver_output" | awk 'NR==2 {print $1}'`
 case "$jvm" in
 OpenJDK)
@@ -162,11 +168,7 @@ JMX_PORT=7199
 JVM_OPTS="$JVM_OPTS -ea"
 
 # add the jamm javaagent
-if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
-  || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
-then
-    JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
-fi
+JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
 
 # some JVMs will fill up their heap when accessed via JMX, see CASSANDRA-6541
 JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled"
@@ -210,8 +212,9 @@ JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
 JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
 JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
 JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
+
 # note: bash evals '1.7.x' as > '1.7' so this is really a >= 1.7 jvm check
-if [ "$JVM_VERSION" \> "1.7" ] && [ "$JVM_ARCH" = "64-Bit" ] ; then
+if [ "$JVM_ARCH" = "64-Bit" ] ; then
     JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
 fi
 



[jira] [Commented] (CASSANDRA-4718) More-efficient ExecutorService for improved throughput

2014-05-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997787#comment-13997787
 ] 

Jonathan Ellis commented on CASSANDRA-4718:
---

bq. Exactly, hot data more or less fits so the problem is that once you get 
into page reclaim and disk reads (even SSDs), improvements made here are 
no longer doing anything helpful

I don't follow you at all.  If 90% of reads are already in-cache, this is going 
to help even if 10% are going to disk.

 More-efficient ExecutorService for improved throughput
 --

 Key: CASSANDRA-4718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 2.1.0

 Attachments: 4718-v1.patch, PerThreadQueue.java, aws.svg, 
 aws_read.svg, backpressure-stress.out.txt, baq vs trunk.png, 
 belliotsmith_branches-stress.out.txt, jason_read.svg, jason_read_latency.svg, 
 jason_write.svg, op costs of various queues.ods, stress op rate with various 
 queues.ods, v1-stress.out


 Currently all our execution stages dequeue tasks one at a time.  This can 
 result in contention between producers and consumers (although we do our best 
 to minimize this by using LinkedBlockingQueue).
 One approach to mitigating this would be to make consumer threads do more 
 work in bulk instead of just one task per dequeue.  (Producer threads tend 
 to be single-task oriented by nature, so I don't see an equivalent 
 opportunity there.)
 BlockingQueue has a drainTo(collection, int) method that would be perfect for 
 this.  However, no ExecutorService in the jdk supports using drainTo, nor 
 could I google one.
 What I would like to do here is create just such a beast and wire it into (at 
 least) the write and read stages.  (Other possible candidates for such an 
 optimization, such as the CommitLog and OutboundTCPConnection, are not 
 ExecutorService-based and will need to be one-offs.)
 AbstractExecutorService may be useful.  The implementations of 
 ICommitLogExecutorService may also be useful. (Despite the name these are not 
 actual ExecutorServices, although they share the most important properties of 
 one.)
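 As a rough illustration of the drainTo-based consumer loop being proposed (standalone, not wired into Cassandra's stages; names are illustrative):

{code}
// Sketch only: a worker that drains tasks in bulk instead of one take() per task.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class BatchDrainingWorker implements Runnable
{
    private static final int MAX_BATCH = 64;
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void submit(Runnable task)
    {
        queue.add(task);
    }

    public void run()
    {
        List<Runnable> batch = new ArrayList<>(MAX_BATCH);
        while (!Thread.currentThread().isInterrupted())
        {
            try
            {
                // block for the first task, then grab whatever else is already queued
                batch.add(queue.take());
                queue.drainTo(batch, MAX_BATCH - 1);
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                return;
            }
            for (Runnable task : batch)
                task.run();
            batch.clear();
        }
    }
}
{code}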



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-15 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4b028795
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4b028795
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4b028795

Branch: refs/heads/trunk
Commit: 4b0287957bc77a2cbb3aa8eb06481ed32ac1c775
Parents: d24513e c6bed82
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue May 13 17:52:35 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue May 13 17:52:35 2014 +0300

--
 CHANGES.txt|  1 +
 .../apache/cassandra/triggers/InvertedIndex.java   | 17 -
 2 files changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b028795/CHANGES.txt
--



[jira] [Created] (CASSANDRA-7192) QueryTrace for a paginated query exists only for the first element of the list returned by getAllExecutionInfo()

2014-05-15 Thread Roger Hernandez (JIRA)
Roger Hernandez created CASSANDRA-7192:
--

 Summary: QueryTrace for a paginated query exists only for the 
first element of the list returned by getAllExecutionInfo()
 Key: CASSANDRA-7192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: A Cassandra 2.0.6 cluster of 16 nodes running on Ubuntu 
12.04.2 LTS, using the Java Driver in the client.
Reporter: Roger Hernandez
Priority: Minor


Within the Java Driver, with tracing enabled, I execute a large query that 
benefits from automatic pagination (with fetchSize=10).

I make sure to go through all of the ResultSet, and by the end of the query I 
call getAllExecutionInfo() on the ResultSet. This returns an ArrayList of 9 
ExecutionInfo elements (the number of pages it requested from Cassandra).

When accessing the QueryTrace in the ExecutionInfo from the ArrayList at index 
0, I can retrieve the information without issues. However, the first is the 
only one that has QueryTrace information, every other ExecutionInfo of the 
array returns a NULL QueryTrace object.
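For reference, a sketch of the access pattern described above against a 2.0-era DataStax Java driver (the keyspace/table name is a placeholder):

{code}
// Sketch only: tracing + automatic paging, then inspecting per-page ExecutionInfo.
import com.datastax.driver.core.*;
import java.util.List;

final class PagedTraceExample
{
    static void traceAllPages(Session session)
    {
        Statement stmt = new SimpleStatement("SELECT * FROM ks.large_table")
                             .setFetchSize(10)  // force automatic pagination
                             .enableTracing();
        ResultSet rs = session.execute(stmt);
        for (Row row : rs)
        {
            // iterate the whole result set so every page is fetched
        }
        List<ExecutionInfo> infos = rs.getAllExecutionInfo(); // one entry per page
        for (ExecutionInfo info : infos)
            System.out.println(info.getQueryTrace()); // per the report, only the first is non-null
    }
}
{code}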



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Starting threads in the OutboundTcpConnectionPool constructor causes race conditions

2014-05-15 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 2e61cd5e0 - 05bacaeab


Starting threads in the OutboundTcpConnectionPool constructor causes race 
conditions

patch by sbtourist; reviewed by jasobrown for CASSANDRA-7177


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/05bacaea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/05bacaea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/05bacaea

Branch: refs/heads/cassandra-2.0
Commit: 05bacaeabc96a6d85fbf908dce8474acffcab730
Parents: 2e61cd5
Author: Jason Brown jasobr...@apple.com
Authored: Wed May 7 11:58:56 2014 -0700
Committer: Jason Brown jasobr...@apple.com
Committed: Wed May 7 11:58:56 2014 -0700

--
 CHANGES.txt |  2 +-
 .../apache/cassandra/net/MessagingService.java  |  6 +--
 .../net/OutboundTcpConnectionPool.java  | 41 +---
 3 files changed, 40 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/05bacaea/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fc192ef..65ee6cf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,6 @@
 2.0.9
  * Warn when 'USING TIMESTAMP' is used on a CAS BATCH (CASSANDRA-7067)
-
+ * Starting threads in OutboundTcpConnectionPool constructor causes race 
conditions (CASSANDRA-7177)
 
 2.0.8
  * Correctly delete scheduled range xfers (CASSANDRA-7143)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/05bacaea/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index cccf698..dbd76d6 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -498,11 +498,11 @@ public final class MessagingService implements 
MessagingServiceMBean
 cp = new OutboundTcpConnectionPool(to);
 OutboundTcpConnectionPool existingPool = 
connectionManagers.putIfAbsent(to, cp);
 if (existingPool != null)
-{
-cp.close();
 cp = existingPool;
-}
+else
+cp.start();
 }
+cp.waitForStarted();
 return cp;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/05bacaea/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
--
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java 
b/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
index 81168c6..c45fc53 100644
--- a/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
+++ b/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java
@@ -22,6 +22,8 @@ import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.Socket;
 import java.nio.channels.SocketChannel;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.cassandra.concurrent.Stage;
 import org.apache.cassandra.config.Config;
@@ -36,6 +38,7 @@ public class OutboundTcpConnectionPool
 {
 // pointer for the real Address.
 private final InetAddress id;
+private final CountDownLatch started;
 public final OutboundTcpConnection cmdCon;
 public final OutboundTcpConnection ackCon;
 // pointer to the reseted Address.
@@ -46,13 +49,10 @@ public class OutboundTcpConnectionPool
 {
 id = remoteEp;
 resetedEndpoint = SystemKeyspace.getPreferredIP(remoteEp);
+started = new CountDownLatch(1);
 
 cmdCon = new OutboundTcpConnection(this);
-cmdCon.start();
 ackCon = new OutboundTcpConnection(this);
-ackCon.start();
-
-metrics = new ConnectionMetrics(id, this);
 }
 
 /**
@@ -167,14 +167,45 @@ public class OutboundTcpConnectionPool
 }
 return true;
 }
+
+public void start()
+{
+cmdCon.start();
+ackCon.start();
+
+metrics = new ConnectionMetrics(id, this);
+
+started.countDown();
+}
+
+public void waitForStarted()
+{
+if (started.getCount() == 0)
+return;
+
+boolean error = false;
+try
+{
+if (!started.await(1, TimeUnit.MINUTES))
+error = true;
+}
+catch (InterruptedException e)
+{
+Thread.currentThread().interrupt();
+error = true;
+}
+if (error)
+throw new 

[1/3] git commit: fix c* launch issues on Russian os's due to output of linux 'free' cmd

2014-05-15 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 67ed3375b - 70360cf25


fix c* launch issues on Russian os's due to output of linux 'free' cmd

patch by dbrosius reviewed by bwilliams for cassandra-6162


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16fd1a4a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16fd1a4a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16fd1a4a

Branch: refs/heads/trunk
Commit: 16fd1a4a89958595ca2ae44fdac2eb7aa1ad6be2
Parents: fb0a78a
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Thu May 8 00:32:29 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Thu May 8 00:32:29 2014 -0400

--
 CHANGES.txt   | 3 ++-
 conf/cassandra-env.sh | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16fd1a4a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c8f1fb..05cc193 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,8 @@
 2.0.9
  * Warn when 'USING TIMESTAMP' is used on a CAS BATCH (CASSANDRA-7067)
  * Starting threads in OutboundTcpConnectionPool constructor causes race 
conditions (CASSANDRA-7177)
- * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183) 
+ * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183)
+ * fix c* launch issues on Russian os's due to output of linux 'free' cmd 
(CASSANDRA-6162)
 
 2.0.8
  * Correctly delete scheduled range xfers (CASSANDRA-7143)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/16fd1a4a/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index fc4fa3d..7604918 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -18,7 +18,7 @@ calculate_heap_sizes()
 {
 case `uname` in
 Linux)
-system_memory_in_mb=`free -m | awk '/Mem:/ {print $2}'`
+system_memory_in_mb=`free -m | awk '/:/ {print $2;exit}'`
 system_cpu_cores=`egrep -c 'processor([[:space:]]+):.*' 
/proc/cpuinfo`
 ;;
 FreeBSD)



[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-05-15 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997265#comment-13997265
 ] 

Ben Chan commented on CASSANDRA-5483:
-

I seem to be having problems with JIRA email notifications. May 12 arrived 
fine, May 7 never arrived, May 8 arrived on May 13, but was dated May 10. 
Moving on...

{quote}
Just skip system keyspace entirely and save the logspam (use Keyspace.nonSystem 
instead of Keyspace.all)
{quote}

This patch has grown enough parts to be a little unwieldy. Just to be clear, 
the output from [https://gist.github.com/lyubent/bfc133fe92ef1afb9dd4] is the 
verbose output from {{nodetool}}, which means there is some extra output aside 
from the traces themselves. (TODO to self, I need to make the nodetool verbose 
output optional.). That particular message comes from 
{{src/java/org/apache/cassandra/tools/NodeProbe.java}}, in a part of the code 
untouched by this patch. I can go ahead and nuke that particular message for 
the system keyspace.

{quote}
How does {{Endpoints /127.0.0.2 and /127.0.0.1 are consistent for events}} 
scale up to more replicas? Should we switch to using {{\[..]}} notation instead?
{quote}

{{n * (n - 1)}} differences calculated for {{n}} replicas, so {{n * (n - 1)}} 
"are consistent" messages. I haven't dug deep enough into the code to be 
certain, but on the face of it, it seems like there should be some (possibly 
not-simple) way to reduce this to {{O(n * log\(n))}}. Enough speculation, 
though.

One edge case for the proposed notation would be a consistency partition:

{noformat}
A == B == C
A != D
D == E == F

=

# We need a separate message for each partition.
Endpoints [A, B, C] are consistent for events
Endpoints [D, E, F] are consistent for events
{noformat}

Even with the edge case, it seems messy, but doable. You do lose trace timing 
information on the calculation of individual differences (the consistent ones, 
at least). On the other hand, comparing matching merkle trees should be a 
consistently fast operation, so you're probably not missing out on too much 
information.

{quote}
I'm a little lost in the commands and sessions, e.g. does {{\[2014-05-08 
23:27:45,368] Session completed successfully}} refer to session 
3617e3f0-d6ef-11e3-a493-7d438369d7fc or 36a49390-d6ef-11e3-a493-7d438369d7fc? 
Is there exactly one session per command? If so let's merge the starting 
repair command + new session output, and the completed + finished.
{quote}

Each repair command seems to consist of multiple repair sessions (one per 
range). The sessions go semi-sequentially; there's a semi-random overlap 
between the end of one session and the start of another, like so (using small 
integers instead of UUIDs, and some labels on the left for readability):

{noformat}
[command #1   ] Starting repair command #1
[command #1, session 1] New session 1 will sync range ...
[command #1, session 1] Requesting merkle tree for ...
[command #1, session 1] Received merkle tree for ...
[command #1, session 2] New session 2 will sync range ...
[command #1, session 2] Requesting merkle tree for ...
[command #1, session 1] Endpoints ... consistent.
[command #1, session 1] Session 1 completed successfully
[command #1, session 2] Received merkle tree for ...
[command #1, session 2] Endpoints ... consistent.
[command #1, session 3] New session 3 will sync range ...
[command #1, session 2] Session 2 completed successfully
[command #1, session 3] Requesting merkle tree for ...
[command #1, session 3] Received merkle tree for ...
[command #1, session 3] Endpoints ... consistent.
[command #1, session 3] Session 3 completed successfully
[command #1   ] Repair command #1 finished
{noformat}

Most of the time it's obvious from context, but during that overlap, having the 
repair session UUID helps to disambiguate. I suspect the overlap is even 
greater (and more confusing) when you have heavy streaming.

{quote}
Why do we log Repair command #1 finished with no merkle trees requested for 
db.tweet? Is it because all sstables are already repaired? If so we should log 
that.
{quote}

I've never encountered a trace like that in my testing. I always seem to get 
merkle trees exchanged (see the log below), even if no streaming is needed. I'm 
hoping lyubent can provide enough information for me to be able to recreate 
this situation locally.

{quote}
Does this actually show any streaming? If so I'm missing it.
{quote}

lyubent's sample run didn't need streaming, so no streaming to trace. Here's 
how I usually test streaming (using [^ccm-repair-test] and yukim's method):

{noformat}
./ccm-repair-test -kR 
ccm node1 stop 
ccm node1 clear 
ccm node1 start 
./ccm-repair-test -rt
{noformat}

Note that this sample run uses the codebase as of 
[^5483-v12-02-cassandra-yaml-ttl-doc.patch]; I haven't got around to doing the 
May 12 changes yet.

I should also warn you (if recent 

[jira] [Updated] (CASSANDRA-7206) UDT - allow null / non-existant attributes

2014-05-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7206:


Fix Version/s: 2.1 rc1

 UDT - allow null / non-existant attributes
 --

 Key: CASSANDRA-7206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7206
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
 Fix For: 2.1 rc1


 C* 2.1 CQL User-Defined-Types are really fine and useful.
 But it lacks the possibility to omit attributes or set them to null.
 Would be great to have the possibility to create UDT instances with some 
 attributes missing.
 Also changing the UDT definition (for example: {{alter type add new_attr}}) 
 will break running applications that rely on the previous definition of the 
 UDT.
 For example:
 {code}
 CREATE TYPE foo (
attr_one text,
attr_two int );
 CREATE TABLE bar (
id int,
comp foo );
 {code}
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2});
 {code}
 works
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra'});
 {code}
 does not work
 {code}
 ALTER TYPE foo ADD attr_three timestamp;
 {code}
 {code}
 INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2});
 {code}
 will no longer work (missing attribute)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6907) ignore snapshot repair flag on Windows

2014-05-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6907:
---

Labels: Windows  (was: )

 ignore snapshot repair flag on Windows
 --

 Key: CASSANDRA-6907
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6907
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.0.7, 2.1 beta2

 Attachments: CASSANDRA-6907_v1.patch, CASSANDRA-6907_v2.patch, 
 CASSANDRA-6907_v3.patch


 Per discussion in CASSANDRA-4050, we should ignore the snapshot repair flag 
 on windows, and log a warning while proceeding to do non-snapshot repair.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix IllegalStateException in CqlPagingRecordReader for inputPageRowSize

2014-05-15 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 7484bd419 - 453a07430


Fix IllegalStateException in CqlPagingRecordReader for inputPageRowSize

patch by btheisen reviewed by dbrosius for cassandra-7198


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/453a0743
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/453a0743
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/453a0743

Branch: refs/heads/cassandra-2.0
Commit: 453a07430c3ebce938047f9d5d0339ff90c6bfcc
Parents: 7484bd4
Author: Brent Theisen br...@bantamlabs.com
Authored: Fri May 9 19:45:53 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri May 9 19:45:53 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlPagingRecordReader.java| 12 +++-
 2 files changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/453a0743/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9e6f173..32bd539 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * Fix potential NumberFormatException when deserializing IntegerType 
(CASSANDRA-7088)
  * cqlsh can't tab-complete disabling compaction (CASSANDRA-7185)
  * cqlsh: Accept and execute CQL statement(s) from command-line parameter 
(CASSANDRA-7172)
+ * Fix IllegalStateException in CqlPagingRecordReader (CASSANDRA-7198)
 
 
 2.0.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/453a0743/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index b692280..1492ce0 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -24,6 +24,7 @@ import java.nio.ByteBuffer;
 import java.nio.charset.CharacterCodingException;
 import java.util.*;
 
+import com.google.common.base.Optional;
 import com.google.common.collect.AbstractIterator;
 import com.google.common.collect.Iterables;
 import org.apache.cassandra.hadoop.HadoopCompat;
@@ -115,13 +116,14 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
         columns = CqlConfigHelper.getInputcolumns(conf);
         userDefinedWhereClauses = CqlConfigHelper.getInputWhereClauses(conf);
 
-        try
+        Optional<Integer> pageRowSizeOptional = CqlConfigHelper.getInputPageRowSize(conf);
+        try
         {
-            pageRowSize = CqlConfigHelper.getInputPageRowSize(conf).get();
-        }
-        catch (NumberFormatException e)
+            pageRowSize = pageRowSizeOptional.isPresent() ? pageRowSizeOptional.get() : DEFAULT_CQL_PAGE_LIMIT;
+        }
+        catch (NumberFormatException e)
         {
             pageRowSize = DEFAULT_CQL_PAGE_LIMIT;
         }
 
 partitioner = 
ConfigHelper.getInputPartitioner(HadoopCompat.getConfiguration(context));



[jira] [Updated] (CASSANDRA-7201) Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit tests on 2.1

2014-05-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7201:
--

Description: 
http://cassci.datastax.com/job/cassandra-2.1_utest/252/testReport/
{noformat}
REGRESSION:  
org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF(ColumnFamilyStoreTest.java:992)


REGRESSION:  org.apache.cassandra.db.NativeCellTest.testCells

Error Message:
null

Stack Trace:
java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:236)
at 
org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:659)
at 
org.apache.cassandra.db.NativeCounterCell.updateDigest(NativeCounterCell.java:139)
at org.apache.cassandra.db.NativeCellTest.test(NativeCellTest.java:148)
at org.apache.cassandra.db.NativeCellTest.testCells(NativeCellTest.java:132)


REGRESSION:  
org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
at 
org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking(SSTableMetadataTest.java:306)
{noformat}

All 3 tests bisect to:
{noformat}
commit 1ac72f637cdfc9876d2d121302061e46ac104bf8
Author: Jonathan Ellis jbel...@apache.org
Date:   Thu May 8 16:44:35 2014 -0500

prefer MemoryUtil.getByteBuffer to JNA Native.getDirectByteBuffer; specify 
native endian on the former
patch by bes; reviewed by jbellis for CASSANDRA-6575
{noformat}

  was:
http://cassci.datastax.com/job/cassandra-2.1_utest/252/testReport/org.apache.cassandra.db/ColumnFamilyStoreTest/testSliceByNamesCommandOnUUIDTypeSCF/
{noformat}
Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF(ColumnFamilyStoreTest.java:992)
{noformat}

bisects to:
{noformat}
commit 1ac72f637cdfc9876d2d121302061e46ac104bf8
Author: Jonathan Ellis jbel...@apache.org
Date:   Thu May 8 16:44:35 2014 -0500

prefer MemoryUtil.getByteBuffer to JNA Native.getDirectByteBuffer; specify 
native endian on the former
patch by bes; reviewed by jbellis for CASSANDRA-6575
{noformat}

Summary: Regression: ColumnFamilyStoreTest, NativeCellTest, 
SSTableMetadataTest unit tests on 2.1  (was: Regression: ColumnFamilyStoreTest 
unit test on 2.1)

 Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit 
 tests on 2.1
 

 Key: CASSANDRA-7201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7201
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 rc1

 Attachments: 7201.txt


 http://cassci.datastax.com/job/cassandra-2.1_utest/252/testReport/
 {noformat}
 REGRESSION:  
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF
 Error Message:
 null
 Stack Trace:
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF(ColumnFamilyStoreTest.java:992)
 REGRESSION:  org.apache.cassandra.db.NativeCellTest.testCells
 Error Message:
 null
 Stack Trace:
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Buffer.java:236)
 at 
 org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:659)
 at 
 org.apache.cassandra.db.NativeCounterCell.updateDigest(NativeCounterCell.java:139)
 at org.apache.cassandra.db.NativeCellTest.test(NativeCellTest.java:148)
 at 
 org.apache.cassandra.db.NativeCellTest.testCells(NativeCellTest.java:132)
 REGRESSION:  
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking
 Error Message:
 null
 Stack Trace:
 junit.framework.AssertionFailedError: 
 at 
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking(SSTableMetadataTest.java:306)
 {noformat}
 All 3 tests bisect to:
 {noformat}
 commit 1ac72f637cdfc9876d2d121302061e46ac104bf8
 Author: Jonathan Ellis jbel...@apache.org
 Date:   Thu May 8 16:44:35 2014 -0500
 prefer MemoryUtil.getByteBuffer to JNA Native.getDirectByteBuffer; 
 specify native endian on the former
 patch by bes; reviewed by jbellis for CASSANDRA-6575
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7221) CqlRecordReader does not work with Password authentication

2014-05-15 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13998150#comment-13998150
 ] 

Alex Liu commented on CASSANDRA-7221:
-

+1

 CqlRecordReader does not work with Password authentication
 --

 Key: CASSANDRA-7221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7221
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Jacek Lewandowski
 Attachments: CASSANDRA-7221.txt


 {{CqlRecordReader}} initialises a cluster with:
 {{cluster = CqlConfigHelper.getInputCluster(location, conf);}}
 {{CqlConfigHelper}} gets the class name for the auth provider and then tries to 
 initialise it without any parameters. {{PlainTextAuthProvider}} does not have a 
 no-args constructor, so it fails in this case. There is no other way to 
 initialise CqlRecordReader with password authentication. 
 One solution which could be considered is to modify the method that 
 instantiates the auth provider, so that if it detects PlainTextAuthProvider it 
 retrieves additional parameters to pass to the constructor. Or it could be 
 done in a more generic way.
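
 A rough sketch of the first option (illustrative only, not the attached patch; the 
 configuration property names for the credentials are placeholders, not real 
 ConfigHelper keys):
 {code}
 import com.datastax.driver.core.AuthProvider;
 import com.datastax.driver.core.PlainTextAuthProvider;
 import org.apache.hadoop.conf.Configuration;

 // Sketch of the special-casing idea from the description; not the attached patch.
 public final class AuthProviderExample
 {
     static AuthProvider authProvider(String className, Configuration conf) throws Exception
     {
         Class<?> clazz = Class.forName(className);
         if (PlainTextAuthProvider.class.isAssignableFrom(clazz))
         {
             // Placeholder property names, purely for illustration.
             String user = conf.get("cassandra.input.native.auth.username");
             String pass = conf.get("cassandra.input.native.auth.password");
             return new PlainTextAuthProvider(user, pass);
         }
         // Fall back to the existing behaviour: a no-args constructor.
         return (AuthProvider) clazz.newInstance();
     }
 }
 {code}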



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6397) removenode outputs confusing non-error

2014-05-15 Thread Kirk True (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk True reassigned CASSANDRA-6397:


Assignee: Kirk True

 removenode outputs confusing non-error
 --

 Key: CASSANDRA-6397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6397
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Assignee: Kirk True
Priority: Trivial
  Labels: lhf
 Fix For: 2.0.8


 *{{nodetool removenode force}}* outputs a slightly confusing error message 
 when there is nothing for it to do.
 * Start a cluster, then kill one of the nodes.
 * run *{{nodetool removenode}}* on the node you killed.
 * Simultaneously, in another shell, run *{{nodetool removenode force}}*; see 
 that it outputs a simple message regarding its status.
 * Run *{{nodetool removenode force}}* again after the first removenode 
 command finishes; you'll see this message and traceback:
 {code}
 $ ~/.ccm/test/node1/bin/nodetool -p 7100 removenode force
 RemovalStatus: No token removals in process.
 Exception in thread main java.lang.UnsupportedOperationException: No tokens 
 to force removal on, call 'removetoken' first
   at 
 org.apache.cassandra.service.StorageService.forceRemoveCompletion(StorageService.java:3140)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}
 Two issues I see with this traceback:
 * "No tokens to force removal on" is telling me the same thing that the 
 message before it tells me ("RemovalStatus: No token removals in process."), 
 so the entire traceback is redundant.
 * "call 'removetoken' first" - removetoken has been deprecated according to 
 the message output by removenode, so there is an inconsistency in directions to 
 the user.
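
 A rough sketch of the log-and-return behaviour that would avoid the redundant 
 traceback (illustrative only; the field below is a stand-in for the real 
 StorageService state, not actual code):
 {code}
 import java.util.Collections;
 import java.util.Set;

 // Sketch only: turn the "nothing to remove" case into a logged no-op instead of
 // an UnsupportedOperationException. The field below stands in for real state.
 public class ForceRemoveSketch
 {
     private final Set<String> replicatingNodes = Collections.emptySet();

     public void forceRemoveCompletion()
     {
         if (replicatingNodes.isEmpty())
         {
             System.out.println("RemovalStatus: No token removals in process; nothing to force.");
             return; // consistent with the status line, and no traceback
         }
         // ... perform the forced completion ...
     }
 }
 {code}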



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-05-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6285:


Attachment: enable_reallocate_buffers.txt

Patch to enable buffer reallocation.

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 Especially my own table, which I would like to move to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7238) Nodetool Status performance is much slower with VNodes On

2014-05-15 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7238:


Tester: Ryan McGuire

 Nodetool Status performance is much slower with VNodes On
 -

 Key: CASSANDRA-7238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7238
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: 1000 M1.Large Ubuntu 12.04
Reporter: Russell Alexander Spitzer
Priority: Minor
 Fix For: 2.1 beta2


 Nodetool status on a 1000 Node cluster without vnodes returns in several 
 seconds. With vnodes on (256) there are OOM errors with the default XMX of 
 32. Adjusting the XMX to 128 allows nodetool status to complete but the 
 execution takes roughly 10 minutes.
 Tested
 {code}
 XMX|  Status
 32 |OOM
 64 |OOM: GC Overhead
 128|Finishes in ~10 minutes
 500|Finishes in ~10 minutes
 1000   |Finishes in ~10 minutes
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/4] git commit: reduce garbage creation in calculatePendingRanges

2014-05-15 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk e48a00b6d - 67ed3375b


reduce garbage creation in calculatePendingRanges

patch by dbrosius reviewed by bwilliams for cassandra-7191


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d839350f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d839350f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d839350f

Branch: refs/heads/trunk
Commit: d839350f42405ccd85ff478bd13bad9920522dee
Parents: 21b3a67
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 21:17:48 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 21:17:48 2014 -0400

--
 CHANGES.txt| 1 +
 .../cassandra/service/PendingRangeCalculatorService.java   | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d839350f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8533e64..312cf06 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -22,6 +22,7 @@
  * fix time conversion to milliseconds in SimpleCondition.await 
(CASSANDRA-7149)
  * remove duplicate query for local tokens (CASSANDRA-7182)
  * raise streaming phi convict threshold level (CASSANDRA-7063)
+ * reduce garbage creation in calculatePendingRanges (CASSANDRA-7191)
 
 1.2.16
  * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d839350f/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git 
a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java 
b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index b408c75..6f77ace 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -43,7 +43,6 @@ import java.util.Set;
 import java.util.Collection;
 import java.util.concurrent.*;
 
-
 public class PendingRangeCalculatorService extends 
PendingRangeCalculatorServiceMBean
 {
 public static final PendingRangeCalculatorService instance = new 
PendingRangeCalculatorService();
@@ -157,9 +156,10 @@ public class PendingRangeCalculatorService extends 
PendingRangeCalculatorService
 
 // For each of the bootstrapping nodes, simply add and remove them one 
by one to
 // allLeftMetadata and check in between what their ranges would be.
-        for (InetAddress endpoint : bootstrapTokens.inverse().keySet())
+        Multimap<InetAddress, Token> bootstrapAddresses = bootstrapTokens.inverse();
+        for (InetAddress endpoint : bootstrapAddresses.keySet())
         {
-            Collection<Token> tokens = bootstrapTokens.inverse().get(endpoint);
+            Collection<Token> tokens = bootstrapAddresses.get(endpoint);
 
             allLeftMetadata.updateNormalTokens(tokens, endpoint);
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))



[3/6] git commit: Improve error message when trying >= 2.0 on java < 1.7

2014-05-15 Thread brandonwilliams
Improve error message when trying >= 2.0 on java < 1.7

Patch by brandonwilliams reviewed by thobbs for CASSANDRA-7137


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea0c3999
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea0c3999
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea0c3999

Branch: refs/heads/trunk
Commit: ea0c399912841821e0f604512808b0a3ce92ace9
Parents: ea7d0c8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 7 16:23:11 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 7 16:23:11 2014 -0500

--
 conf/cassandra-env.sh | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea0c3999/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 3b15517..fc4fa3d 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -94,6 +94,12 @@ jvmver=`echo "$java_ver_output" | awk -F'"' 'NR==1 {print $2}'`
 JVM_VERSION=${jvmver%_*}
 JVM_PATCH_VERSION=${jvmver#*_}
 
+if [ "$JVM_VERSION" \< "1.7" ] ; then
+    echo "Cassandra 2.0 and later require Java 7 or later."
+    exit 1;
+fi
+
+
 jvm=`echo "$java_ver_output" | awk 'NR==2 {print $1}'`
 case "$jvm" in
     OpenJDK)
@@ -162,11 +168,7 @@ JMX_PORT="7199"
 JVM_OPTS="$JVM_OPTS -ea"
 
 # add the jamm javaagent
-if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
-  || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
-then
-    JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
-fi
+JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
 
 # some JVMs will fill up their heap when accessed via JMX, see CASSANDRA-6541
 JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled"
@@ -210,8 +212,9 @@ JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
 JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
 JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
 JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
+
 # note: bash evals '1.7.x' as > '1.7' so this is really a >= 1.7 jvm check
-if [ "$JVM_VERSION" \> "1.7" ] && [ "$JVM_ARCH" = "64-Bit" ] ; then
+if [ "$JVM_ARCH" = "64-Bit" ] ; then
     JVM_OPTS="$JVM_OPTS -XX:+UseCondCardMark"
 fi
 



[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-05-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997706#comment-13997706
 ] 

Jonathan Ellis commented on CASSANDRA-5483:
---

bq. Pretty sure we always do this comparison pair wise? So there will always 
only be two ip's in it.

Ah, right.  Let's leave that alone then.

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-full-trunk.txt, 
 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
  5483-v07-08-Fix-brace-style.patch, 
 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 
 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 
 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 
 5483-v08-14-Poll-system_traces.events.patch, 
 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch, 
 5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, 
 5483-v10-17-minor-bugfixes-and-changes.patch, 
 5483-v10-rebased-and-squashed-471f5cc.patch, 5483-v11-01-squashed.patch, 
 5483-v11-squashed-nits.patch, 5483-v12-02-cassandra-yaml-ttl-doc.patch, 
 ccm-repair-test, cqlsh-left-justify-text-columns.patch, 
 prerepair-vs-postbuggedrepair.diff, test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results like query tracing 
 stores traces to system keyspace. With it, you don't have to lookup each log 
 file to see what was the status and how it performed the repair you invoked. 
 Instead, you can query the repair log with session ID to see the state and 
 stats of all nodes involved in that repair session.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6877) pig tests broken

2014-05-15 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6877:
---

Attachment: 0001-Exclude-cobertura-xerces-dependency.patch

This is caused by cobertura 2.0.x pulling in xerces:xercesImpl 2.6.2 which it 
appears is not compatible with Hadoop. With the older version of cobertura, 
which did not have this dependency, the JAXP impl would fall back to the 
com.sun.org.apache.xerces.internal one from the JDK. 

Patch attached to exclude the transitive dependency from cobertura. I've tested 
that this fixes the pig tests (I now see the 1 failure like 
[~brandon.williams]) and that the cobertura report is still generated correctly.


 pig tests broken
 

 Key: CASSANDRA-6877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6877
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 2.0.9, 2.1 rc1

 Attachments: 0001-Exclude-cobertura-xerces-dependency.patch


 Not sure what happened here, but I get a smorgasbord of errors running the 
 pig tests now, from xml errors in xerces to NotFoundExceptions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4718) More-efficient ExecutorService for improved throughput

2014-05-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13996074#comment-13996074
 ] 

Jason Brown commented on CASSANDRA-4718:


[~enigmacurry] Also, is it possible for you to fill up the disks with more 
sstables than available memory? I think we should check how going to disk 
plays into the performance mix, rather than just reading from page cache for 
the entire read test. This should introduce another modality into the way the 
algorithm behaves, one that is probably more realistic to the real world (a mix 
of page cache hits and disk seeks).

[~benedict] This rewrite is quite extensive wrt prior branches. As this code 
is quite complex, with many new additions, I will need a good chunk of time 
tomorrow to review this. 

 More-efficient ExecutorService for improved throughput
 --

 Key: CASSANDRA-4718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
  Labels: performance
 Fix For: 2.1.0

 Attachments: 4718-v1.patch, PerThreadQueue.java, aws.svg, 
 backpressure-stress.out.txt, baq vs trunk.png, 
 belliotsmith_branches-stress.out.txt, jason_read.svg, jason_read_latency.svg, 
 jason_write.svg, op costs of various queues.ods, stress op rate with various 
 queues.ods, v1-stress.out


 Currently all our execution stages dequeue tasks one at a time.  This can 
 result in contention between producers and consumers (although we do our best 
 to minimize this by using LinkedBlockingQueue).
 One approach to mitigating this would be to make consumer threads do more 
 work in bulk instead of just one task per dequeue.  (Producer threads tend 
 to be single-task oriented by nature, so I don't see an equivalent 
 opportunity there.)
 BlockingQueue has a drainTo(collection, int) method that would be perfect for 
 this.  However, no ExecutorService in the jdk supports using drainTo, nor 
 could I google one.
 What I would like to do here is create just such a beast and wire it into (at 
 least) the write and read stages.  (Other possible candidates for such an 
 optimization, such as the CommitLog and OutboundTCPConnection, are not 
 ExecutorService-based and will need to be one-offs.)
 AbstractExecutorService may be useful.  The implementations of 
 ICommitLogExecutorService may also be useful. (Despite the name these are not 
 actual ExecutorServices, although they share the most important properties of 
 one.)
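
 For anyone skimming, a minimal sketch of the drain-based consumer loop the 
 description is pointing at (illustrative only; the queue choice and batch size are 
 arbitrary, and the real branches being benchmarked are far more involved):
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.LinkedBlockingQueue;

 // Sketch: a consumer that dequeues tasks in bulk via drainTo rather than one
 // task per poll, which is the contention-reducing idea described above.
 public class DrainingWorker implements Runnable
 {
     private static final int MAX_BATCH = 32; // arbitrary for the sketch

     private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

     public void submit(Runnable task)
     {
         queue.add(task);
     }

     @Override
     public void run()
     {
         List<Runnable> batch = new ArrayList<>(MAX_BATCH);
         while (!Thread.currentThread().isInterrupted())
         {
             try
             {
                 // Block for the first task, then opportunistically drain more.
                 batch.add(queue.take());
                 queue.drainTo(batch, MAX_BATCH - 1);
             }
             catch (InterruptedException e)
             {
                 return;
             }
             for (Runnable task : batch)
                 task.run();
             batch.clear();
         }
     }
 }
 {code}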



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6108) Create timeid64 type

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6108:
--

Fix Version/s: (was: 2.1 rc1)
   2.1.1

 Create timeid64 type
 

 Key: CASSANDRA-6108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6108
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1.1


 As discussed in CASSANDRA-6106, we could create a 64-bit type with 48 bits of 
 timestamp and 16 bites of unique coordinator id.  This would give us a 
 unique-per-cluster value that could be used as a more compact replacement for 
 many TimeUUID uses.
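
 To make the proposed layout concrete, a toy packing/unpacking of such a value 
 (48-bit millisecond timestamp in the high bits, 16-bit coordinator id in the low 
 bits); purely illustrative, not a proposed implementation:
 {code}
 // Toy encoding of the layout described above. Because the timestamp occupies the
 // high bits, values sort by time, matching the motivation for the type.
 public final class TimeId64Example
 {
     static long pack(long timestampMillis, int coordinatorId)
     {
         return (timestampMillis << 16) | (coordinatorId & 0xFFFFL);
     }

     static long timestamp(long timeId)
     {
         return timeId >>> 16;           // recover the 48-bit timestamp
     }

     static int coordinatorId(long timeId)
     {
         return (int) (timeId & 0xFFFF); // recover the 16-bit coordinator id
     }

     public static void main(String[] args)
     {
         long id = pack(System.currentTimeMillis(), 42);
         System.out.println(timestamp(id) + " / " + coordinatorId(id));
     }
 }
 {code}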



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-15 Thread jbellis
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/091db648
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/091db648
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/091db648

Branch: refs/heads/trunk
Commit: 091db6482b6b3fbadaa8d8a7ff8471f9ae035e53
Parents: b5671f0 8a5365e
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 14 10:01:40 2014 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 14 10:01:40 2014 -0700

--
 CHANGES.txt| 4 
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordReader.java | 2 ++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/091db648/CHANGES.txt
--



[jira] [Assigned] (CASSANDRA-7231) Support more concurrent requests per native transport connection

2014-05-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-7231:
---

Assignee: Sylvain Lebresne

 Support more concurrent requests per native transport connection
 

 Key: CASSANDRA-7231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1.0


 Right now we only support 127 concurrent requests against a given native 
 transport connection. This causes us to waste file handles opening multiple 
 connections, increases driver complexity and dilutes writes across multiple 
 connections so that batching cannot easily be performed.
 I propose raising this limit substantially, to somewhere in the region of 
 16-64K, and that this is a good time to do it since we're already bumping the 
 protocol version.
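
 For context, rough arithmetic behind those numbers (a sketch, not driver or server 
 code): with 127 usable stream ids per connection, N in-flight requests need 
 ceil(N / 127) connections, while an id space in the proposed 16-64K range covers 
 most workloads with a single connection:
 {code}
 // Back-of-the-envelope only.
 public final class StreamIdMath
 {
     static long connectionsNeeded(long inFlightRequests, long idsPerConnection)
     {
         return (inFlightRequests + idsPerConnection - 1) / idsPerConnection; // ceil
     }

     public static void main(String[] args)
     {
         long inFlight = 10000;
         System.out.println(connectionsNeeded(inFlight, 127));   // current limit: 79 connections
         System.out.println(connectionsNeeded(inFlight, 32768)); // ~32K ids: 1 connection
     }
 }
 {code}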



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4718) More-efficient ExecutorService for improved throughput

2014-05-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997999#comment-13997999
 ] 

Benedict commented on CASSANDRA-4718:
-

Thanks [~enigmacurry]!

Those graphs all look pretty good to me. Think it's time to run some of the 
longer tests to see that performance is still good for other workloads. Let's 
drop thrift from the equation now.

I'd suggest something like 

write n=6 -key populate=1..6 
force major compaction
for each thread count/branch:
 read n=1 -key dist=extr(1..6,2)
and warm up with one (any) read test run before the rest, so that they all are 
playing from a roughly level page cache point

This should create a dataset in the region of 110Gb, but around 75% of requests 
will be to ~40Gb of it, which should be in the region of the amount of page 
cache available to the EC2 systems after bloom filters etc. are accounted for

NB: if you want to play with different distributions, cassandra-stress print 
lets you see what a spec would yield

 More-efficient ExecutorService for improved throughput
 --

 Key: CASSANDRA-4718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 2.1.0

 Attachments: 4718-v1.patch, PerThreadQueue.java, aws.svg, 
 aws_read.svg, backpressure-stress.out.txt, baq vs trunk.png, 
 belliotsmith_branches-stress.out.txt, jason_read.svg, jason_read_latency.svg, 
 jason_write.svg, op costs of various queues.ods, stress op rate with various 
 queues.ods, v1-stress.out


 Currently all our execution stages dequeue tasks one at a time.  This can 
 result in contention between producers and consumers (although we do our best 
 to minimize this by using LinkedBlockingQueue).
 One approach to mitigating this would be to make consumer threads do more 
 work in bulk instead of just one task per dequeue.  (Producer threads tend 
 to be single-task oriented by nature, so I don't see an equivalent 
 opportunity there.)
 BlockingQueue has a drainTo(collection, int) method that would be perfect for 
 this.  However, no ExecutorService in the jdk supports using drainTo, nor 
 could I google one.
 What I would like to do here is create just such a beast and wire it into (at 
 least) the write and read stages.  (Other possible candidates for such an 
 optimization, such as the CommitLog and OutboundTCPConnection, are not 
 ExecutorService-based and will need to be one-offs.)
 AbstractExecutorService may be useful.  The implementations of 
 ICommitLogExecutorService may also be useful. (Despite the name these are not 
 actual ExecutorServices, although they share the most important properties of 
 one.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (CASSANDRA-7201) Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit tests on 2.1

2014-05-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reopened CASSANDRA-7201:
---


ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF didn't get fixed 
after commit (the other 2 are passing fine) - sorry, I'm not sure if I missed 
retesting correctly or..?

current 2.1 HEAD - 2a77695
{noformat}
$ ant test -Dtest.name=ColumnFamilyStoreTest
...
test:
 [echo] running unit tests
[mkdir] Created dir: /home/mshuler/git/cassandra/build/test/cassandra
[mkdir] Created dir: /home/mshuler/git/cassandra/build/test/output
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/home/mshuler/git/cassandra/build/lib/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
[junit] Tests run: 35, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
14.12 sec
[junit] 
[junit] - Standard Output ---
[junit] ERROR 01:06:46 Fatal exception in thread 
Thread[SSTableBatchOpen:1,5,main]
[junit] java.lang.RuntimeException: java.io.FileNotFoundException: 
build/test/cassandra/data/Keyspace1/Indexed2-2b51da1adbcd11e3ac729b2001e5c823/Keyspace1-Indexed2.birthdate_index-ka-1-Index.db
 (No such file or directory)
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:102)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:90)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.buildSummary(SSTableReader.java:766)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:746) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:708) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:391) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:293) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:427) 
~[main/:na]
[junit] at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_55]
[junit] at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_55]
[junit] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_55]
[junit] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_55]
[junit] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
[junit] Caused by: java.io.FileNotFoundException: 
build/test/cassandra/data/Keyspace1/Indexed2-2b51da1adbcd11e3ac729b2001e5c823/Keyspace1-Indexed2.birthdate_index-ka-1-Index.db
 (No such file or directory)
[junit] at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_55]
[junit] at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241) 
~[na:1.7.0_55]
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:98)
 ~[main/:na]
[junit] ... 12 common frames omitted
[junit] ERROR 01:06:46 Fatal exception in thread 
Thread[SSTableBatchOpen:1,5,main]
[junit] java.lang.RuntimeException: java.io.FileNotFoundException: 
build/test/cassandra/data/Keyspace1/Indexed2-2b51da1adbcd11e3ac729b2001e5c823/Keyspace1-Indexed2.birthdate_index-ka-1-Index.db
 (No such file or directory)
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:102)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:90)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.buildSummary(SSTableReader.java:766)
 ~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:746) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:708) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:391) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:293) 
~[main/:na]
[junit] at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:427) 
~[main/:na]
[junit] at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_55]
[junit] at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_55]
[junit] at 

[jira] [Updated] (CASSANDRA-7201) Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit tests on 2.1

2014-05-15 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7201:


Attachment: 7201.txt

Counters store data in their value() context written in big endian format, so 
this was breaking counter context processing when they were read back from 
native cells. Rather than convert all native byte buffers to big endian, I've 
attached a patch that swaps the endianness of counter value only.
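
To illustrate the kind of swap described (this is a sketch, not the attached 
7201.txt): reverse the bytes of the 8-byte counter value when it comes out of a 
native-order (little-endian on x86) buffer, so the counter context math still sees 
big-endian data:

{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustration only, not the attached patch.
public final class CounterEndianExample
{
    static long readCounterValue(ByteBuffer buffer, int offset)
    {
        long raw = buffer.getLong(offset);
        // Counter values are written big-endian; swap if the buffer is native/little-endian.
        return buffer.order() == ByteOrder.BIG_ENDIAN ? raw : Long.reverseBytes(raw);
    }

    public static void main(String[] args)
    {
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(0, Long.reverseBytes(42L)); // store 42 as big-endian bytes
        System.out.println(readCounterValue(buf, 0)); // 42
    }
}
{code}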

 Regression: ColumnFamilyStoreTest, NativeCellTest, SSTableMetadataTest unit 
 tests on 2.1
 

 Key: CASSANDRA-7201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7201
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 rc1

 Attachments: 7201.txt


 http://cassci.datastax.com/job/cassandra-2.1_utest/252/testReport/
 {noformat}
 REGRESSION:  
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF
 Error Message:
 null
 Stack Trace:
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOnUUIDTypeSCF(ColumnFamilyStoreTest.java:992)
 REGRESSION:  org.apache.cassandra.db.NativeCellTest.testCells
 Error Message:
 null
 Stack Trace:
 java.lang.IllegalArgumentException
 at java.nio.Buffer.position(Buffer.java:236)
 at 
 org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:659)
 at 
 org.apache.cassandra.db.NativeCounterCell.updateDigest(NativeCounterCell.java:139)
 at org.apache.cassandra.db.NativeCellTest.test(NativeCellTest.java:148)
 at 
 org.apache.cassandra.db.NativeCellTest.testCells(NativeCellTest.java:132)
 REGRESSION:  
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking
 Error Message:
 null
 Stack Trace:
 junit.framework.AssertionFailedError: 
 at 
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testLegacyCounterShardTracking(SSTableMetadataTest.java:306)
 {noformat}
 All 3 tests bisect to:
 {noformat}
 commit 1ac72f637cdfc9876d2d121302061e46ac104bf8
 Author: Jonathan Ellis jbel...@apache.org
 Date:   Thu May 8 16:44:35 2014 -0500
 prefer MemoryUtil.getByteBuffer to JNA Native.getDirectByteBuffer; 
 specify native endian on the former
 patch by bes; reviewed by jbellis for CASSANDRA-6575
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-05-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13994249#comment-13994249
 ] 

Brandon Williams commented on CASSANDRA-6285:
-

I have no issue with doing a) _AND_ b), just to be extra safe, if we know this 
puts the nail in this ticket's coffin.

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 Especially my own table, which I would like to move to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-4718) More-efficient ExecutorService for improved throughput

2014-05-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993928#comment-13993928
 ] 

Jason Brown edited comment on CASSANDRA-4718 at 5/9/14 9:28 PM:


The attached file (belliotsmith_branches.out) contains my results from running the 
latest branches on the same hardware I described above. The lse-batchnetty branch 
is definitely the best performer thus far.


was (Author: jasobrown):
attached file (belliotsmith_branches.out) are my results from running the 
latest branches on the same hardware I described above.

 More-efficient ExecutorService for improved throughput
 --

 Key: CASSANDRA-4718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
  Labels: performance
 Fix For: 2.1.0

 Attachments: 4718-v1.patch, PerThreadQueue.java, aws.svg, 
 backpressure-stress.out.txt, baq vs trunk.png, 
 belliotsmith_branches-stress.out.txt, jason_read.svg, jason_read_latency.svg, 
 jason_write.svg, op costs of various queues.ods, stress op rate with various 
 queues.ods, v1-stress.out


 Currently all our execution stages dequeue tasks one at a time.  This can 
 result in contention between producers and consumers (although we do our best 
 to minimize this by using LinkedBlockingQueue).
 One approach to mitigating this would be to make consumer threads do more 
 work in bulk instead of just one task per dequeue.  (Producer threads tend 
 to be single-task oriented by nature, so I don't see an equivalent 
 opportunity there.)
 BlockingQueue has a drainTo(collection, int) method that would be perfect for 
 this.  However, no ExecutorService in the jdk supports using drainTo, nor 
 could I google one.
 What I would like to do here is create just such a beast and wire it into (at 
 least) the write and read stages.  (Other possible candidates for such an 
 optimization, such as the CommitLog and OutboundTCPConnection, are not 
 ExecutorService-based and will need to be one-offs.)
 AbstractExecutorService may be useful.  The implementations of 
 ICommitLogExecutorService may also be useful. (Despite the name these are not 
 actual ExecutorServices, although they share the most important properties of 
 one.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/6] git commit: Fix pig tests.

2014-05-15 Thread brandonwilliams
Fix pig tests.

Patch by Alex Liu, reviewed by brandonwilliams for CASSANDRA-6877


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b927f790
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b927f790
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b927f790

Branch: refs/heads/trunk
Commit: b927f790af0d20fe8d2453d50ecce25bb6c6e4d0
Parents: 2092da0
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 14 14:32:45 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 14 14:32:45 2014 -0500

--
 test/pig/org/apache/cassandra/pig/CqlTableTest.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b927f790/test/pig/org/apache/cassandra/pig/CqlTableTest.java
--
diff --git a/test/pig/org/apache/cassandra/pig/CqlTableTest.java 
b/test/pig/org/apache/cassandra/pig/CqlTableTest.java
index 3bbc3d1..15d49f2 100644
--- a/test/pig/org/apache/cassandra/pig/CqlTableTest.java
+++ b/test/pig/org/apache/cassandra/pig/CqlTableTest.java
@@ -177,18 +177,18 @@ public class CqlTableTest extends PigTestBase
         pig.registerQuery("STORE recs INTO 'cql://cql3ks/collectiontable?" + defaultParameters + "output_query=update+cql3ks.collectiontable+set+n+%3D+%3F' USING CqlStorage();");
         pig.executeBatch();
 
-        //(book2,((key2, value2),(m,mm),(n,nn)))
-        //(book3,((key3, value3),(m,mm),(n,nn)))
-        //(book4,((key4, value4),(m,mm),(n,nn)))
-        //(book1,((key1, value1),(m,mm),(n,nn)))
+        //(book2,((m,mm),(n,nn)))
+        //(book3,((m,mm),(n,nn)))
+        //(book4,((m,mm),(n,nn)))
+        //(book1,((m,mm),(n,nn)))
         pig.registerQuery("result= LOAD 'cql://cql3ks/collectiontable?" + defaultParameters + "' USING CqlStorage();");
         Iterator<Tuple> it = pig.openIterator("result");
         while (it.hasNext()) {
             Tuple t = it.next();
             Tuple t1 = (Tuple) t.get(1);
-            Assert.assertEquals(t1.size(), 3);
-            Tuple element1 = (Tuple) t1.get(1);
-            Tuple element2 = (Tuple) t1.get(2);
+            Assert.assertEquals(t1.size(), 2);
+            Tuple element1 = (Tuple) t1.get(0);
+            Tuple element2 = (Tuple) t1.get(1);
             Assert.assertEquals(element1.get(0), "m");
             Assert.assertEquals(element1.get(1), "mm");
             Assert.assertEquals(element2.get(0), "n");



[jira] [Updated] (CASSANDRA-7225) > is >= and < is <= in CQL

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7225:
--

Issue Type: Bug  (was: Improvement)

That is not how it should work.  Can you give steps to reproduce?

 > is >= and < is <= in CQL
 --

 Key: CASSANDRA-7225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7225
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp

 Just a small line of text in cqlsh help command indicates that > is >= and 
 < is <= in CQL.
 This is confusing to many people (including me :) ) because I did not expect 
 > to return the equals portion.
 Please allow distinct behaviours for >, >=, < and <= in CQL queries. Maybe in 
 combination with CASSANDRA-5184 and/or CASSANDRA-4914 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7231) Support more concurrent requests per native transport connection

2014-05-15 Thread Benedict (JIRA)
Benedict created CASSANDRA-7231:
---

 Summary: Support more concurrent requests per native transport 
connection
 Key: CASSANDRA-7231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Priority: Minor
 Fix For: 2.1.0


Right now we only support 127 concurrent requests against a given native 
transport connection. This causes us to waste file handles opening multiple 
connections, increases driver complexity and dilutes writes across multiple 
connections so that batching cannot easily be performed.

I propose raising this limit substantially, to somewhere in the region of 
16-64K, and that this is a good time to do it since we're already bumping the 
protocol version.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access

2014-05-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6993:
---

Labels: Windows  (was: )

 Windows: remove mmap'ed I/O for index files and force standard file access
 --

 Key: CASSANDRA-6993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6993
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 6993_v1.txt, 6993_v2.txt


 Memory-mapped I/O on Windows causes issues with hard-links; we're unable to 
 delete hard-links to open files with memory-mapped segments even using nio.  
 We'll need to push for close to performance parity between mmap'ed I/O and 
 buffered going forward as the buffered / compressed path offers other 
 benefits.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7198) CqlPagingRecordReader throws IllegalStateException

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7198:
--

Reviewer: Dave Brosius
Assignee: Brent Theisen

 CqlPagingRecordReader throws IllegalStateException
 --

 Key: CASSANDRA-7198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7198
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Spark with Calliope EA against Cassandra 2.0.7
Reporter: Brent Theisen
Assignee: Brent Theisen
Priority: Trivial
 Fix For: 2.0.9

 Attachments: trunk-7198-2.txt


 Getting the following exception when running a Spark job that does *not* 
 specify cassandra.input.page.row.size:
 {code}
 14/05/08 14:30:43 ERROR executor.Executor: Exception in task ID 12
 java.lang.IllegalStateException: Optional.get() cannot be called on an absent 
 value
 at com.google.common.base.Absent.get(Absent.java:47)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:120)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD$$anon$1.init(Cql3CassandraRDD.scala:65)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD.compute(Cql3CassandraRDD.scala:53)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 14/05/08 14:30:43 ERROR executor.Executor: Exception in task ID 21
 java.lang.IllegalStateException: Optional.get() cannot be called on an absent 
 value
 at com.google.common.base.Absent.get(Absent.java:47)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:120)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD$$anon$1.init(Cql3CassandraRDD.scala:65)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD.compute(Cql3CassandraRDD.scala:53)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The reason is that CqlPagingRecordReader catches the wrong exception type. Patch 
 attached.
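
 For context, Guava's {{Optional.get()}} throws {{IllegalStateException}} when the value is absent (as the trace above shows via {{com.google.common.base.Absent.get}}), not {{NoSuchElementException}}, so catching the wrong type misses it. A small sketch of the defensive alternative; the default value and method name below are illustrative, not taken from the patch:

{code}
import com.google.common.base.Optional;

public class PageRowSizeSketch
{
    // Illustrative default only; not the value used by CqlPagingRecordReader.
    private static final int DEFAULT_PAGE_ROW_SIZE = 1000;

    // Reading the optional defensively avoids depending on which exception
    // the absent case throws (Guava throws IllegalStateException, not
    // NoSuchElementException as java.util iterators do).
    static int resolvePageRowSize(Optional<Integer> configured)
    {
        return configured.isPresent() ? configured.get() : DEFAULT_PAGE_ROW_SIZE;
    }

    public static void main(String[] args)
    {
        System.out.println(resolvePageRowSize(Optional.<Integer>absent())); // 1000
        System.out.println(resolvePageRowSize(Optional.of(5000)));          // 5000
    }
}
{code}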



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7177) Starting threads in the OutboundTcpConnectionPool constructor causes race conditions

2014-05-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997766#comment-13997766
 ] 

Jason Brown commented on CASSANDRA-7177:


Forgot to add, but I successfully merged everything up on May 7 after resolving 
another merge conflict.

 Starting threads in the OutboundTcpConnectionPool constructor causes race 
 conditions
 

 Key: CASSANDRA-7177
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7177
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Sergio Bossa
Assignee: Sergio Bossa
 Fix For: 2.0.9, 2.1.0

 Attachments: CASSANDRA-7177-v2.patch, CASSANDRA-7177.patch


 The OutboundTcpConnectionPool starts connection threads in its constructor, 
 causing race conditions when MessagingService#getConnectionPool is 
 concurrently called for the first time for a given address.
 I.e., here's one of the races:
 {noformat}
  WARN 12:49:03,182 Error processing 
 org.apache.cassandra.metrics:type=Connection,scope=127.0.0.1,name=CommandPendingTasks
 javax.management.InstanceAlreadyExistsException: 
 org.apache.cassandra.metrics:type=Connection,scope=127.0.0.1,name=CommandPendingTasks
   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
   at 
 com.yammer.metrics.reporting.JmxReporter.registerBean(JmxReporter.java:464)
   at 
 com.yammer.metrics.reporting.JmxReporter.processGauge(JmxReporter.java:438)
   at 
 com.yammer.metrics.reporting.JmxReporter.processGauge(JmxReporter.java:16)
   at com.yammer.metrics.core.Gauge.processWith(Gauge.java:28)
   at 
 com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
   at 
 com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
   at 
 com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
   at 
 com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
   at com.yammer.metrics.Metrics.newGauge(Metrics.java:70)
   at 
 org.apache.cassandra.metrics.ConnectionMetrics.init(ConnectionMetrics.java:71)
   at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.init(OutboundTcpConnectionPool.java:55)
   at 
 org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:498)
 {noformat}
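
 The usual way out of this class of race is to keep the constructor side-effect free and only start threads / register MBeans on the instance that wins publication. A sketch of that pattern follows; the class and method names are placeholders, not the code from the attached patch:

{code}
import java.net.InetAddress;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ConnectionPoolRegistrySketch
{
    static class Pool
    {
        Pool(InetAddress peer) { /* allocate state only: no threads, no MBean registration */ }
        void start()           { /* register metrics MBeans and start connection threads */ }
    }

    private final ConcurrentMap<InetAddress, Pool> pools = new ConcurrentHashMap<>();

    Pool getConnectionPool(InetAddress peer)
    {
        Pool pool = pools.get(peer);
        if (pool != null)
            return pool;

        Pool candidate = new Pool(peer);
        Pool raced = pools.putIfAbsent(peer, candidate);
        if (raced != null)
            return raced;       // lost the race; the discarded candidate did nothing irreversible

        candidate.start();      // only the published instance touches JMX and spawns threads
        return candidate;
    }
}
{code}

 A caller can still observe the freshly published pool before start() completes, so the connections themselves need to tolerate a late start; the sketch only shows the publication side of the fix.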



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: Work around netty offheap capacity check bug

2014-05-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 05a54e0ce -> 11c8a2270


Work around netty offheap capacity check bug

patch by tjake; reviewed by Mikhail Stepura for CASSANDRA-7196


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e3dd88a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e3dd88a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e3dd88a8

Branch: refs/heads/trunk
Commit: e3dd88a8cd49a86ef37b9c4204e87bf8e47aefb7
Parents: 65a4626
Author: Jake Luciani j...@apache.org
Authored: Fri May 9 16:39:48 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri May 9 16:39:48 2014 -0400

--
 src/java/org/apache/cassandra/transport/CBUtil.java | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e3dd88a8/src/java/org/apache/cassandra/transport/CBUtil.java
--
diff --git a/src/java/org/apache/cassandra/transport/CBUtil.java 
b/src/java/org/apache/cassandra/transport/CBUtil.java
index e6ba029..6ad4682 100644
--- a/src/java/org/apache/cassandra/transport/CBUtil.java
+++ b/src/java/org/apache/cassandra/transport/CBUtil.java
@@ -330,8 +330,11 @@ public abstract class CBUtil
 return;
 }
 
-cb.writeInt(bytes.remaining());
-cb.writeBytes(bytes.duplicate());
+int remaining = bytes.remaining();
+cb.writeInt(remaining);
+
+if (remaining > 0)
+cb.writeBytes(bytes.duplicate());
 }
 
 public static int sizeOfValue(byte[] bytes)



[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-05-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993857#comment-13993857
 ] 

Brandon Williams commented on CASSANDRA-6285:
-

The line in question: 
https://github.com/xedin/disruptor_thrift_server/commit/77d6715af0eeba4c52f42fa6ba6549c8ae52ffa7#diff-18c889f19dc9fbeb73af99dcff152b6eR421

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  >= current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 In particular, I would like to move my own table to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7042) Disk space growth until restart

2014-05-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7042:
---

Description: 
Cassandra will constantly eat disk space; not sure what's causing it. The only 
thing that seems to fix it is a restart of Cassandra. This happens about every 
3-5 hrs; we will grow from about 350GB to 650GB with no end in sight. Once we 
restart Cassandra it usually all clears itself up and disks return to normal 
for a while, then something triggers it and it starts climbing again. Sometimes 
when we restart, compactions pending skyrocket, and if we restart a second time 
the compactions pending drop off back to a normal level. One other thing to 
note is the space is not freed until Cassandra starts back up and not when 
shut down.

I will get a clean log of before and after restarting next time it happens and 
post it.

Here is a common ERROR in our logs that might be related

{noformat}
ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
(No such file or directory)
at 
org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:194)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:258)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.FileNotFoundException: 
/local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
(No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.init(Unknown Source)
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.util.ThrottledReader.init(ThrottledReader.java:35)
at 
org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:49)
... 17 more
{noformat}


  was:
Cassandra will constantly eat disk space; not sure what's causing it. The only 
thing that seems to fix it is a restart of Cassandra. This happens about every 
3-5 hrs; we will grow from about 350GB to 650GB with no end in sight. Once we 
restart Cassandra it usually all clears itself up and disks return to normal 
for a while, then something triggers it and it starts climbing again. Sometimes 
when we restart, compactions pending skyrocket, and if we restart a second time 
the compactions pending drop off back to a normal level. One other thing to 
note is the space is not freed until Cassandra starts back up and not when 
shut down.

I will get a clean log of before and after restarting next time it happens and 
post it.

Here is a common ERROR in our logs that might be related

ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
(No such file or directory)
at 
org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67)
at 

[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7b3deee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7b3deee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7b3deee

Branch: refs/heads/cassandra-2.1
Commit: e7b3deee648ed2703a0a207e37772c36297bf54c
Parents: a680f72 b927f79
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 14 14:33:29 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 14 14:33:29 2014 -0500

--
 test/pig/org/apache/cassandra/pig/CqlTableTest.java | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)
--




[jira] [Commented] (CASSANDRA-7144) CassandraDaemon RowMutation exception

2014-05-15 Thread Maxime Lamothe-Brassard (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13992841#comment-13992841
 ] 

Maxime Lamothe-Brassard commented on CASSANDRA-7144:


I was not using prepared statements. I was doing INSERT and SELECT, no DELETE.

I have some more info now. As I said, I rebuilt the data on the node and it 
eliminated the problem. However, a day later, I found myself killing my 
ingestor (doing lots of INSERT); it's a python script using the new cassandra 
python-driver. When I did that I got the exception above. Thinking it was a 
runtime bug, I just kept going. Then the hinted-handoff started timing out on 
that box, so I restarted cassandra. From that point on, I would get the same 
exception without ever killing ingestors, at random intervals. It seems as if 
killing the script during a query ended up sending data to cassandra that made 
it corrupt something on disk, and that from that point on, whenever it reached 
that part of the data on disk (I use that liberally, I just mean NOT directly 
from the script doing the ingestion), it would throw the exact same exception, 
leading to the timeouts again. Restarting the cassandra node did nothing; I 
had to rebuild the data again. So now I'm very paranoid about killing my 
ingestion.

The ingestion uses UNLOGGED BATCH for some of the data for performance as well 
as normal INSERT.

 CassandraDaemon RowMutation exception
 -

 Key: CASSANDRA-7144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7144
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 w/ Oracle JVM, 5 nodes cluster. Nodes 2GB / 
 2 Cores in DigitalOcean.
Reporter: Maxime Lamothe-Brassard

 First time reporting a bug here, apologies if I'm not posting it in the right 
 space.
 At what seem like random intervals, on random nodes in random situations, I 
 will get the following exception. After this the hinted handoff starts timing 
 out and the node stops participating in the cluster.
 I started seeing these after switching to the Cassandra Python-Driver from 
 the Python-CQL driver.
 {noformat}
 ERROR [WRITE-/10.128.180.108] 2014-05-03 13:45:12,843 CassandraDaemon.java 
 (line 198) Exception in thread Thread[WRITE-/10.128.180.108,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259)
   at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151)
 ERROR [WRITE-/10.128.194.70] 2014-05-03 13:45:12,843 CassandraDaemon.java 
 (line 198) Exception in thread Thread[WRITE-/10.128.194.70,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259)
   at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151)
 ERROR [MutationStage:118] 2014-05-03 13:45:15,048 CassandraDaemon.java (line 
 198) Exception in thread Thread[MutationStage:118,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271)
   at 
 org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259)
   at 
 org.apache.cassandra.utils.FBUtilities.serialize(FBUtilities.java:654)
   at 
 org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:137)
   at 
 org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:908)
   at 
 org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:881)
   at 
 org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1981)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 

[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f3d07ac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f3d07ac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f3d07ac

Branch: refs/heads/cassandra-2.1
Commit: 7f3d07ac02178c3d3012557f4df18fe116e5ec11
Parents: 361ad68 7484bd4
Author: Yuki Morishita yu...@apache.org
Authored: Fri May 9 10:41:36 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri May 9 10:41:36 2014 -0500

--
 .../org/apache/cassandra/streaming/StreamSession.java   | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f3d07ac/src/java/org/apache/cassandra/streaming/StreamSession.java
--



[jira] [Updated] (CASSANDRA-5663) Add server side write batching to the native transport

2014-05-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-5663:
--

Summary: Add server side write batching to the native transport  (was: Add 
write batching for the native protocol)

 Add server side write batching to the native transport
 --

 Key: CASSANDRA-5663
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5663
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 rc1

 Attachments: 5663.txt


 As discussed in CASSANDRA-5422, adding write batching to the native protocol 
 implementation is likely to improve throughput in a number of cases. 
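
 One common shape for this on a Netty-based server is flush coalescing: responses are written to the channel without flushing, and a single flush is issued once the current batch of work has been drained. The handler below is a sketch of that idea under those assumptions, not the patch attached here:

{code}
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

public class CoalescingFlushHandler extends ChannelDuplexHandler
{
    private boolean needsFlush;

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
    {
        ctx.write(msg, promise); // queue in the outbound buffer; no socket write yet
        needsFlush = true;
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx)
    {
        if (needsFlush)
        {
            needsFlush = false;
            ctx.flush();         // one flush covers every response written for this read batch
        }
        ctx.fireChannelReadComplete();
    }
}
{code}

 Tying the flush to the read batch is only one possible window; a small timer or a queue-depth threshold would serve the same purpose.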



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Validate statements inside batch

2014-05-15 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 19ff1932c - 2e61cd5e0


Validate statements inside batch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e61cd5e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e61cd5e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e61cd5e

Branch: refs/heads/cassandra-2.0
Commit: 2e61cd5e07f3983d262ec6bba2aea329e28c5fdc
Parents: 19ff193
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed May 7 10:53:09 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed May 7 10:53:09 2014 +0200

--
 .../org/apache/cassandra/cql3/statements/BatchStatement.java| 2 ++
 .../apache/cassandra/cql3/statements/ModificationStatement.java | 5 +
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e61cd5e/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index c03548b..6a1201b 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -128,6 +128,8 @@ public class BatchStatement implements CQLStatement, 
MeasurableForPreparedCache
 {
 if (timestampSet && statement.isTimestampSet())
 throw new InvalidRequestException("Timestamp must be set 
either on BATCH or individual statements");
+
+statement.validate(state);
 }
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e61cd5e/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 526a26c..f8c4042 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -155,7 +155,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 public void validate(ClientState state) throws InvalidRequestException
 {
 if (hasConditions() && attrs.isTimestampSet())
-throw new InvalidRequestException("Custom timestamps are not 
allowed when conditions are used");
+throw new InvalidRequestException("Cannot provide custom timestamp 
for conditional update");
 
 if (isCounter())
 {
@@ -765,9 +765,6 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 if (stmt.isCounter())
 throw new InvalidRequestException("Conditional updates are 
not supported on counter tables");
 
-if (attrs.timestamp != null)
-throw new InvalidRequestException("Cannot provide custom 
timestamp for conditional update");
-
 if (ifNotExists)
 {
 // To have both 'IF NOT EXISTS' and some other conditions 
doesn't make sense.



[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-15 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac1a9cd6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac1a9cd6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac1a9cd6

Branch: refs/heads/trunk
Commit: ac1a9cd63ab06289b1bd7d8cce706a51991eef0c
Parents: 17afc08 7f3d07a
Author: Yuki Morishita yu...@apache.org
Authored: Fri May 9 10:41:45 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri May 9 10:41:45 2014 -0500

--
 .../org/apache/cassandra/streaming/StreamSession.java   | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)
--




[jira] [Commented] (CASSANDRA-7196) Select query with IN restriction times out in CQLSH

2014-05-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993744#comment-13993744
 ] 

T Jake Luciani commented on CASSANDRA-7196:
---

Looks like it's caused by the offheap pooled netty allocator

{code}
public abstract class CBUtil
{
public static final ByteBufAllocator allocator = new 
PooledByteBufAllocator(true);
{code}

If you change true to false it starts working again
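
For reference, the boolean in that constructor is Netty's preferDirect flag, so flipping it swaps pooled off-heap buffers for pooled heap buffers; that isolates the problem to the direct-buffer path rather than fixing it. A sketch of the obvious configurations (the constants and class name are mine, not the committed change):

{code}
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorChoices
{
    // pooled, prefer off-heap (direct) buffers: the configuration that hits the bug
    static final ByteBufAllocator POOLED_DIRECT = new PooledByteBufAllocator(true);

    // pooled, prefer heap buffers: the "change true to false" workaround above
    static final ByteBufAllocator POOLED_HEAP = new PooledByteBufAllocator(false);

    // unpooled heap buffers: slowest but simplest, handy when bisecting allocator issues
    static final ByteBufAllocator UNPOOLED_HEAP = new UnpooledByteBufAllocator(false);
}
{code}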

 Select query with IN restriction times out in CQLSH
 ---

 Key: CASSANDRA-7196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7196
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Assignee: T Jake Luciani
  Labels: regression
 Fix For: 2.1 rc1

 Attachments: 7196-v2.txt, 7196-v3.txt, 7196.txt, init_bug.cql


 I've noticed that 
 {{pylib.cqlshlib.test.test_cqlsh_output.TestCqlshOutput#test_numeric_output}} 
 fails on the current 2.1 branch, which wasn't the case before.
 Here are the steps to reproduce. I'm attaching the script to populate schema.
 {code}
 mstepura-mac:cassandra mikhail$ bin/cqlsh -f path_to/init_bug.cql
 mstepura-mac:cassandra mikhail$ bin/cqlsh
 Connected to Test Cluster at 127.0.0.1:9042.
 [cqlsh 5.0.0 | Cassandra 2.1.0-beta2-SNAPSHOT | CQL spec 3.1.6 | Native 
 protocol v2]
 Use HELP for help.
 cqlsh> use test;
 cqlsh:test> select intcol, bigintcol, varintcol from has_all_types where num 
 in (0, 1, 2, 3, 4);
 errors={}, last_host=127.0.0.1
 cqlsh:test>
 {code}
 That works perfectly on 2.0 branch. And there are no errors in the logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7143) shuffle broken on 2.0

2014-05-15 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13994030#comment-13994030
 ] 

Russ Hatch commented on CASSANDRA-7143:
---

Duplicated the bug by running shuffle on the commit before; verified the fix by 
running shuffle on the fix commit.

I'm updating dtests so they will watch for the log message above going forward 
when they run shuffle.

 shuffle broken on 2.0
 -

 Key: CASSANDRA-7143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7143
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
  Labels: qa-resolved
 Fix For: 2.0.8

 Attachments: 7143.txt


 In 1.2, shuffle works correctly, creating the list of relocations and then 
 following it, pausing correctly as needed:
 {noformat}
  WARN 20:45:58,153 Pausing until token count stabilizes (target=3, actual=4)
 {noformat}
 However on 2.0, it relocates all the ranges in one shot and never deletes 
 entries from the list of tokens to relocate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: Make batchlog replay asynchronous

2014-05-15 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1c86f6688 - eea5c3748


Make batchlog replay asynchronous

patch by Oleg Anastasyev; reviewed by Aleksey Yeschenko for
CASSANDRA-6134


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92c38c0e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92c38c0e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92c38c0e

Branch: refs/heads/trunk
Commit: 92c38c0e6a5e23bdb77c23073a28f118a9f23add
Parents: e7b3dee
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu May 15 01:13:09 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu May 15 01:13:09 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/BatchlogManager.java| 287 ---
 2 files changed, 188 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92c38c0e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3dd47a1..d43a0f5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Fix repair hang when given CF does not exist (CASSANDRA-7189)
  * Allow c* to be shutdown in an embedded mode (CASSANDRA-5635)
  * Add server side batching to native transport (CASSANDRA-5663)
+ * Make batchlog replay asynchronous (CASSANDRA-6134)
 Merged from 2.0:
  * (Hadoop) Close java driver Cluster in CQLRR.close (CASSANDRA-7228)
  * Warn when 'USING TIMESTAMP' is used on a CAS BATCH (CASSANDRA-7067)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92c38c0e/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 3ffc7a7..1a441f6 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -48,6 +48,8 @@ import org.apache.cassandra.gms.FailureDetector;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.io.util.DataOutputBuffer;
+import org.apache.cassandra.net.MessageIn;
+import org.apache.cassandra.net.MessageOut;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageProxy;
 import org.apache.cassandra.service.StorageService;
@@ -193,162 +195,247 @@ public class BatchlogManager implements 
BatchlogManagerMBean
  logger.debug("Finished replayAllFailedBatches");
 }
 
-// returns the UUID of the last seen batch
+private void deleteBatch(UUID id)
+{
+Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, 
UUIDType.instance.decompose(id));
+mutation.delete(SystemKeyspace.BATCHLOG_CF, 
FBUtilities.timestampMicros());
+mutation.apply();
+}
+
 private UUID processBatchlogPage(UntypedResultSet page, RateLimiter 
rateLimiter)
 {
 UUID id = null;
+ArrayList<Batch> batches = new ArrayList<>(page.size());
+
+// Sending out batches for replay without waiting for them, so that 
one stuck batch doesn't affect others
 for (UntypedResultSet.Row row : page)
 {
 id = row.getUUID("id");
 long writtenAt = row.getLong("written_at");
-int version = row.has("version") ? row.getInt("version") : 
MessagingService.VERSION_12;
 // enough time for the actual write + batchlog entry mutation 
delivery (two separate requests).
 long timeout = DatabaseDescriptor.getWriteRpcTimeout() * 2; // 
enough time for the actual write + BM removal mutation
 if (System.currentTimeMillis() < writtenAt + timeout)
 continue; // not ready to replay yet, might still get a 
deletion.
-replayBatch(id, row.getBytes("data"), writtenAt, version, 
rateLimiter);
+
+int version = row.has("version") ? row.getInt("version") : 
MessagingService.VERSION_12;
+Batch batch = new Batch(id, writtenAt, row.getBytes("data"), 
version);
+try
+{
+if (batch.replay(rateLimiter) > 0)
+{
+batches.add(batch);
+}
+else
+{
+deleteBatch(id); // no write mutations were sent (either 
expired or all CFs involved truncated).
+totalBatchesReplayed.incrementAndGet();
+}
+}
+catch (IOException e)
+{
+logger.warn("Skipped batch replay of {} due to {}", id, e);
+deleteBatch(id);
+}
+}
+
+// now waiting for all batches 

[jira] [Updated] (CASSANDRA-7198) CqlPagingRecordReader throws IllegalStateException

2014-05-15 Thread Brent Theisen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brent Theisen updated CASSANDRA-7198:
-

Attachment: trunk-7198-2.txt

 CqlPagingRecordReader throws IllegalStateException
 --

 Key: CASSANDRA-7198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7198
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
 Environment: Spark with Calliope EA against Cassandra 2.0.7
Reporter: Brent Theisen
Priority: Trivial
 Fix For: 2.0.9

 Attachments: trunk-7198-2.txt


 Getting the following exception when running a Spark job that does *not* 
 specify cassandra.input.page.row.size:
 {code}
 14/05/08 14:30:43 ERROR executor.Executor: Exception in task ID 12
 java.lang.IllegalStateException: Optional.get() cannot be called on an absent 
 value
 at com.google.common.base.Absent.get(Absent.java:47)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:120)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD$$anon$1.init(Cql3CassandraRDD.scala:65)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD.compute(Cql3CassandraRDD.scala:53)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 14/05/08 14:30:43 ERROR executor.Executor: Exception in task ID 21
 java.lang.IllegalStateException: Optional.get() cannot be called on an absent 
 value
 at com.google.common.base.Absent.get(Absent.java:47)
 at 
 org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:120)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD$$anon$1.init(Cql3CassandraRDD.scala:65)
 at 
 com.tuplejump.calliope.cql3.Cql3CassandraRDD.compute(Cql3CassandraRDD.scala:53)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The reason is that CqlPagingRecordReader catches the wrong exception type. Patch 
 attached.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7196) Select query with IN restriction times out in CQLSH

2014-05-15 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993832#comment-13993832
 ] 

Mikhail Stepura commented on CASSANDRA-7196:


v3 works as expected. +1

 Select query with IN restriction times out in CQLSH
 ---

 Key: CASSANDRA-7196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7196
 Project: Cassandra
  Issue Type: Bug
Reporter: Mikhail Stepura
Assignee: T Jake Luciani
  Labels: regression
 Fix For: 2.1 rc1

 Attachments: 7196-v2.txt, 7196-v3.txt, 7196.txt, init_bug.cql


 I've noticed that 
 {{pylib.cqlshlib.test.test_cqlsh_output.TestCqlshOutput#test_numeric_output}} 
 fails on the current 2.1 branch, which wasn't the case before.
 Here are the steps to reproduce. I'm attaching the script to populate schema.
 {code}
 mstepura-mac:cassandra mikhail$ bin/cqlsh -f path_to/init_bug.cql
 mstepura-mac:cassandra mikhail$ bin/cqlsh
 Connected to Test Cluster at 127.0.0.1:9042.
 [cqlsh 5.0.0 | Cassandra 2.1.0-beta2-SNAPSHOT | CQL spec 3.1.6 | Native 
 protocol v2]
 Use HELP for help.
 cqlsh> use test;
 cqlsh:test> select intcol, bigintcol, varintcol from has_all_types where num 
 in (0, 1, 2, 3, 4);
 errors={}, last_host=127.0.0.1
 cqlsh:test>
 {code}
 That works perfectly on 2.0 branch. And there are no errors in the logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-05-15 Thread mishail
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70d18cd3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70d18cd3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70d18cd3

Branch: refs/heads/trunk
Commit: 70d18cd3e6299c74812575c0d5abe68a8a6e0e58
Parents: 211d81c b2dd6a7
Author: Mikhail Stepura mish...@apache.org
Authored: Thu May 8 16:21:13 2014 -0700
Committer: Mikhail Stepura mish...@apache.org
Committed: Thu May 8 16:21:13 2014 -0700

--
 pylib/cqlshlib/test/cassconnect.py   |  4 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 43 +++
 3 files changed, 24 insertions(+), 25 deletions(-)
--




[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-05-15 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6be62c2c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6be62c2c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6be62c2c

Branch: refs/heads/trunk
Commit: 6be62c2c46de06170dd4a10327ecab4ab7a41d78
Parents: 92c38c0 569177f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 14 17:36:48 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 14 17:36:48 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 32 ++--
 2 files changed, 30 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6be62c2c/CHANGES.txt
--
diff --cc CHANGES.txt
index d43a0f5,285efd1..450e337
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,7 +1,21 @@@
 -2.0.9
 +2.1.0-rc1
 + * Add PowerShell Windows launch scripts (CASSANDRA-7001)
 + * Make commitlog archive+restore more robust (CASSANDRA-6974)
 + * Fix marking commitlogsegments clean (CASSANDRA-6959)
 + * Add snapshot manifest describing files included (CASSANDRA-6326)
 + * Parallel streaming for sstableloader (CASSANDRA-3668)
 + * Fix bugs in supercolumns handling (CASSANDRA-7138)
  + * Fix ClassCastException on composite dense tables (CASSANDRA-7112)
 + * Cleanup and optimize collation and slice iterators (CASSANDRA-7107)
 + * Upgrade NBHM lib (CASSANDRA-7128)
 + * Optimize netty server (CASSANDRA-6861)
 + * Fix repair hang when given CF does not exist (CASSANDRA-7189)
 + * Allow c* to be shutdown in an embedded mode (CASSANDRA-5635)
 + * Add server side batching to native transport (CASSANDRA-5663)
 + * Make batchlog replay asynchronous (CASSANDRA-6134)
 +Merged from 2.0:
+  * (Hadoop) support authentication in CqlRecordReader (CASSANDRA-7221)
   * (Hadoop) Close java driver Cluster in CQLRR.close (CASSANDRA-7228)
 - * Fix potential SlabAllocator yield-starvation (CASSANDRA-7133)
   * Warn when 'USING TIMESTAMP' is used on a CAS BATCH (CASSANDRA-7067)
   * Starting threads in OutboundTcpConnectionPool constructor causes race 
conditions (CASSANDRA-7177)
   * return all cpu values from BackgroundActivityMonitor.readAndCompute 
(CASSANDRA-7183)



[2/3] git commit: More aggressive waiting in KeyCacheTest

2014-05-15 Thread brandonwilliams
More aggressive waiting in KeyCacheTest

Patch by Benedict, reviewed by brandonwilliams for CASSANDRA-7167


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/361ad681
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/361ad681
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/361ad681

Branch: refs/heads/trunk
Commit: 361ad681ecdde12e299026ccee5e17c184f943d8
Parents: 259e17d
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri May 9 10:12:16 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri May 9 10:12:16 2014 -0500

--
 test/unit/org/apache/cassandra/db/KeyCacheTest.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/361ad681/test/unit/org/apache/cassandra/db/KeyCacheTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/KeyCacheTest.java 
b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
index e6745a1..c0560ab 100644
--- a/test/unit/org/apache/cassandra/db/KeyCacheTest.java
+++ b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
@@ -22,7 +22,9 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
 
+import com.google.common.util.concurrent.Uninterruptibles;
 import org.junit.AfterClass;
 import org.junit.Test;
 
@@ -162,7 +164,8 @@ public class KeyCacheTest extends SchemaLoader
 for (SSTableReader reader : readers)
 reader.releaseReference();
 
-while (StorageService.tasks.getActiveCount() > 0);
+Uninterruptibles.sleepUninterruptibly(10, TimeUnit.MILLISECONDS);;
+while (StorageService.tasks.getActiveCount() + 
StorageService.tasks.getQueue().size() > 0);
 
 // after releasing the reference this should drop to 2
 assertKeyCacheSize(2, KEYSPACE1, COLUMN_FAMILY1);



[jira] [Comment Edited] (CASSANDRA-6962) examine shortening path length post-5202

2014-05-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997782#comment-13997782
 ] 

Joshua McKenzie edited comment on CASSANDRA-6962 at 5/14/14 5:10 PM:
-

CASSANDRA-4110 and the limitations in Schema.java provide us some protection 
but there's really nothing to stop users nesting their cassandra data 250 
characters deep in a path and having things blow up on them regardless of what 
length we limit ourselves to.

On snapshots we'll be using 204 chars worst-case (48 KS, 48 CF, *2 each, 9 for 
snapshots, 3 for slashes) so that doesn't leave us a lot of breathing room on 
path for data_file_directories.  Maybe lowering the NAME_LENGTH in Schema.java 
would be appropriate given CASSANDRA-7136?  Do we have a lot of users rolling 
out 40+ char KS and CF names in general, much less on Windows?


was (Author: joshuamckenzie):
CASSANDRA-4110 and the limitations in Schema.java provide us some protection 
but there's really nothing to stop users nesting their cassandra data 250 
characters deep in a path and having things blow up on them regardless of what 
length we limit ourselves to.

On snapshots we'll be using 204 chars worst-case (48 KS, 48 CF, *2 each, 9 for 
snapshots, 3 for \) so that doesn't leave us a lot of breathing room on path 
for data_file_directories.  Maybe lowering the NAME_LENGTH in Schema.java would 
be appropriate given CASSANDRA-7136?  Do we have a lot of users rolling out 40+ 
char KS and CF names in general, much less on Windows?
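
To make the 204-character worst case cited above explicit (a quick check only; the path layout is described roughly in the comments):

{code}
public class SnapshotPathLengthCheck
{
    public static void main(String[] args)
    {
        int ksName = 48;        // NAME_LENGTH cap on keyspace names in Schema.java
        int cfName = 48;        // same cap on table names
        int snapshotsDir = 9;   // the literal "snapshots" path element
        int separators = 3;     // slashes between the elements

        // keyspace and table names each appear twice: once in the directory
        // component and once in the sstable file name under the snapshot
        int worstCase = 2 * ksName + 2 * cfName + snapshotsDir + separators;
        System.out.println(worstCase); // 204
    }
}
{code}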

 examine shortening path length post-5202
 

 Key: CASSANDRA-6962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6962
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 2.1 rc1

 Attachments: 6962-2.1.txt


 From CASSANDRA-5202 discussion:
 {quote}
 Did we give up on this?
 Could we clean up the redundancy a little by moving the ID into the directory 
 name? e.g., ks/cf-uuid/version-generation-component.db
 I'm worried about path length, which is limited on Windows.
 Edit: to give a specific example, for KS foo Table bar we now have
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/foo-bar-ka-1-Data.db
 I'm proposing
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/ka-1-Data.db
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: Ninja: Adjust cqlsh unit-tests for 2.1

2014-05-15 Thread mishail
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 56136946a - b2dd6a7f6
  refs/heads/trunk 211d81c24 - 70d18cd3e


Ninja: Adjust cqlsh unit-tests for 2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dd6a7f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dd6a7f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dd6a7f

Branch: refs/heads/cassandra-2.1
Commit: b2dd6a7f6c5d3751df9483a557f7ec8d54901e4b
Parents: 5613694
Author: Mikhail Stepura mish...@apache.org
Authored: Thu May 8 15:57:07 2014 -0700
Committer: Mikhail Stepura mish...@apache.org
Committed: Thu May 8 16:20:39 2014 -0700

--
 pylib/cqlshlib/test/cassconnect.py   |  4 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 43 +++
 3 files changed, 24 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dd6a7f/pylib/cqlshlib/test/cassconnect.py
--
diff --git a/pylib/cqlshlib/test/cassconnect.py 
b/pylib/cqlshlib/test/cassconnect.py
index bf62c2f..6ef6eb9 100644
--- a/pylib/cqlshlib/test/cassconnect.py
+++ b/pylib/cqlshlib/test/cassconnect.py
@@ -26,7 +26,7 @@ test_keyspace_init = os.path.join(rundir, 
'test_keyspace_init.cql')
 
 def get_cassandra_connection(cql_version=None):
 if cql_version is None:
-cql_version = '3.1.5'
+cql_version = '3.1.6'
 conn = cql((TEST_HOST,), TEST_PORT, cql_version=cql_version)
 # until the cql lib does this for us
 conn.cql_version = cql_version
@@ -73,7 +73,7 @@ def execute_cql_file(cursor, fname):
 return execute_cql_commands(cursor, f.read())
 
 def create_test_db():
-with cassandra_cursor(ks=None, cql_version='3.1.5') as c:
+with cassandra_cursor(ks=None, cql_version='3.1.6') as c:
 k = create_test_keyspace(c)
 execute_cql_file(c, test_keyspace_init)
 return k

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dd6a7f/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 0ccfe43..221c6b4 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -92,7 +92,7 @@ class CqlshCompletionCase(BaseTestCase):
 return self.module.CqlRuleSet.replication_strategies
 
 class TestCqlshCompletion(CqlshCompletionCase):
-cqlver = '3.1.5'
+cqlver = '3.1.6'
 module = cqlsh.cql3handling
 
 def test_complete_on_empty_string(self):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dd6a7f/pylib/cqlshlib/test/test_cqlsh_output.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py 
b/pylib/cqlshlib/test/test_cqlsh_output.py
index 212f847..5a2a837 100644
--- a/pylib/cqlshlib/test/test_cqlsh_output.py
+++ b/pylib/cqlshlib/test/test_cqlsh_output.py
@@ -200,7 +200,7 @@ class TestCqlshOutput(BaseTestCase):
 (1 rows)
 
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 q = 'select COUNT(*) FROM twenty_rows_composite_table limit 100;'
 self.assertQueriesGiveColoredOutput((
@@ -216,7 +216,7 @@ class TestCqlshOutput(BaseTestCase):
 (1 rows)
 
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 def test_static_cf_output(self):
 self.assertCqlverQueriesGiveColoredOutput((
@@ -236,7 +236,7 @@ class TestCqlshOutput(BaseTestCase):
 (3 rows)
 
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 self.assertQueriesGiveColoredOutput((
 ('select * from dynamic_columns;', 
@@ -259,14 +259,14 @@ class TestCqlshOutput(BaseTestCase):
 (5 rows)
 
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 def test_empty_cf_output(self):
 self.assertCqlverQueriesGiveColoredOutput((
 ('select * from empty_table;', 
 (0 rows)
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 q = 'select * from has_all_types where num = 999;'
 
@@ -275,7 +275,7 @@ class TestCqlshOutput(BaseTestCase):
 (q, 
 (0 rows)
 ),
-), cqlver="3.1.5")
+), cqlver="3.1.6")
 
 def test_columnless_key_output(self):
 q = "select a from twenty_rows_table where a in ('1', '2', '-9192');"
@@ -295,7 +295,7 @@ class TestCqlshOutput(BaseTestCase):
 (2 rows)
 
   

[jira] [Updated] (CASSANDRA-7197) Dead code in trunk

2014-05-15 Thread Daniel Shelepov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Shelepov updated CASSANDRA-7197:
---

Attachment: trunk-7197.txt

 Dead code in trunk
 --

 Key: CASSANDRA-7197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Daniel Shelepov
Priority: Minor
  Labels: sourcecode
 Attachments: trunk-7197.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 I did some code analysis, and as a byproduct was able to identify some dead 
 code in the form of files that can be safely removed.  After filtering out 
 some false positives, this is what remained:
 gms/IFailureNotification.java
 - was there since FB open-sourced Cassandra; has never been used for 
 anything.  No classes implement the interface, and it's not mentioned 
 anywhere in the codebase.
 service/PendingRangeCalculatorServiceMBean.java 
 - empty MBean used as a base class for PendingRangeCalculatorService, but has 
 not been touched since being introduced several months ago.  NOTE: removing 
 this will require editing PendingRangeCalculatorService to not derive from 
 this anymore.
 db/ColumnFamilyNotDefinedException.java
 - used to be thrown in original FB Cassandra; no longer used anywhere.
 db/context/IContext.java
 - introduced in 2c4ac98c9ffa8ea52da801830c7cdb745ddc28f0 (CASSANDRA-1072); 
 was used extensively then, but no longer used anywhere.
 db/columniterator/SimpleAbstractColumnIterator.java
 - introduced in 48093358fb9022947592813a6aae43db148847ca (CASSANDRA-287); was 
 used then; no longer used anywhere.
 thrift/RequestType.java
 - enum introduced in 72199e23ec9d604449bef87733a32e1da9924437 
 (CASSANDRA-3272); was used then; no longer used anywhere.
 utils/AtomicLongArrayUpdater.java
 - introduced in 22e18f5a348a911f89deed9f9984950de451d28a (CASSANDRA-3578), 
 but has never been used for anything.  Not sure what the original intent 
 might have been.
 utils/DefaultDouble.java
 - introduced in 96588d4f322dfbb1f5ff9328afe4377babfb1d2c (CASSANDRA-1715); 
 was used then; no longer used anywhere.
 utils/LatencyTracker.java
 - introduced in 979a022f896aaa5a799b27a973cd476e5727820e (CASSANDRA-702); was 
 used then; no longer used anywhere.
 utils/SkipNullRepresenter.java
 - introduced in a6777492280ae481392cd4cb4ba613923f84989d (CASSANDRA-1133); was 
 used then; no longer used anywhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7197) Dead code in trunk

2014-05-15 Thread Daniel Shelepov (JIRA)
Daniel Shelepov created CASSANDRA-7197:
--

 Summary: Dead code in trunk
 Key: CASSANDRA-7197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Daniel Shelepov
Priority: Minor
 Attachments: trunk-7197.txt

I did some code analysis, and as a byproduct was able to identify some dead 
code in the form of files that can be safely removed.  After filtering out some 
false positives, this is what remained:

gms/IFailureNotification.java

- was there since FB open-sourced Cassandra; has never been used for anything.  
No classes implement the interface, and it's not mentioned anywhere in the 
codebase.

service/PendingRangeCalculatorServiceMBean.java 

- empty MBean used as a base class for PendingRangeCalculatorService, but has 
not been touched since being introduced several months ago.  NOTE: removing 
this will require editing PendingRangeCalculatorService to not derive from this 
anymore.

db/ColumnFamilyNotDefinedException.java

- used to be thrown in original FB Cassandra; no longer used anywhere.

db/context/IContext.java

- introduced in 2c4ac98c9ffa8ea52da801830c7cdb745ddc28f0 (CASSANDRA-1072); was 
used extensively then, but no longer used anywhere.

db/columniterator/SimpleAbstractColumnIterator.java

- introduced in 48093358fb9022947592813a6aae43db148847ca (CASSANDRA-287); was 
used then; no longer used anywhere.

thrift/RequestType.java

- enum introduced in 72199e23ec9d604449bef87733a32e1da9924437 (CASSANDRA-3272); 
was used then; no longer used anywhere.

utils/AtomicLongArrayUpdater.java

- introduced in 22e18f5a348a911f89deed9f9984950de451d28a (CASSANDRA-3578), but 
has never been used for anything.  Not sure what the original intent might have 
been.

utils/DefaultDouble.java

- introduced in 96588d4f322dfbb1f5ff9328afe4377babfb1d2c (CASSANDRA-1715); was 
used then; no longer used anywhere.

utils/LatencyTracker.java

- introduced in 979a022f896aaa5a799b27a973cd476e5727820e (CASSANDRA-702); was 
used then; no longer used anywhere.

utils/SkipNullRepresenter.java

- introduced in a6777492280ae481392cd4cb4ba613923f84989d (CASSANDRA-1133); was 
used then; no longer used anywhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7185) cqlsh can't tab-complete disabling compaction

2014-05-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993103#comment-13993103
 ] 

Tyler Hobbs commented on CASSANDRA-7185:


Fixed the docs as commit 46f7f84cea0b608da22e1e315c8a05096ce494ac

 cqlsh can't tab-complete disabling compaction
 -

 Key: CASSANDRA-7185
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7185
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Mikhail Stepura
Priority: Trivial
 Fix For: 2.0.9, 2.1 rc1

 Attachments: CASSANDRA-2.0-7185.patch


 cqlsh can't tab-complete the following case where you want to disable 
 compaction:
 {noformat}
 alter table keys with compaction = {'class': 'SizeTieredCompactionStrategy', 
 'enabled': 'false'}
 {noformat}
 Specifically it doesn't know 'enabled' is a valid option.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7197) Dead code in trunk

2014-05-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7197:
--


[~dbrosius] to review

 Dead code in trunk
 --

 Key: CASSANDRA-7197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Daniel Shelepov
Assignee: Daniel Shelepov
Priority: Minor
 Attachments: trunk-7197.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 I did some code analysis, and as a byproduct was able to identify some dead 
 code in the form of files that can be safely removed.  After filtering out 
 some false positives, this is what remained:
 gms/IFailureNotification.java
 - was there since FB open-sourced Cassandra; has never been used for 
 anything.  No classes implement the interface, and it's not mentioned 
 anywhere in the codebase.
 service/PendingRangeCalculatorServiceMBean.java 
 - empty MBean used as a base class for PendingRangeCalculatorService, but has 
 not been touched since being introduced several months ago.  NOTE: removing 
 this will require editing PendingRangeCalculatorService to not derive from 
 this anymore.
 db/ColumnFamilyNotDefinedException.java
 - used to be thrown in original FB Cassandra; no longer used anywhere.
 db/context/IContext.java
 - introduced in 2c4ac98c9ffa8ea52da801830c7cdb745ddc28f0 (CASSANDRA-1072); 
 was used extensively then, but no longer used anywhere.
 db/columniterator/SimpleAbstractColumnIterator.java
 - introduced in 48093358fb9022947592813a6aae43db148847ca (CASSANDRA-287); was 
 used then; no longer used anywhere.
 thrift/RequestType.java
 - enum introduced in 72199e23ec9d604449bef87733a32e1da9924437 
 (CASSANDRA-3272); was used then; no longer used anywhere.
 utils/AtomicLongArrayUpdater.java
 - introduced in 22e18f5a348a911f89deed9f9984950de451d28a (CASSANDRA-3578), 
 but has never been used for anything.  Not sure what the original intent 
 might have been.
 utils/DefaultDouble.java
 - introduced in 96588d4f322dfbb1f5ff9328afe4377babfb1d2c (CASSANDRA-1715); 
 was used then; no longer used anywhere.
 utils/LatencyTracker.java
 - introduced in 979a022f896aaa5a799b27a973cd476e5727820e (CASSANDRA-702); was 
 used then; no longer used anywhere.
 utils/SkipNullRepresenter.java
 - introduced in a6777492280ae481392cd4cb4ba613923f84989d (CASSANDRA-1133); was 
 used then; no longer used anywhere.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6877) pig tests broken

2014-05-15 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998259#comment-13998259
 ] 

Alex Liu commented on CASSANDRA-6877:
-

The failures on the 2.1 branch start from the 
https://github.com/apache/cassandra/commit/362cc05352ec67e707e0ac790732e96a15e63f6b
 commit. 

 pig tests broken
 

 Key: CASSANDRA-6877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6877
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Sam Tunnicliffe
 Fix For: 2.0.9, 2.1 rc1

 Attachments: 0001-Exclude-cobertura-xerces-dependency.patch, 
 0002-Fix-failed-pig-test.patch


 Not sure what happened here, but I get a smorgasbord of errors running the 
 pig tests now, from xml errors in xerces to NotFoundExceptions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/3] git commit: Add snapshot manifest describing files included patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6326

2014-05-15 Thread jbellis
Add snapshot manifest describing files included
patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6326


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce7bf5e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce7bf5e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce7bf5e9

Branch: refs/heads/trunk
Commit: ce7bf5e99405ce08dbd2b4955fd76582c27db403
Parents: 311c276
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 7 16:53:16 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 7 16:53:32 2014 -0500

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++-
 .../org/apache/cassandra/db/Directories.java|  5 +
 .../apache/cassandra/io/sstable/Descriptor.java | 13 +++
 4 files changed, 41 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bf5e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5ecd19d..fc5786b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc1
+ * Add snapshot manifest describing files included (CASSANDRA-6326)
  * Parallel streaming for sstableloader (CASSANDRA-3668)
  * Fix bugs in supercolumns handling (CASSANDRA-7138)
  * Fix ClassCastException on composite dense tables (CASSANDRA-7112)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bf5e9/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 1fdcb73..c5afb25 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -35,6 +35,8 @@ import com.google.common.collect.*;
 import com.google.common.util.concurrent.*;
 import com.google.common.util.concurrent.Futures;
 import com.google.common.util.concurrent.Uninterruptibles;
+import org.apache.cassandra.io.FSWriteError;
+import org.json.simple.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -2139,7 +2141,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 for (ColumnFamilyStore cfs : concatWithIndexes())
 {
 DataTracker.View currentView = cfs.markCurrentViewReferenced();
-
+final JSONArray filesJSONArr = new JSONArray();
 try
 {
 for (SSTableReader ssTable : currentView.sstables)
@@ -2151,9 +2153,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 File snapshotDirectory = 
Directories.getSnapshotDirectory(ssTable.descriptor, snapshotName);
 ssTable.createLinks(snapshotDirectory.getPath()); // hard 
links
+
filesJSONArr.add(ssTable.descriptor.relativeFilenameFor(Component.DATA));
 if (logger.isDebugEnabled())
 logger.debug("Snapshot for {} keyspace data file {} created in {}", keyspace, ssTable.getFilename(), snapshotDirectory);
 }
+
+writeSnapshotManifest(filesJSONArr, snapshotName);
 }
 finally
 {
@@ -2162,6 +2167,22 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 }
 
+private void writeSnapshotManifest(final JSONArray filesJSONArr, final 
String snapshotName)
+{
+final File manifestFile = 
directories.getSnapshotManifestFile(snapshotName);
+final JSONObject manifestJSON = new JSONObject();
+manifestJSON.put("files", filesJSONArr);
+
+try
+{
+org.apache.commons.io.FileUtils.writeStringToFile(manifestFile, 
manifestJSON.toJSONString());
+}
+catch (IOException e)
+{
+throw new FSWriteError(e, manifestFile);
+}
+}
+
 public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
 {
 Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7bf5e9/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java 
b/src/java/org/apache/cassandra/db/Directories.java
index 1350be2..a146855 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -358,6 +358,11 @@ public class Directories
 return getOrCreate(desc.directory, SNAPSHOT_SUBDIR, snapshotName);
  

[jira] [Commented] (CASSANDRA-6563) TTL histogram compactions not triggered at high Estimated droppable tombstones rate

2014-05-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993480#comment-13993480
 ] 

Marcus Eriksson commented on CASSANDRA-6563:


Have to say I don't really feel comfortable dropping these checks; we could 
start doing a lot of extra unnecessary IO.

A better solution would be to get a more accurate estimate of how much the 
sstables overlap (CASSANDRA-6474).

What we could do now (in 2.0) is perhaps loosen the checks a bit; for example, we 
should probably only check for overlap in sstables which contain data that is 
older than the data in the one we want to compact, since those are the ones that 
can block dropping the tombstones.

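For illustration only, a rough sketch of the narrowed check suggested above (not from any attached patch; the SSTableInfo class and timestamps below are stand-ins for the per-sstable metadata Cassandra tracks):

{code}
import java.util.ArrayList;
import java.util.List;

public class TombstoneOverlapSketch
{
    static class SSTableInfo
    {
        final String name;
        final long minTimestamp; // oldest cell timestamp in the sstable
        final long maxTimestamp; // newest cell timestamp in the sstable

        SSTableInfo(String name, long minTimestamp, long maxTimestamp)
        {
            this.name = name;
            this.minTimestamp = minTimestamp;
            this.maxTimestamp = maxTimestamp;
        }
    }

    // An overlapping sstable can only keep the candidate's tombstones alive if it
    // holds data older than something in the candidate, so only those need checking.
    static List<SSTableInfo> possibleBlockers(SSTableInfo candidate, List<SSTableInfo> overlapping)
    {
        List<SSTableInfo> result = new ArrayList<>();
        for (SSTableInfo other : overlapping)
            if (other.minTimestamp < candidate.maxTimestamp)
                result.add(other);
        return result;
    }

    public static void main(String[] args)
    {
        SSTableInfo candidate = new SSTableInfo("ks-Cf-ic-342375", 2000, 3000);
        List<SSTableInfo> overlapping = new ArrayList<>();
        overlapping.add(new SSTableInfo("ks-Cf-ic-295562", 500, 1500));  // older data: must check
        overlapping.add(new SSTableInfo("ks-Cf-ic-408926", 3500, 4000)); // strictly newer: can skip
        System.out.println(possibleBlockers(candidate, overlapping).size() + " sstable(s) to check"); // 1 sstable(s) to check
    }
}
{code}
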
 TTL histogram compactions not triggered at high Estimated droppable 
 tombstones rate
 -

 Key: CASSANDRA-6563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6563
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.12ish
Reporter: Chris Burroughs
Assignee: Paulo Ricardo Motta Gomes
 Fix For: 1.2.17, 2.0.8

 Attachments: 1.2.16-CASSANDRA-6563.txt, 2.0.7-CASSANDRA-6563.txt, 
 patched-droppadble-ratio.png, patched-storage-load.png, 
 patched1-compacted-bytes.png, patched2-compacted-bytes.png, 
 unpatched-droppable-ratio.png, unpatched-storage-load.png, 
 unpatched1-compacted-bytes.png, unpatched2-compacted-bytes.png


 I have several column families in a largish cluster where virtually all 
 columns are written with a (usually the same) TTL.  My understanding of 
 CASSANDRA-3442 is that sstables that have a high (> 20%) estimated 
 percentage of droppable tombstones should be individually compacted.  This 
 does not appear to be occurring with size tiered compaction.
 Example from one node:
 {noformat}
 $ ll /data/sstables/data/ks/Cf/*Data.db
 -rw-rw-r-- 31 cassandra cassandra 26651211757 Nov 26 22:59 
 /data/sstables/data/ks/Cf/ks-Cf-ic-295562-Data.db
 -rw-rw-r-- 31 cassandra cassandra  6272641818 Nov 27 02:51 
 /data/sstables/data/ks/Cf/ks-Cf-ic-296121-Data.db
 -rw-rw-r-- 31 cassandra cassandra  1814691996 Dec  4 21:50 
 /data/sstables/data/ks/Cf/ks-Cf-ic-320449-Data.db
 -rw-rw-r-- 30 cassandra cassandra 10909061157 Dec 11 17:31 
 /data/sstables/data/ks/Cf/ks-Cf-ic-340318-Data.db
 -rw-rw-r-- 29 cassandra cassandra   459508942 Dec 12 10:37 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342259-Data.db
 -rw-rw-r--  1 cassandra cassandra  336908 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342307-Data.db
 -rw-rw-r--  1 cassandra cassandra 2063935 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342309-Data.db
 -rw-rw-r--  1 cassandra cassandra 409 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342314-Data.db
 -rw-rw-r--  1 cassandra cassandra31180007 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342319-Data.db
 -rw-rw-r--  1 cassandra cassandra 2398345 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342322-Data.db
 -rw-rw-r--  1 cassandra cassandra   21095 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342331-Data.db
 -rw-rw-r--  1 cassandra cassandra   81454 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342335-Data.db
 -rw-rw-r--  1 cassandra cassandra 1063718 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342339-Data.db
 -rw-rw-r--  1 cassandra cassandra  127004 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342344-Data.db
 -rw-rw-r--  1 cassandra cassandra  146785 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342346-Data.db
 -rw-rw-r--  1 cassandra cassandra  697338 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342351-Data.db
 -rw-rw-r--  1 cassandra cassandra 3921428 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342367-Data.db
 -rw-rw-r--  1 cassandra cassandra  240332 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342370-Data.db
 -rw-rw-r--  1 cassandra cassandra   45669 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342374-Data.db
 -rw-rw-r--  1 cassandra cassandra53127549 Dec 12 12:03 
 /data/sstables/data/ks/Cf/ks-Cf-ic-342375-Data.db
 -rw-rw-r-- 16 cassandra cassandra 12466853166 Dec 25 22:40 
 /data/sstables/data/ks/Cf/ks-Cf-ic-396473-Data.db
 -rw-rw-r-- 12 cassandra cassandra  3903237198 Dec 29 19:42 
 /data/sstables/data/ks/Cf/ks-Cf-ic-408926-Data.db
 -rw-rw-r--  7 cassandra cassandra  3692260987 Jan  3 08:25 
 /data/sstables/data/ks/Cf/ks-Cf-ic-427733-Data.db
 -rw-rw-r--  4 cassandra cassandra  3971403602 Jan  6 20:50 
 /data/sstables/data/ks/Cf/ks-Cf-ic-437537-Data.db
 -rw-rw-r--  3 cassandra cassandra  1007832224 Jan  7 15:19 
 /data/sstables/data/ks/Cf/ks-Cf-ic-440331-Data.db
 -rw-rw-r--  2 cassandra cassandra   896132537 Jan  8 11:05 
 /data/sstables/data/ks/Cf/ks-Cf-ic-447740-Data.db
 -rw-rw-r--  1 cassandra 

git commit: make sure manifest's parent dirs exist before trying to write the file.

2014-05-15 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e241319b2 -> d267cf88c


make sure manifest's parent dirs exist before trying to write the file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d267cf88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d267cf88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d267cf88

Branch: refs/heads/cassandra-2.1
Commit: d267cf88c870a05efc9109a53b51b8628b4dfe48
Parents: e241319
Author: Jason Brown jasobr...@apple.com
Authored: Wed May 7 16:34:29 2014 -0700
Committer: Jason Brown jasobr...@apple.com
Committed: Wed May 7 16:34:29 2014 -0700

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d267cf88/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 33b7303..417a5b4 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2171,9 +2171,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final JSONObject manifestJSON = new JSONObject();
 manifestJSON.put("files", filesJSONArr);
 
-
 try
 {
+if (!manifestFile.getParentFile().exists())
+manifestFile.getParentFile().mkdirs();
 PrintStream out = new PrintStream(manifestFile);
 out.println(manifestJSON.toJSONString());
 out.close();

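As a side note, a minimal sketch of reading the manifest written above back with the same json-simple library (illustration only; the class name and the path argument are made up):

{code}
import java.io.File;
import java.io.FileReader;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

public class SnapshotManifestReader
{
    public static void main(String[] args) throws Exception
    {
        File manifestFile = new File(args[0]); // path to a snapshot's manifest file
        JSONObject manifest = (JSONObject) new JSONParser().parse(new FileReader(manifestFile));
        JSONArray files = (JSONArray) manifest.get("files");
        for (Object dataFile : files)
            System.out.println(dataFile); // e.g. ks-cf-ka-1-Data.db
    }
}
{code}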


[jira] [Commented] (CASSANDRA-7216) Restricted superuser account request

2014-05-15 Thread Oded Peer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997366#comment-13997366
 ] 

Oded Peer commented on CASSANDRA-7216:
--

I can have just a single superuser; however, as tightly as I control this user, 
it still poses a security threat.
This has implications in security audits, including external audits done by 
customers and partners.

I got to know the permissions better in Cassandra and it appears that in 
addition to creating keyspaces and users the restricted superuser account also 
needs to GRANT permissions to the newly-created user to access and modify the 
newly-created keyspace. If the restricted superuser account has GRANT 
permissions to any keyspace it still poses a security threat, since it can create 
users with permissions to any arbitrary keyspace.

What we are trying to find is an analogy of the postgres security model in 
Cassandra. In postgres objects have a single 'owner'. For most kinds of 
objects, the initial state is that only the owner can do anything with the 
object. [http://www.postgresql.org/docs/9.0/static/privileges.html].
Thus, in postgres, we have a restricted admin user used in the tenant 
provisioning process that can only create users. These newly-created users 
create database objects as their 'owner' and only the user creating the objects 
can use them. 

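To make the gap concrete, here is a hypothetical provisioning sequence (names and driver usage are illustrative only, not from this ticket); step 3 is what currently forces the provisioning account to hold broad GRANT rights:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TenantProvisioningSketch
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder()
                                 .addContactPoint("127.0.0.1")
                                 .withCredentials("provisioner", "secret") // the restricted admin account
                                 .build();
        try
        {
            Session session = cluster.connect();
            // 1. create the tenant's keyspace
            session.execute("CREATE KEYSPACE tenant1 WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
            // 2. create the tenant's (non-super) user
            session.execute("CREATE USER tenant1_user WITH PASSWORD 'tenant1_pw' NOSUPERUSER");
            // 3. grant the tenant access to its keyspace - the step that requires the
            //    provisioner to hold GRANT rights, which is the concern described above
            session.execute("GRANT ALL PERMISSIONS ON KEYSPACE tenant1 TO tenant1_user");
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}
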
 Restricted superuser account request
 

 Key: CASSANDRA-7216
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7216
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oded Peer
Priority: Minor

 I am developing a multi-tenant service.
 Every tenant has its own user, keyspace and can access only his keyspace.
 As new tenants are provisioned there is a need to create new users and 
 keyspaces.
 Only a superuser can issue CREATE USER requests, so we must have a super user 
 account in the system. On the other hand super users have access to all the 
 keyspaces, which poses a security risk.
 For tenant provisioning I would like to have a restricted account which can 
 only create new users, without read access to keyspaces.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7185) cqlsh can't tab-complete disabling compaction

2014-05-15 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992996#comment-13992996
 ] 

Mikhail Stepura edited comment on CASSANDRA-7185 at 5/8/14 6:41 PM:


Hm, I don't see that option in 
http://cassandra.apache.org/doc/cql3/CQL.html#compactionOptions

http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/compactSubprop.html

bq. To disable background compactions, use nodetool 
disableautocompaction/enableautocompaction instead of setting min/max 
compaction thresholds to 0.


was (Author: mishail):
Hm, I don't see that option in 
http://cassandra.apache.org/doc/cql3/CQL.html#compactionOptions

 cqlsh can't tab-complete disabling compaction
 -

 Key: CASSANDRA-7185
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7185
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Mikhail Stepura
Priority: Trivial
 Fix For: 2.0.9, 2.1 rc1

 Attachments: CASSANDRA-2.0-7185.patch


 cqlsh can't tab-complete the following case where you want to disable 
 compaction:
 {noformat}
 alter table keys with compaction = {'class': 'SizeTieredCompactionStrategy', 
 'enabled': 'false'}
 {noformat}
 Specifically it doesn't know 'enabled' is a valid option.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7216) Restricted superuser account request

2014-05-15 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-7216:


Attachment: 7216.txt

against trunk

added, 

create user 'foo' with password 'bar' as useradmin;

which allows that user to create other users, but not superuser type stuffs.

 Restricted superuser account request
 

 Key: CASSANDRA-7216
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7216
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oded Peer
Priority: Minor
 Attachments: 7216.txt


 I am developing a multi-tenant service.
 Every tenant has its own user, keyspace and can access only his keyspace.
 As new tenants are provisioned there is a need to create new users and 
 keyspaces.
 Only a superuser can issue CREATE USER requests, so we must have a super user 
 account in the system. On the other hand super users have access to all the 
 keyspaces, which poses a security risk.
 For tenant provisioning I would like to have a restricted account which can 
 only create new users, without read access to keyspaces.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7240) Altering Keyspace Replication On Large Cluster With vnodes Leads to Warns on All nodes

2014-05-15 Thread Russell Alexander Spitzer (JIRA)
Russell Alexander Spitzer created CASSANDRA-7240:


 Summary: Altering Keyspace Replication On Large Cluster With 
vnodes Leads to Warns on All nodes
 Key: CASSANDRA-7240
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7240
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1000 Nodes M1.large ubuntu 12.04
Reporter: Russell Alexander Spitzer


1000 node cluster started with vnodes (256) on. 25 separate nodes began an all-write 
workload against the first 1000 nodes. During the test I attempted to 
alter the keyspace from simple strategy to a network topology strategy.

{code}
cqlsh> ALTER KEYSPACE Keyspace1 WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': '3', 'DC2':'3'}  AND durable_writes = true;   
 
errors={}, last_host=127.0.0.1
cqlsh> ALTER KEYSPACE Keyspace1 WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': '3', 'DC2':'3'}  AND durable_writes = true;
('Unable to complete the operation against any hosts', {<Host: 127.0.0.1 DC1>: 
ConnectionShutdown('Connection to 127.0.0.1 is defunct',)})
{code}

All one thousand nodes then began to repeat the following in their respective 
logs
{code}
WARN  [Thread-50131] 2014-05-14 23:34:07,631 IncomingTcpConnection.java:91 - 
UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find 
cfId=46b7b090-dbaf-11e3-8413-fffd4403e7d2
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:318)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:298)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:326)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:268)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) 
~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:165)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.net.IncomingTcpConnection.handleModernVersion(IncomingTcpConnection.java:147)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82)
 ~[apache-cassandra-2.1.0-beta2.jar:2.1.0-beta2]
{code}

Stress continued but at a decreased speed
{code}
Excerpt from one of the 25 Stress Nodes
83222847  ,   14602,   14602, 6.7, 2.1,23.1,   132.1,   292.3,   
531.3, 5216.5,  0.00188
83239512  ,   13888,   13888, 7.3, 2.1,31.3,   129.9,   267.9,   
555.8, 5217.7,  0.00188
83258520  ,   14301,   14301, 7.0, 2.1,28.8,   125.4,   297.2,   
758.1, 5219.0,  0.00188
83277750  ,   14023,   14023, 7.1, 2.1,28.4,   132.8,   292.3,   
703.6, 5220.4,  0.00188
83301413  ,   14410,   14410, 6.9, 2.1,24.5,   124.8,   391.4,  
1010.1, 5222.0,  0.00188
83316846  ,   12313,   12313, 8.1, 2.1,35.1,   168.2,   275.3,   
467.9, 5223.3,  0.00188
83332883  ,   13753,   13753, 6.9, 2.1,28.1,   132.2,   276.1,   
498.9, 5224.4,  0.00188
#ALTER REQUEST HERE
83351413  ,9981,9981, 9.9, 2.1,46.7,   172.0,   447.8,  
1327.9, 5226.3,  0.00188
83358381  ,4464,4464,22.7, 2.2,   125.9,   257.8,   594.6,  
1650.6, 5227.8,  0.00188
83363153  ,3186,3186,31.7, 2.5,   153.0,   300.3,   477.0,   
566.1, 5229.3,  0.00189
83367341  ,2967,2967,33.7, 2.4,   173.9,   311.5,   465.8,   
761.9, 5230.7,  0.00190
83370738  ,2392,2392,41.4, 2.9,   208.0,   308.1,   434.8,   
839.6, 5232.2,  0.00191
83373651  ,2283,2283,43.0, 2.5,   213.9,   310.5,   409.3,   
503.3, 5233.4,  0.00192
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5818) Duplicated error messages on directory creation error at startup

2014-05-15 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5818:
--

Attachment: cassandra-2.0-5818_v2.diff

Patch for cassandra-2.0 on 453a07430c3ebce938047f9d5d0339ff90c6bfcc


 Duplicated error messages on directory creation error at startup
 

 Key: CASSANDRA-5818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5818
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Lyuben Todorov
Priority: Trivial
 Fix For: 2.0.8, 2.1.0

 Attachments: 5818_v2.patch, cassandra-2.0-5818_v2.diff, patch.diff, 
 trunk-5818.patch


 When I start Cassandra without the appropriate OS access rights to the 
 default Cassandra directories, I get a flood of {{ERROR}} messages at 
 startup, whereas one per directory would be more appropriate. See below:
 {code}
 ERROR 13:37:39,792 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,797 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,798 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,799 Failed to create 
 /var/lib/cassandra/data/system/schema_triggers directory
 ERROR 13:37:39,800 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,801 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create /var/lib/cassandra/data/system/batchlog 
 directory
 ERROR 13:37:39,802 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,803 Failed to create 
 /var/lib/cassandra/data/system/peer_events directory
 ERROR 13:37:39,804 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,805 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,806 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,807 Failed to create 
 /var/lib/cassandra/data/system/compactions_in_progress directory
 ERROR 13:37:39,808 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,809 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,811 Failed to create /var/lib/cassandra/data/system/hints 
 directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,812 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,813 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,814 Failed to create 
 /var/lib/cassandra/data/system/schema_keyspaces directory
 ERROR 13:37:39,815 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,816 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/range_xfers directory
 ERROR 13:37:39,817 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,818 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,820 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,821 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,822 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,823 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 /var/lib/cassandra/data/system/schema_columnfamilies directory
 ERROR 13:37:39,824 Failed to create 
 

[jira] [Created] (CASSANDRA-7193) rows_per_partition_to_cache is not reflected in table DESCRIBE

2014-05-15 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-7193:
---

 Summary: rows_per_partition_to_cache is not reflected in table 
DESCRIBE
 Key: CASSANDRA-7193
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7193
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Marcus Eriksson
Priority: Minor


I can create a table with the new query cache from CASSANDRA-5357:

{code}
CREATE TABLE status (user text, status_id timeuuid, status text, PRIMARY KEY 
(user, status_id)) WITH caching = '{keys:ALL, rows_per_partition:10}';
{code}

However, that is not the syntax mentioned in that ticket. It says to use a 
rows_per_partition_to_cache setting instead, which does appear to work in cql:

{code}
CREATE TABLE status2 (user text, status_id timeuuid, status text, PRIMARY KEY 
(user, status_id)) WITH rows_per_partition_to_cache = 200;
{code}

But that setting is not reflected in the table description:

{code}
cqlsh:test DESCRIBE TABLE status2;

CREATE TABLE test.status2 (
user text,
status_id timeuuid,
status text,
PRIMARY KEY (user, status_id)
) WITH CLUSTERING ORDER BY (status_id ASC)
AND caching = '{keys:ALL, rows_per_partition:NONE}'
...
{code}

Similarly, ALTER TABLE with that syntax does not produce an error, but also 
does not seem to affect the setting:

{code}
ALTER TABLE test.status WITH rows_per_partition_to_cache = 200;
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-05-15 Thread dbrosius
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fb0a78a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fb0a78a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fb0a78a2

Branch: refs/heads/trunk
Commit: fb0a78a23ac09019944f267049501c697bfa1539
Parents: c4fcb16 d839350
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 21:19:00 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 21:19:00 2014 -0400

--
 CHANGES.txt| 1 +
 .../cassandra/service/PendingRangeCalculatorService.java   | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fb0a78a2/CHANGES.txt
--
diff --cc CHANGES.txt
index 8df7d95,312cf06..6c8f1fb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -31,66 -15,16 +31,67 @@@ Merged from 1.2
   * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
   * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
   * Ensure that batchlog and hint timeouts do not produce hints 
(CASSANDRA-7058)
 - * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * Always clean up references in SerializingCache (CASSANDRA-6994)
 + * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * fix npe when doing -Dcassandra.fd_initial_value_ms (CASSANDRA-6751)
   * Preserves CQL metadata when updating table from thrift (CASSANDRA-6831)
 - * fix time conversion to milliseconds in SimpleCondition.await 
(CASSANDRA-7149)
   * remove duplicate query for local tokens (CASSANDRA-7182)
   * raise streaming phi convict threshold level (CASSANDRA-7063)
+  * reduce garbage creation in calculatePendingRanges (CASSANDRA-7191)
  
 -1.2.16
 +2.0.7
 + * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
 + * Avoid early loading of non-system keyspaces before compaction-leftovers 
 +   cleanup at startup (CASSANDRA-6913)
 + * Restrict Windows to parallel repairs (CASSANDRA-6907)
 + * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
 + * Fix NPE in MeteredFlusher (CASSANDRA-6820)
 + * Fix race processing range scan responses (CASSANDRA-6820)
 + * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
 + * Add uuid() function (CASSANDRA-6473)
 + * Omit tombstones from schema digests (CASSANDRA-6862)
 + * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)
 + * Lower chances for losing new SSTables during nodetool refresh and
 +   ColumnFamilyStore.loadNewSSTables (CASSANDRA-6514)
 + * Add support for DELETE ... IF EXISTS to CQL3 (CASSANDRA-5708)
 + * Update hadoop_cql3_word_count example (CASSANDRA-6793)
 + * Fix handling of RejectedExecution in sync Thrift server (CASSANDRA-6788)
 + * Log more information when exceeding tombstone_warn_threshold 
(CASSANDRA-6865)
 + * Fix truncate to not abort due to unreachable fat clients (CASSANDRA-6864)
 + * Fix schema concurrency exceptions (CASSANDRA-6841)
 + * Fix leaking validator FH in StreamWriter (CASSANDRA-6832)
 + * Fix saving triggers to schema (CASSANDRA-6789)
 + * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 + * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 + * Fix static counter columns (CASSANDRA-6827)
 + * Restore expiring-deleted (cell) compaction optimization (CASSANDRA-6844)
 + * Fix CompactionManager.needsCleanup (CASSANDRA-6845)
 + * Correctly compare BooleanType values other than 0 and 1 (CASSANDRA-6779)
 + * Read message id as string from earlier versions (CASSANDRA-6840)
 + * Properly use the Paxos consistency for (non-protocol) batch 
(CASSANDRA-6837)
 + * Add paranoid disk failure option (CASSANDRA-6646)
 + * Improve PerRowSecondaryIndex performance (CASSANDRA-6876)
 + * Extend triggers to support CAS updates (CASSANDRA-6882)
 + * Static columns with IF NOT EXISTS don't always work as expected 
(CASSANDRA-6873)
 + * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 + * Fix UnsupportedOperationException on CAS timeout (CASSANDRA-6923)
 + * Improve MeteredFlusher handling of MF-unaffected column families
 +   (CASSANDRA-6867)
 + * Add CqlRecordReader using native pagination (CASSANDRA-6311)
 + * Add QueryHandler interface (CASSANDRA-6659)
 + * Track liveRatio per-memtable, not per-CF (CASSANDRA-6945)
 + * Make sure upgradesstables keeps sstable level (CASSANDRA-6958)
 + * Fix LIMIT with static columns (CASSANDRA-6956)
 + * Fix clash with CQL column name in thrift validation (CASSANDRA-6892)
 + * Fix error with super columns in mixed 1.2-2.0 clusters (CASSANDRA-6966)
 + * Fix bad skip 

[4/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-05-15 Thread dbrosius
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4a3b520
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4a3b520
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4a3b520

Branch: refs/heads/cassandra-2.1
Commit: b4a3b52076e221f3fa7c65a70c7c4ddec439689c
Parents: 8d4dc6d 0132e54
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed May 7 01:37:48 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed May 7 01:37:48 2014 -0400

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/service/StorageService.java | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4a3b520/CHANGES.txt
--
diff --cc CHANGES.txt
index d65a694,d7b7f00..517f0ab
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -26,69 -15,15 +26,70 @@@ Merged from 1.2
   * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
   * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
   * Ensure that batchlog and hint timeouts do not produce hints 
(CASSANDRA-7058)
 - * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * Always clean up references in SerializingCache (CASSANDRA-6994)
 + * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
   * fix npe when doing -Dcassandra.fd_initial_value_ms (CASSANDRA-6751)
   * Preserves CQL metadata when updating table from thrift (CASSANDRA-6831)
 - * fix time conversion to milliseconds in SimpleCondition.await 
(CASSANDRA-7149)
+  * remove duplicate query for local tokens (CASSANDRA-7182)
  
  
 -1.2.16
 +2.0.7
 + * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
 + * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
 + * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
 + * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 + * Schedule schema pulls on change (CASSANDRA-6971)
 + * Avoid early loading of non-system keyspaces before compaction-leftovers 
 +   cleanup at startup (CASSANDRA-6913)
 + * Restrict Windows to parallel repairs (CASSANDRA-6907)
 + * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
 + * Fix NPE in MeteredFlusher (CASSANDRA-6820)
 + * Fix race processing range scan responses (CASSANDRA-6820)
 + * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
 + * Add uuid() function (CASSANDRA-6473)
 + * Omit tombstones from schema digests (CASSANDRA-6862)
 + * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)
 + * Lower chances for losing new SSTables during nodetool refresh and
 +   ColumnFamilyStore.loadNewSSTables (CASSANDRA-6514)
 + * Add support for DELETE ... IF EXISTS to CQL3 (CASSANDRA-5708)
 + * Update hadoop_cql3_word_count example (CASSANDRA-6793)
 + * Fix handling of RejectedExecution in sync Thrift server (CASSANDRA-6788)
 + * Log more information when exceeding tombstone_warn_threshold 
(CASSANDRA-6865)
 + * Fix truncate to not abort due to unreachable fat clients (CASSANDRA-6864)
 + * Fix schema concurrency exceptions (CASSANDRA-6841)
 + * Fix leaking validator FH in StreamWriter (CASSANDRA-6832)
 + * Fix saving triggers to schema (CASSANDRA-6789)
 + * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 + * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 + * Fix static counter columns (CASSANDRA-6827)
 + * Restore expiring-deleted (cell) compaction optimization (CASSANDRA-6844)
 + * Fix CompactionManager.needsCleanup (CASSANDRA-6845)
 + * Correctly compare BooleanType values other than 0 and 1 (CASSANDRA-6779)
 + * Read message id as string from earlier versions (CASSANDRA-6840)
 + * Properly use the Paxos consistency for (non-protocol) batch 
(CASSANDRA-6837)
 + * Add paranoid disk failure option (CASSANDRA-6646)
 + * Improve PerRowSecondaryIndex performance (CASSANDRA-6876)
 + * Extend triggers to support CAS updates (CASSANDRA-6882)
 + * Static columns with IF NOT EXISTS don't always work as expected 
(CASSANDRA-6873)
 + * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 + * Fix UnsupportedOperationException on CAS timeout (CASSANDRA-6923)
 + * Improve MeteredFlusher handling of MF-unaffected column families
 +   (CASSANDRA-6867)
 + * Add CqlRecordReader using native pagination (CASSANDRA-6311)
 + * Add QueryHandler interface (CASSANDRA-6659)
 + * Track liveRatio per-memtable, not per-CF (CASSANDRA-6945)
 + * Make sure upgradesstables keeps sstable level (CASSANDRA-6958)
 + * Fix LIMIT with static columns (CASSANDRA-6956)
 + * Fix clash with CQL column name in 

[jira] [Comment Edited] (CASSANDRA-6962) examine shortening path length post-5202

2014-05-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997782#comment-13997782
 ] 

Joshua McKenzie edited comment on CASSANDRA-6962 at 5/14/14 5:10 PM:
-

CASSANDRA-4110 and the limitations in Schema.java provide us some protection 
but there's really nothing to stop users nesting their cassandra data 250 
characters deep in a path and having things blow up on them regardless of what 
length we limit ourselves to.

On snapshots we'll be using 204 chars worst-case (48 KS, 48 CF, *2 each, 9 for 
snapshots, 3 for \) so that doesn't leave us a lot of breathing room on path 
for data_file_directories.  Maybe lowering the NAME_LENGTH in Schema.java would 
be appropriate given CASSANDRA-7136?  Do we have a lot of users rolling out 40+ 
char KS and CF names in general, much less on Windows?


was (Author: joshuamckenzie):
CASSANDRA-4110 and the limitations in Schema.java provide us some protection 
but there's really nothing to stop users nesting their cassandra data 250 
characters deep in a path and having things blow up on them regardless of what 
length we limit ourselves to.

On snapshots we'll be using 204 chars worst-case (48 KS, 48 CF, *2 each, +9 for 
snapshots, +3 for \) so that doesn't leave us a lot of breathing room on path 
for data_file_directories.  Maybe lowering the NAME_LENGTH in Schema.java would 
be appropriate given CASSANDRA-7136?  Do we have a lot of users rolling out 40+ 
char KS and CF names in general, much less on Windows?

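As a rough check of the 204-character figure above (the breakdown follows the parenthetical in the comment; the 48-character limits are the ones quoted there, everything else is illustrative):

{code}
public class SnapshotPathLengthSketch
{
    public static void main(String[] args)
    {
        int keyspace = 48; // max keyspace name length quoted in the comment
        int table    = 48; // max table (CF) name length quoted in the comment
        int worstCase = (keyspace + table) * 2 // names appear twice (directory plus file name)
                      + 9                      // the "snapshots" path component
                      + 3;                     // separators
        System.out.println(worstCase);         // 204
    }
}
{code}
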
 examine shortening path length post-5202
 

 Key: CASSANDRA-6962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6962
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 2.1 rc1

 Attachments: 6962-2.1.txt


 From CASSANDRA-5202 discussion:
 {quote}
 Did we give up on this?
 Could we clean up the redundancy a little by moving the ID into the directory 
 name? e.g., ks/cf-uuid/version-generation-component.db
 I'm worried about path length, which is limited on Windows.
 Edit: to give a specific example, for KS foo Table bar we now have
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/foo-bar-ka-1-Data.db
 I'm proposing
 /var/lib/cassandra/flush/foo/bar-2fbb89709a6911e3b7dc4d7d4e3ca4b4/ka-1-Data.db
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: Followup to 6916, don't try to snapshot readers that are opened early.

2014-05-15 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 15e0814c5 -> 33939cae6


Followup to 6916, don't try to snapshot readers that are opened early.

Patch by benedict; reviewed by marcuse for CASSANDRA-6916.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33bc1e8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33bc1e8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33bc1e8f

Branch: refs/heads/trunk
Commit: 33bc1e8f8e44fa61f87d47add52f5bda3456f62c
Parents: 2a77695
Author: Marcus Eriksson marc...@apache.org
Authored: Thu May 15 08:13:19 2014 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Thu May 15 08:13:19 2014 +0200

--
 .../apache/cassandra/db/ColumnFamilyStore.java  |  2 +-
 .../cassandra/io/sstable/SSTableReader.java | 26 +---
 .../cassandra/io/sstable/SSTableWriter.java |  5 ++--
 3 files changed, 21 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33bc1e8f/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 417a5b4..3786ef5 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2144,7 +2144,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 for (SSTableReader ssTable : currentView.sstables)
 {
-if (predicate != null && !predicate.apply(ssTable))
+if (ssTable.isOpenEarly || (predicate != null && !predicate.apply(ssTable)))
 {
 continue;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33bc1e8f/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 98fe5b6..53f7e53 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -159,6 +159,7 @@ public class SSTableReader extends SSTable
  * The age is in milliseconds since epoc and is local to this host.
  */
 public final long maxDataAge;
+public final boolean isOpenEarly;
 
 // indexfile and datafile: might be null before a call to load()
 private SegmentedFile ifile;
@@ -336,7 +337,8 @@ public class SSTableReader extends SSTable
   metadata,
   partitioner,
   System.currentTimeMillis(),
-  statsMetadata);
+  statsMetadata,
+  false);
 
 // special implementation of load to use non-pooled SegmentedFile 
builders
 SegmentedFile.Builder ibuilder = new BufferedSegmentedFile.Builder();
@@ -384,7 +386,8 @@ public class SSTableReader extends SSTable
   metadata,
   partitioner,
   System.currentTimeMillis(),
-  statsMetadata);
+  statsMetadata,
+  false);
 
 // load index and filter
 long start = System.nanoTime();
@@ -463,7 +466,8 @@ public class SSTableReader extends SSTable
   IndexSummary isummary,
   IFilter bf,
   long maxDataAge,
-  StatsMetadata sstableMetadata)
+  StatsMetadata sstableMetadata,
+  boolean isOpenEarly)
 {
 assert desc != null && partitioner != null && ifile != null && dfile != null && isummary != null && bf != null && sstableMetadata != null;
 return new SSTableReader(desc,
@@ -474,7 +478,8 @@ public class SSTableReader extends SSTable
  isummary,
  bf,
  maxDataAge,
- sstableMetadata);
+ sstableMetadata,
+ isOpenEarly);
 }
 

[jira] [Updated] (CASSANDRA-6974) Replaying archived commitlogs isn't working

2014-05-15 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6974:


Attachment: 6974.txt

In reducing the overhead size I broke the test which had the overhead size 
hardcoded. I've exposed the size from CLS and use it in the test, so that we 
shouldn't have this problem in future.

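For illustration, the general pattern being described, with hypothetical names (this is not the attached patch): expose the overhead as a constant on the commit log segment class and have the test reference it instead of a hardcoded literal.

{code}
public class CommitLogSegmentSketch
{
    // Per-entry overhead; the field breakdown here is illustrative only.
    public static final int ENTRY_OVERHEAD_SIZE = 4 + 4 + 4;
}

class CommitLogTestSketch
{
    long expectedWrittenSize(long serializedMutationSize)
    {
        // the test derives its expectation from the exposed constant, so changing
        // the overhead can no longer silently break a hardcoded assertion
        return serializedMutationSize + CommitLogSegmentSketch.ENTRY_OVERHEAD_SIZE;
    }
}
{code}
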
 Replaying archived commitlogs isn't working
 ---

 Key: CASSANDRA-6974
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6974
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Benedict
  Labels: qa-resolved
 Fix For: 2.1 rc1

 Attachments: 2.0.system.log, 2.1.system.log, 6974.txt


 I have a test for restoring archived commitlogs, which is not working in 2.1 
 HEAD.  My commitlogs consist of 30,000 inserts, but system.log indicates 
 there were only 2 mutations replayed:
 {code}
 INFO  [main] 2014-04-02 11:49:54,173 CommitLog.java:115 - Log replay 
 complete, 2 replayed mutations
 {code}
 There are several warnings in the logs about bad headers and invalid CRCs: 
 {code}
 WARN  [main] 2014-04-02 11:49:54,156 CommitLogReplayer.java:138 - Encountered 
 bad header at position 0 of commit log /tmp/dtest
 -mZIlPE/test/node1/commitlogs/CommitLog-4-1396453793570.log, with invalid 
 CRC. The end of segment marker should be zero.
 {code}
 compare that to the same test run on 2.0, where it replayed many more 
 mutations:
 {code}
  INFO [main] 2014-04-02 11:49:04,673 CommitLog.java (line 132) Log replay 
 complete, 35960 replayed mutations
 {code}
 I'll attach the system logs for reference.
 [Here is the dtest to reproduce 
 this|https://github.com/riptano/cassandra-dtest/blob/master/snapshot_test.py#L75]
  - (This currently relies on the fix for snapshots available in 
 CASSANDRA-6965.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4718) More-efficient ExecutorService for improved throughput

2014-05-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998310#comment-13998310
 ] 

Jason Brown commented on CASSANDRA-4718:


[~enigmacurry] How many threads are you running thrift with? If you aren't 
setting it explicitly, (iirc) it gets set to the number of processors, which is 
far below what anything sane should run with. For our machines, I've been using 
512 for writes, and 128 for reads (mirroring what we run with in prod, which is 
same hardware as the machines I'm testing on, more or less). I think this may 
explain why we do not see the vast discrepancy between thrift and native 
protocol ops/second - the native protocol defaults to 128 threads.

Also, are you using sync or hsha for thrift? 



 More-efficient ExecutorService for improved throughput
 --

 Key: CASSANDRA-4718
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4718
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Benedict
Priority: Minor
  Labels: performance
 Fix For: 2.1.0

 Attachments: 4718-v1.patch, PerThreadQueue.java, aws.svg, 
 aws_read.svg, backpressure-stress.out.txt, baq vs trunk.png, 
 belliotsmith_branches-stress.out.txt, jason_read.svg, jason_read_latency.svg, 
 jason_write.svg, op costs of various queues.ods, stress op rate with various 
 queues.ods, v1-stress.out


 Currently all our execution stages dequeue tasks one at a time.  This can 
 result in contention between producers and consumers (although we do our best 
 to minimize this by using LinkedBlockingQueue).
 One approach to mitigating this would be to make consumer threads do more 
 work in bulk instead of just one task per dequeue.  (Producer threads tend 
 to be single-task oriented by nature, so I don't see an equivalent 
 opportunity there.)
 BlockingQueue has a drainTo(collection, int) method that would be perfect for 
 this.  However, no ExecutorService in the jdk supports using drainTo, nor 
 could I google one.
 What I would like to do here is create just such a beast and wire it into (at 
 least) the write and read stages.  (Other possible candidates for such an 
 optimization, such as the CommitLog and OutboundTCPConnection, are not 
 ExecutorService-based and will need to be one-offs.)
 AbstractExecutorService may be useful.  The implementations of 
 ICommitLogExecutorService may also be useful. (Despite the name these are not 
 actual ExecutorServices, although they share the most important properties of 
 one.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: Fix marking commitlog segments clean patch by bes; reviewed by jbellis for CASSANDRA-6959

2014-05-15 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1753f3749 -> 6e7934280


Fix marking commitlog segments clean
patch by bes; reviewed by jbellis for CASSANDRA-6959


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7da56205
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7da56205
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7da56205

Branch: refs/heads/trunk
Commit: 7da562053fe729adb41061e52bfda17837f77d62
Parents: af80201
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu May 8 10:51:36 2014 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu May 8 10:51:59 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7da56205/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 714a475..5afe800 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc1
+ * Fix marking commitlogsegments clean (CASSANDRA-6959)
  * Add snapshot manifest describing files included (CASSANDRA-6326)
  * Parallel streaming for sstableloader (CASSANDRA-3668)
  * Fix bugs in supercolumns handling (CASSANDRA-7138)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7da56205/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
index e5c9b3e..3830966 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
@@ -469,7 +469,7 @@ public class CommitLogSegment
 UUID cfId = clean.getKey();
 AtomicInteger cleanPos = clean.getValue();
 AtomicInteger dirtyPos = cfDirty.get(cfId);
-if (dirtyPos != null && dirtyPos.intValue() < cleanPos.intValue())
+if (dirtyPos != null && dirtyPos.intValue() <= cleanPos.intValue())
 {
 cfDirty.remove(cfId);
 iter.remove();
@@ -482,9 +482,9 @@ public class CommitLogSegment
  */
 public synchronized Collection<UUID> getDirtyCFIDs()
 {
-removeCleanFromDirty();
 if (cfClean.isEmpty() || cfDirty.isEmpty())
 return cfDirty.keySet();
+
 List<UUID> r = new ArrayList<>(cfDirty.size());
 for (Map.Entry<UUID, AtomicInteger> dirty : cfDirty.entrySet())
 {



[jira] [Commented] (CASSANDRA-6626) Create 2.0-2.1 counter upgrade dtests

2014-05-15 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997700#comment-13997700
 ] 

Russ Hatch commented on CASSANDRA-6626:
---

Here's where the changes for counters were introduced:
https://github.com/riptano/cassandra-dtest/commit/236c8e3bea19fa5735e067e98756c1fbee9a9162

I may have tweaked it a bit since though. 
https://github.com/riptano/cassandra-dtest/blob/master/upgrade_through_versions_test.py#L254

 Create 2.0-2.1 counter upgrade dtests
 --

 Key: CASSANDRA-6626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6626
 Project: Cassandra
  Issue Type: Test
Reporter: Aleksey Yeschenko
Assignee: Russ Hatch
 Fix For: 2.1 rc1


 Create 2.0-2.1 counter upgrade dtests. Something more extensive, yet more 
 specific than 
 https://github.com/riptano/cassandra-dtest/blob/master/upgrade_through_versions_test.py



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6877) pig tests broken

2014-05-15 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-6877:
--

Assignee: Sam Tunnicliffe  (was: Brandon Williams)

 pig tests broken
 

 Key: CASSANDRA-6877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6877
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Sam Tunnicliffe
 Fix For: 2.0.9, 2.1 rc1

 Attachments: 0001-Exclude-cobertura-xerces-dependency.patch


 Not sure what happened here, but I get a smorgasbord of errors running the 
 pig tests now, from XML errors in Xerces to NotFoundExceptions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-3569) Failure detector downs should not break streams

2014-05-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997699#comment-13997699
 ] 

Jonathan Ellis commented on CASSANDRA-3569:
---

So if you yank the cable, the FD marks the node down, then you plug the cable 
back in, and the repair still finishes?

 Failure detector downs should not break streams
 ---

 Key: CASSANDRA-3569
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3569
 Project: Cassandra
  Issue Type: New Feature
Reporter: Peter Schuller
Assignee: Joshua McKenzie
 Fix For: 2.1.1

 Attachments: 3569-2.0.txt, 3569_v1.txt


 CASSANDRA-2433 introduced this behavior just so repairs don't sit there 
 waiting forever. In my opinion the correct fix to that problem is to use TCP 
 keep-alive. Unfortunately the TCP keep-alive period is insanely high by 
 default on a modern Linux, so just doing that is not entirely good either.
 But using the failure detector seems nonsensical to me. We have a 
 communication method, the TCP transport, that we know is used for 
 long-running processes we don't want killed incorrectly for no good reason, 
 and yet we are using a failure detector, tuned for deciding when not to send 
 latency-sensitive requests to nodes, to actively kill a working connection.
 So, rather than add complexity with protocol-based ping/pongs and such, I 
 propose that we simply use TCP keep-alive for streaming connections and 
 instruct operators of production clusters to tweak 
 net.ipv4.tcp_keepalive_{probes,intvl} as appropriate (or whatever the 
 equivalent is on their OS).
 I can submit the patch. Awaiting opinions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
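
A rough illustration of the keep-alive proposal above (a sketch only, not a 
patch, and not the actual streaming code): the socket side is a single option, 
and the probe timing is then tuned at the OS level. The sysctl values in the 
comments are example numbers, not recommendations from the ticket.

{code:java}
import java.net.Socket;

// Hypothetical sketch: enable SO_KEEPALIVE on a (placeholder) streaming socket.
public class KeepAliveSketch
{
    public static void main(String[] args) throws Exception
    {
        Socket streamingSocket = new Socket();   // placeholder, not a real stream session
        streamingSocket.setKeepAlive(true);      // let TCP itself detect dead peers
        System.out.println("keep-alive enabled: " + streamingSocket.getKeepAlive());
        streamingSocket.close();

        // The probe schedule is kernel-level; on Linux it would be tuned with e.g.:
        //   sysctl -w net.ipv4.tcp_keepalive_time=60
        //   sysctl -w net.ipv4.tcp_keepalive_intvl=10
        //   sysctl -w net.ipv4.tcp_keepalive_probes=5
    }
}
{code}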


[3/6] git commit: Remove unused isLocalTask method.

2014-05-15 Thread brandonwilliams
Remove unused isLocalTask method.

Patch by Lyuben Todorov, reviewed by brandonwilliams for CASSANDRA-7181


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dceb739f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dceb739f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dceb739f

Branch: refs/heads/trunk
Commit: dceb739f32b7d8a8525b3af99cf51b7acdd3b7f6
Parents: ea0c399
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 7 16:30:05 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 7 16:30:05 2014 -0500

--
 .../org/apache/cassandra/repair/StreamingRepairTask.java| 9 -
 1 file changed, 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dceb739f/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
--
diff --git a/src/java/org/apache/cassandra/repair/StreamingRepairTask.java 
b/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
index 1fd2b4f..f7203a4 100644
--- a/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
+++ b/src/java/org/apache/cassandra/repair/StreamingRepairTask.java
@@ -46,15 +46,6 @@ public class StreamingRepairTask implements Runnable, 
StreamEventHandler
 this.request = request;
 }
 
-/**
- * Returns true if the task if the task can be executed locally, false if
- * it has to be forwarded.
- */
-public boolean isLocalTask()
-{
-return request.initiator.equals(request.src);
-}
-
 public void run()
 {
 if (request.src.equals(FBUtilities.getBroadcastAddress()))



[1/3] git commit: Fix hardcoded overhead size in commit log test.

2014-05-15 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b2dd6a7f6 -> 259e17df3
  refs/heads/trunk 70d18cd3e -> b7d5f5a16


Fix hardcoded overhead size in commit log test.

Patch by Benedict, reviewed by brandonwilliams for CASSANDRA-6974


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/259e17df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/259e17df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/259e17df

Branch: refs/heads/cassandra-2.1
Commit: 259e17df3bd39d1cedf5e3a40c88b1d0d8efdc33
Parents: b2dd6a7
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri May 9 10:04:55 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri May 9 10:04:55 2014 -0500

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 2 +-
 test/unit/org/apache/cassandra/db/CommitLogTest.java | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/259e17df/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
index 2120d3e..c87b328 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java
@@ -74,7 +74,7 @@ public class CommitLogSegment
 }
 
 // The commit log entry overhead in bytes (int: length + int: head 
checksum + int: tail checksum)
-static final int ENTRY_OVERHEAD_SIZE = 4 + 4 + 4;
+public static final int ENTRY_OVERHEAD_SIZE = 4 + 4 + 4;
 
 // The commit log (chained) sync marker/header size in bytes (int: length 
+ int: checksum [segmentId, position])
 static final int SYNC_MARKER_SIZE = 4 + 4;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/259e17df/test/unit/org/apache/cassandra/db/CommitLogTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CommitLogTest.java 
b/test/unit/org/apache/cassandra/db/CommitLogTest.java
index ddab9ea..660e91e 100644
--- a/test/unit/org/apache/cassandra/db/CommitLogTest.java
+++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java
@@ -34,6 +34,7 @@ import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.commitlog.CommitLog;
 import org.apache.cassandra.db.commitlog.CommitLogDescriptor;
+import org.apache.cassandra.db.commitlog.CommitLogSegment;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.net.MessagingService;
 
@@ -174,7 +175,7 @@ public class CommitLogTest extends SchemaLoader
 rm.add(Standard1, Util.cellname(c1), ByteBuffer.allocate(0), 0);
 
 int max = (DatabaseDescriptor.getCommitLogSegmentSize() / 2);
-max -= (4 + 8 + 8); // log entry overhead
+max -= CommitLogSegment.ENTRY_OVERHEAD_SIZE; // log entry overhead
 return max - (int) Mutation.serializer.serializedSize(rm, 
MessagingService.current_version);
 }
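
To make the arithmetic in the test above concrete, here is a hedged sketch of 
the size calculation (the segment size and serialized mutation size below are 
placeholder numbers, not values pulled from Cassandra):

{code:java}
// Illustrative only: mirrors the test's max-record-size arithmetic.
public class EntryOverheadSketch
{
    // int length + int head checksum + int tail checksum, as in the constant above
    static final int ENTRY_OVERHEAD_SIZE = 4 + 4 + 4;

    static int maxRecordDataSize(int commitLogSegmentSizeBytes, long serializedMutationSize)
    {
        int max = commitLogSegmentSizeBytes / 2;  // half a segment, as in the test
        max -= ENTRY_OVERHEAD_SIZE;               // per-entry framing overhead
        return max - (int) serializedMutationSize;
    }

    public static void main(String[] args)
    {
        int segmentSize = 32 * 1024 * 1024;       // placeholder: a 32 MB segment
        System.out.println(maxRecordDataSize(segmentSize, 100));
    }
}
{code}

The point of the change is simply that the test now shares the constant with 
CommitLogSegment, so the hardcoded 4 + 8 + 8 can no longer drift out of sync 
with the real 4 + 4 + 4 overhead.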
 



[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-05-15 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13994068#comment-13994068
 ] 

Pavel Yaskevich commented on CASSANDRA-6285:


No, by default it's turned off, because the Thrift-side expectation is that 
once the invocation is complete nobody else holds the buffers; but it seems 
the problem is that on the Cassandra side we never actually copy the buffer 
for the commit log (or was it something else?). So we need to set the Thrift 
server to alwaysReallocate explicitly.

[~rbranson] I can give you an updated jar so you don't have to wait for the 
Cassandra release that has alwaysReallocate set to true by default.
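
A minimal sketch of the underlying hazard (hypothetical names, not the Thrift 
or Cassandra APIs): if the server recycles the request buffer after the call 
returns, anything that kept a reference to that buffer, rather than a copy, 
can later observe someone else's bytes. Always reallocating (or defensively 
copying) avoids that.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical illustration of buffer reuse corrupting a retained reference.
public class BufferReuseSketch
{
    public static void main(String[] args)
    {
        ByteBuffer frameBuffer = ByteBuffer.allocate(8);
        frameBuffer.put(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 }).flip();

        // Unsafe: keep a view of the server-owned buffer across the call boundary.
        ByteBuffer kept = frameBuffer.duplicate();

        // Safe: take a copy before the buffer goes back to the pool (roughly what
        // "always reallocate" buys on the server side).
        ByteBuffer copy = ByteBuffer.allocate(frameBuffer.remaining());
        copy.put(frameBuffer.duplicate()).flip();

        // The server reuses the same buffer for the next request.
        frameBuffer.clear();
        frameBuffer.put(new byte[] { 9, 9, 9, 9, 9, 9, 9, 9 }).flip();

        System.out.println("kept sees: " + kept.get(0)); // 9 -- the retained view is corrupted
        System.out.println("copy sees: " + copy.get(0)); // 1 -- the copy still holds the payload
    }
}
{code}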

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STCS kept the compactions running.
 Especially my own table is one I would like to move to LCS.
 After a major compaction with STCS, the move to LCS fails with the same 
 Exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/3] git commit: don't use o.a.commons.io for one method

2014-05-15 Thread brandonwilliams
don't use o.a.commons.io for one method


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/39b0d0e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/39b0d0e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/39b0d0e3

Branch: refs/heads/trunk
Commit: 39b0d0e3efe3f0b58c317a0fa8766dc99ad5ec11
Parents: ce7bf5e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 7 18:11:47 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 7 18:11:47 2014 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/39b0d0e3/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index c5afb25..33b7303 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -17,9 +17,7 @@
  */
 package org.apache.cassandra.db;
 
-import java.io.File;
-import java.io.FileFilter;
-import java.io.IOException;
+import java.io.*;
 import java.lang.management.ManagementFactory;
 import java.nio.ByteBuffer;
 import java.util.*;
@@ -2173,9 +2171,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final JSONObject manifestJSON = new JSONObject();
 manifestJSON.put(files, filesJSONArr);
 
+
 try
 {
-org.apache.commons.io.FileUtils.writeStringToFile(manifestFile, 
manifestJSON.toJSONString());
+PrintStream out = new PrintStream(manifestFile);
+out.println(manifestJSON.toJSONString());
+out.close();
 }
 catch (IOException e)
 {
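
As an aside, the same write could also be expressed with try-with-resources so 
the stream is closed even if the write throws; a hedged sketch under that 
assumption (the file name and JSON content below are placeholders):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.PrintStream;

// Sketch only: write a JSON string to a manifest file without commons-io.
public class ManifestWriteSketch
{
    public static void main(String[] args) throws IOException
    {
        File manifestFile = new File("manifest.json");   // placeholder path
        String manifestJSON = "{\"files\": []}";          // placeholder content

        try (PrintStream out = new PrintStream(manifestFile))
        {
            out.println(manifestJSON);
        }
    }
}
{code}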


