[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants

2014-08-15 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098267#comment-14098267
 ] 

Robert Stupp commented on CASSANDRA-7769:
-

So, this syntax would support:

# full, nested dollar-quoted string constants like this 
{{$function$the quick brown fox $q$jump$$q$ over the lazy $d$dog$d$$function$}} 
which is valid and would be the alternative to
{{'the quick brown fox ''jump$'' over the lazy ''dog'''}}
# "simple" non-nested ones like this 
{{$function$the quick brown fox jump$ over the lazy dog$function$}}
# and even this simple style 
{{$$the quick brown fox jump$ over the lazy dog$$}}


> Implement pg-style dollar syntax for string constants
> -
>
> Key: CASSANDRA-7769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7769
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
>
> Follow-up of CASSANDRA-7740:
> {{$function$...$function$}} in addition to string style variant.
> See also 
> http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING





[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants

2014-08-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098311#comment-14098311
 ] 

Sylvain Lebresne commented on CASSANDRA-7769:
-

I'm unclear on why the "full, nested" one is useful (that is, I don't 
understand the justification in the PG doc). But it suggests some interpretation 
that PG does when executing the function, which I'm not sure we care about or is 
relevant to us. If I'm missing something obvious, so be it, but otherwise I 
think we should skip the nesting part (if only because it will probably be 
annoying to handle in antlr).

> Implement pg-style dollar syntax for string constants
> -
>
> Key: CASSANDRA-7769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7769
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
>
> Follow-up of CASSANDRA-7740:
> {{$function$...$function$}} in addition to string style variant.
> See also 
> http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING





[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants

2014-08-15 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098322#comment-14098322
 ] 

Robert Stupp commented on CASSANDRA-7769:
-

bq. why the "full, nested" one is useful (that is, I don't understand the 
justification of the PG doc)

Yea - that's true - including the PG doc thing. It might be that PG needs to omit 
{{'}} in dollar-quoted strings completely. Even the 
{{$someArbitraryFoo$...$someArbitraryFoo$}} variant might cause me to shout $%!&"$ 
words at antlr ;)
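
For illustration only (not part of any attached patch): scanning for the matching closing tag by hand is small, and handles all three variants listed in the earlier comment. All class and method names below are invented.
{code}
public final class DollarQuote
{
    /**
     * Returns the contents of a dollar-quoted constant starting at 'start',
     * or null if the input does not contain one there. A real lexer would
     * also restrict which characters may appear in the tag.
     */
    public static String scan(String input, int start)
    {
        if (start >= input.length() || input.charAt(start) != '$')
            return null;
        int tagEnd = input.indexOf('$', start + 1);
        if (tagEnd < 0)
            return null;
        String tag = input.substring(start, tagEnd + 1);   // "$$" or "$function$"
        int close = input.indexOf(tag, tagEnd + 1);        // first matching closing tag
        if (close < 0)
            return null;                                   // unterminated constant
        return input.substring(tagEnd + 1, close);         // body; '$' and quotes appear verbatim
    }

    public static void main(String[] args)
    {
        System.out.println(scan("$$the quick brown fox jump$ over the lazy dog$$", 0));
        System.out.println(scan("$function$the quick brown fox $q$jump$$q$ over the lazy $d$dog$d$$function$", 0));
    }
}
{code}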

> Implement pg-style dollar syntax for string constants
> -
>
> Key: CASSANDRA-7769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7769
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
>
> Follow-up of CASSANDRA-7740:
> {{$function$...$function$}} in addition to string style variant.
> See also 
> http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING





[jira] [Commented] (CASSANDRA-6839) Support non equal conditions (for LWT)

2014-08-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098325#comment-14098325
 ] 

Sylvain Lebresne commented on CASSANDRA-6839:
-

bq. I would like to remove the check for incompatible conditions

I'm personally good with that.

> Support non equal conditions (for LWT)
> --
>
> Key: CASSANDRA-6839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.0.10
>
> Attachments: 6839-v2.txt, 6839-v3.txt, 6839.txt
>
>
> We currently only support equal conditions in conditional updates, but it 
> would be relatively trivial to support non-equal ones as well. At the very 
> least we should support '>', '>=', '<' and '<=', though it would probably 
> also make sense to add a non-equal relation too ('!=').





[jira] [Commented] (CASSANDRA-7542) Reduce CAS contention

2014-08-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098326#comment-14098326
 ] 

Benedict commented on CASSANDRA-7542:
-

OK. Not sure if it is worth our pursuing this right now then, at least as far 
as a 2.0 delivery is concerned. When I get some more free time I'll create some 
benchmarks to test how much of an improvement these (or future) changes bring.

> Reduce CAS contention
> -
>
> Key: CASSANDRA-7542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7542
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Benedict
> Fix For: 2.0.10
>
>
> CAS updates on the same CQL partition can lead to heavy contention inside C*. I 
> am looking for simple ways (no algorithmic changes) to reduce contention, as 
> its penalty is high in terms of latency, especially for reads. 
> We can put some sort of synchronization on the CQL partition at the StorageProxy 
> level. This will reduce contention at least for all requests landing on one 
> box for the same partition. 
> Here is an example of why it will help:
> 1) Say 1 write and 2 read CAS requests for the same partition key are sent to 
> C* in parallel. 
> 2) Since the client is token-aware, it sends these 3 requests to the same C* 
> instance A. (Let's assume that all 3 requests go to the same instance A.) 
> 3) In this C* instance A, all 3 CAS requests will contend with each other in 
> Paxos. (This is bad.)
> To reduce the contention in 3), what I am proposing is to add a lock on the 
> partition key, similar to what we do in PaxosState.java, to serialize these 3 
> requests. This will remove the contention and improve performance, as these 3 
> requests will not collide with each other.
> Another improvement we can do in the client is to pick a deterministic live 
> replica for a given partition when doing CAS.
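
For illustration, the per-partition serialization described above could look roughly like the sketch below, using Guava's {{Striped}} locks as one plausible mechanism; the class and method names are invented, not taken from any patch.
{code}
import java.nio.ByteBuffer;
import java.util.concurrent.Callable;
import java.util.concurrent.locks.Lock;

import com.google.common.util.concurrent.Striped;

// Rough sketch (names invented): serialize CAS requests per partition key on
// the coordinator so they do not contend with each other inside Paxos.
public class CasSerializer
{
    private static final Striped<Lock> LOCKS = Striped.lazyWeakLock(1024);

    public static <T> T withPartitionLock(ByteBuffer partitionKey, Callable<T> casRound) throws Exception
    {
        Lock lock = LOCKS.get(partitionKey);   // same key -> same lock, so CAS on one partition queues up locally
        lock.lock();
        try
        {
            return casRound.call();            // run the Paxos round without local contention
        }
        finally
        {
            lock.unlock();
        }
    }
}
{code}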





[jira] [Created] (CASSANDRA-7775) Cassandra attempts to flush an empty memtable into disk and fails

2014-08-15 Thread Omri Bahumi (JIRA)
Omri Bahumi created CASSANDRA-7775:
--

 Summary: Cassandra attempts to flush an empty memtable into disk 
and fails
 Key: CASSANDRA-7775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7775
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: $ nodetool version
ReleaseVersion: 2.0.6
$ java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Reporter: Omri Bahumi


I'm not sure what triggers this flush, but when it happens the following 
appears in our logs:
{code}
 INFO [OptionalTasks:1] 2014-08-15 02:24:20,115 ColumnFamilyStore.java (line 
785) Enqueuing flush of Memtable-app_recs_best_in_expr_prefix2@1219170646(0/0 
serialized/live bytes, 0 ops)
 INFO [FlushWriter:34] 2014-08-15 02:24:20,116 Memtable.java (line 331) Writing 
Memtable-app_recs_best_in_expr_prefix2@1219170646(0/0 serialized/live bytes, 0 
ops)
ERROR [FlushWriter:34] 2014-08-15 02:24:20,127 CassandraDaemon.java (line 196) 
Exception in thread Thread[FlushWriter:34,5,main]
java.lang.RuntimeException: Cannot get comparator 1 in 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type).
 This might due to a mismatch between the schema and the data read
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:140)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:96)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
at 
org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:125)
at 
org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:122)
at java.util.TreeMap.compare(TreeMap.java:1188)
at java.util.TreeMap$NavigableSubMap.(TreeMap.java:1264)
at java.util.TreeMap$AscendingSubMap.(TreeMap.java:1699)
at java.util.TreeMap.tailMap(TreeMap.java:905)
at java.util.TreeSet.tailSet(TreeSet.java:350)
at java.util.TreeSet.tailSet(TreeSet.java:383)
at 
org.apache.cassandra.db.RangeTombstone$Tracker.update(RangeTombstone.java:203)
at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:192)
at 
org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:138)
at 
org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:365)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:318)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than 
size (1)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at 
com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 23 more
{code}

After this happens, pending tasks start piling up in the MemtablePostFlusher thread pool.
When trying to restart the cluster, a similar exception occurs while replaying the 
commit log.
Our way of recovering from this is to delete all commit logs on the faulty 
node, start it, and issue a repair.





[jira] [Updated] (CASSANDRA-7704) FileNotFoundException during STREAM-OUT triggers 100% CPU usage

2014-08-15 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7704:


Attachment: 7704-2.1.txt

Attaching a new version which does not cancel the task that was run, and 
updates the unit tests to match the new behaviour

> FileNotFoundException during STREAM-OUT triggers 100% CPU usage
> ---
>
> Key: CASSANDRA-7704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7704
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rick Branson
>Assignee: Benedict
> Fix For: 2.0.10, 2.1.0
>
> Attachments: 7704-2.1.txt, 7704.txt, backtrace.txt, other-errors.txt
>
>
> See attached backtrace which was what triggered this. This stream failed and 
> then ~12 seconds later it emitted that exception. At that point, all CPUs 
> went to 100%. A thread dump shows all the ReadStage threads stuck inside 
> IntervalTree.searchInternal inside of CFS.markReferenced().





[jira] [Updated] (CASSANDRA-7704) FileNotFoundException during STREAM-OUT triggers 100% CPU usage

2014-08-15 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7704:


Attachment: (was: 7704.20.v2.txt)

> FileNotFoundException during STREAM-OUT triggers 100% CPU usage
> ---
>
> Key: CASSANDRA-7704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7704
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rick Branson
>Assignee: Benedict
> Fix For: 2.0.10, 2.1.0
>
> Attachments: 7704-2.1.txt, 7704.txt, backtrace.txt, other-errors.txt
>
>
> See attached backtrace which was what triggered this. This stream failed and 
> then ~12 seconds later it emitted that exception. At that point, all CPUs 
> went to 100%. A thread dump shows all the ReadStage threads stuck inside 
> IntervalTree.searchInternal inside of CFS.markReferenced().





[jira] [Commented] (CASSANDRA-7775) Cassandra attempts to flush an empty memtable into disk and fails

2014-08-15 Thread Omri Bahumi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098361#comment-14098361
 ] 

Omri Bahumi commented on CASSANDRA-7775:


Here's another manifestation of this bug, this time with the startup exception:
{code}
 INFO [OptionalTasks:1] 2014-08-15 04:31:56,372 ColumnFamilyStore.java (line 
785) Enqueuing flush of Memtable-app_recs_best_in_expr_prefix2@1878214183(0/0 
serialized/live bytes, 0 ops)
 INFO [FlushWriter:38] 2014-08-15 04:31:56,373 Memtable.java (line 331) Writing 
Memtable-app_recs_best_in_expr_prefix2@1878214183(0/0 serialized/live bytes, 0 
ops)
ERROR [FlushWriter:38] 2014-08-15 04:31:56,380 CassandraDaemon.java (line 196) 
Exception in thread Thread[FlushWriter:38,5,main]
java.lang.RuntimeException: Cannot get comparator 1 in 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type).
 This might due to a mismatch between the schema and the data read
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:140)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:96)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
at 
org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:125)
at 
org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:122)
at java.util.TreeMap.compare(TreeMap.java:1188)
at java.util.TreeMap$NavigableSubMap.(TreeMap.java:1264)
at java.util.TreeMap$AscendingSubMap.(TreeMap.java:1699)
at java.util.TreeMap.tailMap(TreeMap.java:905)
at java.util.TreeSet.tailSet(TreeSet.java:350)
at java.util.TreeSet.tailSet(TreeSet.java:383)
at 
org.apache.cassandra.db.RangeTombstone$Tracker.update(RangeTombstone.java:203)
at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:192)
at 
org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:138)
at 
org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:365)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:318)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than 
size (1)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at 
com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 23 more
{code}

Startup exception:
{code}
 INFO [main] 2014-08-15 08:52:53,796 CommitLog.java (line 130) Replaying 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485034.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485035.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485040.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485041.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485042.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485043.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485044.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485045.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485046.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485047.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485048.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485049.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485050.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485051.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485052.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485053.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485054.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485055.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485056.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485057.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485058.log, 
/var/lib/cassandra/commitlog/CommitLog-3-1408027485059.log, 
/var/lib/cassandra/commitlog/CommitLog-3-140802

[jira] [Commented] (CASSANDRA-7763) cql_tests static_with_empty_clustering test failure

2014-08-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098406#comment-14098406
 ] 

Benedict commented on CASSANDRA-7763:
-

It's a shame we've only spotted this now, as it's a bit late to optimise this again for 
2.1, but we should perhaps revisit it later (for 3.0), as the introduction of 
these virtual method invocations was a large part of the reason for 
CASSANDRA-6934 in the first place. It should be possible to avoid these 
invocations on most calls, since we only actually encounter static columns 
infrequently, but let's leave it for now.

This patch does need to include the changes to the 
AbstractCType.compareUnsigned, WithCollection.compare() and 
AbstractNativeCell.compare() methods as well, though.



> cql_tests static_with_empty_clustering test failure
> ---
>
> Key: CASSANDRA-7763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7763
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ryan McGuire
>Assignee: Sylvain Lebresne
> Fix For: 2.1 rc6
>
> Attachments: 7763.txt
>
>
> {code}
> ==
> FAIL: static_with_empty_clustering_test (cql_tests.TestCQL)
> --
> Traceback (most recent call last):
>   File "/home/ryan/git/datastax/cassandra-dtest/tools.py", line 213, in 
> wrapped
> f(obj)
>   File "/home/ryan/git/datastax/cassandra-dtest/cql_tests.py", line 4082, in 
> static_with_empty_clustering_test
> assert_one(cursor, "SELECT * FROM test", ['partition1', '', 'static 
> value', 'value'])
>   File "/home/ryan/git/datastax/cassandra-dtest/assertions.py", line 40, in 
> assert_one
> assert res == [expected], res
> AssertionError: [[u'partition1', u'', None, None], [u'partition1', u'', None, 
> None], [u'partition1', u'', None, u'value']]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-Ex54V7
> - >> end captured logging << -
> --
> Ran 1 test in 6.866s
> FAILED (failures=1)
> {code}
> regression from CASSANDRA-7455?





[jira] [Commented] (CASSANDRA-7029) Investigate alternative transport protocols for both client and inter-server communications

2014-08-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098549#comment-14098549
 ] 

T Jake Luciani commented on CASSANDRA-7029:
---

Somewhat related, an easy win here is that Java 7 lets you drop-in replace sockets 
to work with InfiniBand networks, which bypass the OS:

http://www.infoq.com/articles/Java-7-Sockets-Direct-Protocol

"Infiniband delivers 600% better low-latency and 370% better throughput 
performance  than Ethernet (10GE)."

> Investigate alternative transport protocols for both client and inter-server 
> communications
> ---
>
> Key: CASSANDRA-7029
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7029
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>  Labels: performance
> Fix For: 3.0
>
>
> There are a number of reasons to think we can do better than TCP for our 
> communications:
> 1) We can actually tolerate sporadic small message losses, so guaranteed 
> delivery isn't essential (although for larger messages it probably is)
> 2) As shown in \[1\] and \[2\], Linux can behave quite suboptimally with 
> regard to TCP message delivery when the system is under load. Judging from 
> the theoretical description, this is likely to apply even when the 
> system-load is not high, but the number of processes to schedule is high. 
> Cassandra generally has a lot of threads to schedule, so this is quite 
> pertinent for us. UDP performs substantially better here.
> 3) Even when the system is not under load, UDP has a lower CPU burden, and 
> that burden is constant regardless of the number of connections it processes. 
> 4) On a simple benchmark on my local PC, using non-blocking IO for UDP and 
> busy spinning on IO I can actually push 20-40% more throughput through 
> loopback (where TCP should be optimal, as no latency), even for very small 
> messages. Since we can see networking taking multiple CPUs' worth of time 
> during a stress test, using a busy-spin for ~100micros after last message 
> receipt is almost certainly acceptable, especially as we can (ultimately) 
> process inter-server and client communications on the same thread/socket in 
> this model.
> 5) We can optimise the threading model heavily: since we generally process 
> very small messages (200 bytes not at all implausible), the thread signalling 
> costs on the processing thread can actually dramatically impede throughput. 
> In general it costs ~10micros to signal (and passing the message to another 
> thread for processing in the current model requires signalling). For 200-byte 
> messages this caps our throughput at 20MB/s.
> I propose to knock up a highly naive UDP-based connection protocol with 
> super-trivial congestion control over the course of a few days, with the only 
> initial goal being maximum possible performance (not fairness, reliability, 
> or anything else), and trial it in Netty (possibly making some changes to 
> Netty to mitigate thread signalling costs). The reason for knocking up our 
> own here is to get a ceiling on what the absolute limit of potential for this 
> approach is. Assuming this pans out with performance gains in C* proper, we 
> then look to contributing to/forking the udt-java project and see how easy it 
> is to bring performance in line with what we can get with our naive approach 
> (I don't suggest starting here, as the project is using blocking old-IO, and 
> modifying it with latency in mind may be challenging, and we won't know for 
> sure what the best case scenario is).
> \[1\] 
> http://test-docdb.fnal.gov/0016/001648/002/Potential%20Performance%20Bottleneck%20in%20Linux%20TCP.PDF
> \[2\] 
> http://cd-docdb.fnal.gov/cgi-bin/RetrieveFile?docid=1968;filename=Performance%20Analysis%20of%20Linux%20Networking%20-%20Packet%20Receiving%20(Official).pdf;version=2
> Further related reading:
> http://public.dhe.ibm.com/software/commerce/doc/mft/cdunix/41/UDTWhitepaper.pdf
> https://mospace.umsystem.edu/xmlui/bitstream/handle/10355/14482/ChoiUndPerTcp.pdf?sequence=1
> https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.153.3762&rep=rep1&type=pdf
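
As a rough illustration of the busy-spinning non-blocking UDP receive described in point 4, the sketch below uses plain java.nio; the port and the 100-microsecond spin window are arbitrary placeholders, and this is not code from any prototype.
{code}
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Minimal busy-spin receive loop over non-blocking UDP.
public class UdpSpinReceiver
{
    public static void main(String[] args) throws Exception
    {
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(7000));   // port chosen arbitrarily
        channel.configureBlocking(false);

        ByteBuffer buf = ByteBuffer.allocateDirect(65536);
        long spinWindowNanos = 100_000;              // ~100 micros after the last receipt
        long lastReceipt = System.nanoTime();

        while (true)
        {
            buf.clear();
            if (channel.receive(buf) != null)        // non-blocking: null when nothing is pending
            {
                lastReceipt = System.nanoTime();
                buf.flip();
                // message processing would go here
            }
            else if (System.nanoTime() - lastReceipt > spinWindowNanos)
            {
                Thread.yield();                      // back off once the spin window has elapsed
            }
        }
    }
}
{code}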





[jira] [Commented] (CASSANDRA-7585) cassandra sstableloader connection refused with inter_node_encryption

2014-08-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098550#comment-14098550
 ] 

Marcus Eriksson commented on CASSANDRA-7585:


ok, +1

> cassandra sstableloader connection refused with inter_node_encryption
> -
>
> Key: CASSANDRA-7585
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7585
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Tools
>Reporter: Samphel Norden
>Assignee: Yuki Morishita
> Fix For: 2.0.10, 2.1.1
>
> Attachments: 7585-2.0-v2.txt, 7585-2.0.txt, sstableloader-help.txt
>
>
> cassandra sstableloader connection refused with inter_node_encryption
> When using sstableloader to import tables  (cassandra 2.0.5) with inter-node 
> encryption and client encryption enabled, I get a connection refused error
> I am using
> sstableloader -d $myhost -p 9160 -u cassandra -pw cassandra -ciphers 
> TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
>  -st JKS  -tf org.apache.cassandra.thrift.SSLTransportFactory -ts 
> /path/to/truststore  -tspw  $fullpath/$table
> Errors out with
> Streaming session ID: 1bc395c0-fbb2-11e3-9812-73da15121373
>  WARN 17:13:34,147 Failed attempt 1 to connect to
> Similar problem reported in cassandra 2.0.8 by another user
> http://stackoverflow.com/questions/24390604/cassandra-sstableloader-connection-refused-with-inter-node-encryption
> ==
> Relevant cassandra.yaml snippet (with obfuscation)
> server_encryption_options:
>     internode_encryption: all
>     keystore: /path/to/keystore
>     keystore_password:
>     truststore: /path/to/truststore
>     truststore_password:
>     # More advanced defaults below:
>     protocol: TLS
>     algorithm: SunX509
>     store_type: JKS
>     cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]
>     require_client_auth: true
> # enable or disable client/server encryption.
> client_encryption_options:
>     enabled: true
>     keystore: /path/to/keystore
>     keystore_password:
>     #require_client_auth: true
>     # Set trustore and truststore_password if require_client_auth is true
>     truststore: /path/to/truststore
>     truststore_password:
>     # More advanced defaults below:
>     protocol: TLS
>     algorithm: SunX509
>     store_type: JKS
>     cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SH

[jira] [Resolved] (CASSANDRA-7286) Exception: NPE

2014-08-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-7286.
-

Resolution: Duplicate

Almost certainly CASSANDRA-7756, let's address it there.

> Exception: NPE 
> ---
>
> Key: CASSANDRA-7286
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7286
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Julien Anguenot
> Attachments: npe_cassandra_2_0_8.txt, readstage_npe.txt
>
>
> Sometimes Cassandra nodes (in a multi-datacenter deployment) throw an 
> NPE (see the attached stack trace).
> Let me know what additional information I could provide.
> Thank you.





[jira] [Created] (CASSANDRA-7776) Allow multiple MR jobs to concurrently write to the same column family from the same node using CqlBulkOutputFormat

2014-08-15 Thread Paul Pak (JIRA)
Paul Pak created CASSANDRA-7776:
---

 Summary: Allow multiple MR jobs to concurrently write to the same 
column family from the same node using CqlBulkOutputFormat
 Key: CASSANDRA-7776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7776
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Reporter: Paul Pak
Assignee: Paul Pak
Priority: Minor


This can be done by using unique output directories for each MR job.
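
A minimal sketch of what unique output directories could look like in practice; the layout and names below are invented for illustration and are not taken from the attached patch.
{code}
import java.io.File;
import java.util.UUID;

// Hypothetical illustration: derive a per-job output directory so two MR jobs
// writing the same column family from one node never share local sstable files.
public class BulkOutputDirs
{
    public static File uniqueOutputDir(File baseDir, String keyspace, String columnFamily, String jobId)
    {
        // e.g. <base>/<keyspace>/<cf>-<jobId>-<random suffix>
        String unique = columnFamily + "-" + jobId + "-" + UUID.randomUUID();
        return new File(new File(baseDir, keyspace), unique);
    }
}
{code}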





[jira] [Updated] (CASSANDRA-7776) Allow multiple MR jobs to concurrently write to the same column family from the same node using CqlBulkOutputFormat

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-7776:


Attachment: trunk-7776-v1.txt

> Allow multiple MR jobs to concurrently write to the same column family from 
> the same node using CqlBulkOutputFormat
> ---
>
> Key: CASSANDRA-7776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7776
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
> Attachments: trunk-7776-v1.txt
>
>
> This can be done by using unique output directories for each MR job.





[jira] [Created] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)
Paul Pak created CASSANDRA-:
---

 Summary: Ability to clean up local sstable files after they've 
been loaded by the CqlBulkRecordWriter
 Key: CASSANDRA-
 URL: https://issues.apache.org/jira/browse/CASSANDRA-
 Project: Cassandra
  Issue Type: Improvement
Reporter: Paul Pak
Assignee: Paul Pak
Priority: Minor


Deleting the source files should most likely be the default behavior with the 
ability to disable it via config.





[jira] [Commented] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098671#comment-14098671
 ] 

Paul Pak commented on CASSANDRA-:
-

Deletion (if enabled) should only occur on successful load.

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Updated] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-:


Attachment: trunk--v1.txt

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
> Attachments: trunk--v1.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Updated] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-:


Labels: cql3 hadoop  (was: )

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk--v1.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Updated] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-:


Component/s: Hadoop

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk--v1.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Updated] (CASSANDRA-7776) Allow multiple MR jobs to concurrently write to the same column family from the same node using CqlBulkOutputFormat

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-7776:


Labels: cql3 hadoop  (was: )

> Allow multiple MR jobs to concurrently write to the same column family from 
> the same node using CqlBulkOutputFormat
> ---
>
> Key: CASSANDRA-7776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7776
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk-7776-v1.txt
>
>
> This can be done by using unique output directories for each MR job.





[jira] [Updated] (CASSANDRA-7768) Error when creating multiple CQLSSTableWriters for more than one column family in the same keyspace

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-7768:


Component/s: Hadoop

> Error when creating multiple CQLSSTableWriters for more than one column 
> family in the same keyspace
> ---
>
> Key: CASSANDRA-7768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk-7768-v1.txt
>
>
> This occurs because, if the keyspace has already been loaded (due to 
> another column family having previously been loaded in the same keyspace), the 
> CQLSSTableWriter builder only loads the column family via 
> Schema.load(CFMetaData). However, Schema.load(CFMetaData) only adds to 
> Schema.cfIdMap without making the corresponding addition to the CFMetaData map 
> belonging to the KSMetaData map.





[jira] [Updated] (CASSANDRA-7768) Error when creating multiple CQLSSTableWriters for more than one column family in the same keyspace

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-7768:


Labels: cql3 hadoop  (was: )

> Error when creating multiple CQLSSTableWriters for more than one column 
> family in the same keyspace
> ---
>
> Key: CASSANDRA-7768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7768
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk-7768-v1.txt
>
>
> This occurs because, if the keyspace has already been loaded (due to 
> another column family having previously been loaded in the same keyspace), the 
> CQLSSTableWriter builder only loads the column family via 
> Schema.load(CFMetaData). However, Schema.load(CFMetaData) only adds to 
> Schema.cfIdMap without making the corresponding addition to the CFMetaData map 
> belonging to the KSMetaData map.





[jira] [Commented] (CASSANDRA-7766) Secondary index not working after a while

2014-08-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098683#comment-14098683
 ] 

Brandon Williams commented on CASSANDRA-7766:
-

Haven't been able to reproduce on 2.1.0 head, though I'm not sure how long a 
while is.  Forcing a flush/compaction doesn't help either.

> Secondary index not working after a while
> -
>
> Key: CASSANDRA-7766
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7766
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.0-rc5 with small clusters (one or two nodes)
>Reporter: Fabrice Larcher
> Attachments: result-failure.txt, result-success.txt
>
>
> Since 2.1.0-rc2, it appears that the secondary indexes are not always 
> working. Immediately after the INSERT of a row, the index seems to be there. 
> But after a while (I do not know when or why), SELECT statements based on any 
> secondary index do not return the corresponding row(s) anymore. I noticed 
> that a restart of C* may have an impact (the data inserted before the restart 
> may be seen through the index, even if it was not returned before the 
> restart).
> Here is a use-case example (in order to clarify my request):
> {code}
> CREATE TABLE IF NOT EXISTS ks.cf ( k int PRIMARY KEY, ind ascii, value text);
> CREATE INDEX IF NOT EXISTS ks_cf_index ON ks.cf(ind);
> INSERT INTO ks.cf (k, ind, value) VALUES (1, 'toto', 'Hello');
> SELECT * FROM ks.cf WHERE ind = 'toto'; // Returns no result after a while
> {code}
> The last SELECT statement may or may not return a row depending on when the 
> request is made. I experienced this with 2.1.0-rc5 through CQLSH with 
> clusters of one and two nodes. Since it depends on the timing of the 
> request, I am not able to provide a way to reproduce it systematically 
> (it appears to be linked to some scheduled job inside C*).





[jira] [Commented] (CASSANDRA-7731) Get max values for live/tombstone cells per slice

2014-08-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098700#comment-14098700
 ] 

Jeremiah Jordan commented on CASSANDRA-7731:


LGTM +1

> Get max values for live/tombstone cells per slice
> -
>
> Key: CASSANDRA-7731
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7731
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Cyril Scetbon
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7731-2.0.txt, 7731-2.1.txt
>
>
> I think you should not say that slice statistics are valid for the [last five 
> minutes|https://github.com/apache/cassandra/blob/cassandra-2.0/src/java/org/apache/cassandra/tools/NodeCmd.java#L955-L956]
> in the CFSTATS command of nodetool. I've read the yammer documentation for 
> Histograms and there is no way to force values to expire after x minutes 
> except by 
> [clearing|http://grepcode.com/file/repo1.maven.org/maven2/com.yammer.metrics/metrics-core/2.1.2/com/yammer/metrics/core/Histogram.java#96]
> it. The only thing I can see is that the last snapshot used to provide the 
> median (or whatever you'd use instead) is based on 1028 values.
> I think we should also be able to detect that some requests are accessing a 
> lot of live/tombstone cells per query, and that's not possible for now without, 
> for example, activating DEBUG for SliceQueryFilter and tweaking the 
> threshold. Currently, since nodetool cfstats returns the median, we miss it if 
> only a small fraction of the queries are scanning a lot of live/tombstone cells!





[jira] [Commented] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098702#comment-14098702
 ] 

Brandon Williams commented on CASSANDRA-:
-

I think I'd prefer it the other way around; the principle of least surprise.

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk--v1.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Commented] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098720#comment-14098720
 ] 

Paul Pak commented on CASSANDRA-:
-

That's perfectly fine with me. Updating.

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk--v1.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Updated] (CASSANDRA-7777) Ability to clean up local sstable files after they've been loaded by the CqlBulkRecordWriter

2014-08-15 Thread Paul Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Pak updated CASSANDRA-:


Attachment: trunk--v2.txt

Now defaults to not cleaning up the source directory.
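
For illustration, the opt-in cleanup discussed in this thread boils down to something like the sketch below; the flag, class, and method names are made up, and {{FileUtils.deleteRecursive}} is used only as one way to do the delete.
{code}
import java.io.File;

import org.apache.cassandra.io.util.FileUtils;

// Hypothetical sketch: delete the locally written sstables only when the bulk
// load succeeded and the user explicitly opted in (default: keep the files).
public class SstableCleanup
{
    public static void maybeCleanup(File outputDir, boolean loadSucceeded, boolean deleteSourceOnSuccess)
    {
        if (loadSucceeded && deleteSourceOnSuccess)
            FileUtils.deleteRecursive(outputDir);
    }
}
{code}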

> Ability to clean up local sstable files after they've been loaded by the 
> CqlBulkRecordWriter
> 
>
> Key: CASSANDRA-
> URL: https://issues.apache.org/jira/browse/CASSANDRA-
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Hadoop
>Reporter: Paul Pak
>Assignee: Paul Pak
>Priority: Minor
>  Labels: cql3, hadoop
> Attachments: trunk--v1.txt, trunk--v2.txt
>
>
> Deleting the source files should most likely be the default behavior with the 
> ability to disable it via config.





[jira] [Commented] (CASSANDRA-7659) cqlsh: DESCRIBE KEYSPACE should order types according to cross-type dependencies

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098726#comment-14098726
 ] 

Tyler Hobbs commented on CASSANDRA-7659:


bq. I would extend the scope of this ticket to "remove 
pylib.cqlshlib.cql3handling.UserTypesMeta in favor of 
cassandra.metadata.UserType", that would cover the original issue as well

+1

> cqlsh: DESCRIBE KEYSPACE should order types according to cross-type 
> dependencies
> 
>
> Key: CASSANDRA-7659
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7659
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
>  Labels: lhf
> Fix For: 2.1.1
>
>
> Since UDTs may use other UDTs for fields, DESCRIBE KEYSPACE should list types 
> in an order that handles the dependencies.  This was recently done in the 
> python driver here: https://github.com/datastax/python-driver/pull/165.  We 
> can either update to the latest python driver, or copy that code for cqlsh.
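
For illustration, ordering types by their dependencies is a small topological sort. The sketch below is generic and assumes a map from each type name to the names of the types its fields reference; it is not the code from the python driver or from any attached change.
{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Generic sketch: emit user-defined types so that every type appears after the
// types its fields reference. Input: type name -> names of referenced types.
public class TypeOrder
{
    public static List<String> dependencyOrder(Map<String, Set<String>> refs)
    {
        List<String> ordered = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String type : refs.keySet())
            visit(type, refs, visited, ordered);
        return ordered;
    }

    private static void visit(String type, Map<String, Set<String>> refs, Set<String> visited, List<String> ordered)
    {
        if (!visited.add(type))
            return;                            // already handled
        Set<String> deps = refs.get(type);
        if (deps != null)
            for (String dep : deps)
                if (refs.containsKey(dep))     // only order types defined in this keyspace
                    visit(dep, refs, visited, ordered);
        ordered.add(type);                     // dependencies were emitted first
    }
}
{code}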





[jira] [Commented] (CASSANDRA-7477) JSON to SSTable import failing

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098761#comment-14098761
 ] 

Tyler Hobbs commented on CASSANDRA-7477:


+1

It may be worth adding a check that we have a compound sparse table (i.e. check 
{{cfm.isCql3Table()}}) so that the error is better if the JSON is malformed.
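
A tiny sketch of the kind of guard meant here; apart from {{cfm.isCql3Table()}}, the identifiers, exception type, and message wording are assumptions for illustration.
{code}
import org.apache.cassandra.config.CFMetaData;

// Hypothetical guard for SSTableImport: fail with a clear message instead of a
// ClassCastException when the target table is not a CQL3 (compound sparse) table.
final class ImportChecks
{
    static void checkCql3Table(CFMetaData cfm)
    {
        if (!cfm.isCql3Table())
            throw new IllegalArgumentException(
                "Table " + cfm.cfName + " is not a CQL3 table; check that the JSON matches the schema");
    }
}
{code}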

> JSON to SSTable import failing
> --
>
> Key: CASSANDRA-7477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Mint 17 64-bit | 16GiB | C* 2.1
>Reporter: Kishan Karunaratne
>Assignee: Mikhail Stepura
> Fix For: 2.1.0, 2.1.1
>
> Attachments: CASSANDRA-2.1.0-7477.patch, log2.log, schema.json
>
>
> Issue affects C* version >= 2.1. Commit found by using git bisect. The 
> previous commit to this one also fails, but due to other reasons (CCM server 
> won't start). This commit is the one that gives the same error as 2.1 HEAD:
> {noformat}
> 02d1e7497a9930120fac367ce82a3b22940acafb is the first bad commit
> commit 02d1e7497a9930120fac367ce82a3b22940acafb
> Author: Brandon Williams 
> Date:   Mon Apr 21 14:42:29 2014 -0500
> Default flush dir to data dir.
> Patch by brandonwilliams, reviewed by yukim for CASSANDRA-7064
> :04 04 c50a123f305b73583ccbfa9c455efc4e4cee228f 
> 507a90290dccb8a929afadf1f833d926049c46ad Mconf
> {noformat}
> {noformat}
> $ PRINT_DEBUG=true nosetests -x -s -v json_tools_test.py 
> json_tools_test (json_tools_test.TestJson) ... cluster ccm directory: 
> /tmp/dtest-8WVBq9
> Starting cluster...
> Version: 2.1.0
> Getting CQLSH...
> Inserting data...
> Flushing and stopping cluster...
> Exporting to JSON file...
> -- test-users-ka-1-Data.db -
> Deleting cluster and creating new...
> Inserting data...
> Importing JSON file...
> Counting keys to import, please wait... (NOTE: to skip this use -n )
> Importing 2 keys...
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:168)
>   at 
> org.apache.cassandra.tools.SSTableImport$JsonColumn.(SSTableImport.java:165)
>   at 
> org.apache.cassandra.tools.SSTableImport.addColumnsToCF(SSTableImport.java:242)
>   at 
> org.apache.cassandra.tools.SSTableImport.addToStandardCF(SSTableImport.java:225)
>   at 
> org.apache.cassandra.tools.SSTableImport.importSorted(SSTableImport.java:464)
>   at 
> org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:351)
>   at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:575)
> ERROR: org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be 
> cast to org.apache.cassandra.db.composites.CellName
> Verifying import...
> data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> FAIL
> removing ccm cluster test at: /tmp/dtest-8WVBq9
> ERROR
> ==
> ERROR: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/dtest.py", line 214, in 
> tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR 
> [SSTableBatchOpen:1] 2014-06-30 13:56:01,032 CassandraDaemon.java:166 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]\n']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-8WVBq9
> dtest: DEBUG: Starting cluster...
> dtest: DEBUG: Version: 2.1.0
> dtest: DEBUG: Getting CQLSH...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Flushing and stopping cluster...
> dtest: DEBUG: Exporting to JSON file...
> dtest: DEBUG: Deleting cluster and creating new...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Importing JSON file...
> dtest: DEBUG: Verifying import...
> dtest: DEBUG: data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-8WVBq9
> - >> end captured logging << -
> ==
> FAIL: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/json_tools_test.py", line 91, 
> in json_tools_test
> [u'gandalf', 1955, u'male', u'p@$$', u'WA'] ] )
> AssertionError: Element counts were not equal:
> First has 0, Second has 

[jira] [Updated] (CASSANDRA-7756) NullPointerException in getTotalBufferSize

2014-08-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-7756:
--

Attachment: 7756-v2.txt

> NullPointerException in getTotalBufferSize
> --
>
> Key: CASSANDRA-7756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7756
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux, OpenJDK 1.7
>Reporter: Leonid Shalupov
>Assignee: T Jake Luciani
> Fix For: 2.0.10
>
> Attachments: 7756-2.0.txt, 7756-2.1.txt, 7756-v2.txt
>
>
> 18:59:50.499 [SharedPool-Worker-1] WARN  o.apache.cassandra.io.util.FileUtils 
> - Failed closing 
> /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db
>  - chunk length 65536, data length 8199819.
> java.lang.NullPointerException: null
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:69)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_65]
>  at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]





[jira] [Updated] (CASSANDRA-7756) NullPointerException in getTotalBufferSize

2014-08-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-7756:
--

Attachment: (was: 7756-v2.txt)

> NullPointerException in getTotalBufferSize
> --
>
> Key: CASSANDRA-7756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7756
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux, OpenJDK 1.7
>Reporter: Leonid Shalupov
>Assignee: T Jake Luciani
> Fix For: 2.0.10
>
> Attachments: 7756-2.0.txt, 7756-2.1.txt, 7756-v2.txt
>
>
> 18:59:50.499 [SharedPool-Worker-1] WARN  o.apache.cassandra.io.util.FileUtils 
> - Failed closing 
> /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db
>  - chunk length 65536, data length 8199819.
> java.lang.NullPointerException: null
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:69)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_65]
>  at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7778) Use PID to automatically scale thread pools and throttles.

2014-08-15 Thread Matt Stump (JIRA)
Matt Stump created CASSANDRA-7778:
-

 Summary: Use PID to automatically scale thread pools and throttles.
 Key: CASSANDRA-7778
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7778
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matt Stump


Most customers deploy with non-optimal configurations. Examples include the 
compaction throttle, the streaming throttle, and the RPC request thread pool size, 
which are often set too aggressively or too conservatively. These problems 
frequently aren't discovered until the cluster is in the field, where they manifest 
as a critical outage. This results in the perception that Cassandra "falls over" 
without warning. Because it's difficult to ship a set of tuning parameters that is 
valid for all or even most scenarios, I propose that we use a PID controller to 
automatically tune several key parameters. The goal of the PID loop would be to 
keep load within a healthy range. If the user chooses, they could always revert to 
explicitly defined configuration.

http://en.wikipedia.org/wiki/PID_controller
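
To make the proposal concrete, a minimal, self-contained sketch of the kind of PID 
loop that could drive, say, the compaction throttle. It is illustrative only: the 
class name, gains and target load are hypothetical and nothing here is taken from 
Cassandra's code.

{code:java}
// Illustrative only -- hypothetical names and gains, not Cassandra code.
public final class PidThrottleTuner
{
    private final double kp, ki, kd;   // proportional / integral / derivative gains
    private final double setPoint;     // desired load, e.g. 0.7 == 70% utilisation
    private double integral, lastError;

    public PidThrottleTuner(double kp, double ki, double kd, double setPoint)
    {
        this.kp = kp; this.ki = ki; this.kd = kd; this.setPoint = setPoint;
    }

    /** @return a signed adjustment to apply to the throttle (e.g. a MB/s delta). */
    public double update(double measuredLoad, double dtSeconds)
    {
        double error = setPoint - measuredLoad;          // negative when overloaded
        integral += error * dtSeconds;
        double derivative = (error - lastError) / dtSeconds;
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    }

    public static void main(String[] args)
    {
        PidThrottleTuner tuner = new PidThrottleTuner(0.5, 0.1, 0.05, 0.7);
        double throttleMbSec = 16;                       // made-up starting throttle
        for (double load : new double[] { 0.95, 0.90, 0.80, 0.72, 0.69 })
        {
            throttleMbSec = Math.max(1, throttleMbSec + tuner.update(load, 1.0));
            System.out.printf("load=%.2f -> throttle=%.1f MB/s%n", load, throttleMbSec);
        }
    }
}
{code}

Reverting to an explicitly defined configuration would then simply mean bypassing 
the loop and keeping the operator-supplied value.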



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7756) NullPointerException in getTotalBufferSize

2014-08-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098772#comment-14098772
 ] 

Jonathan Ellis commented on CASSANDRA-7756:
---

LGTM.  I'd just add a mention of this ticket to both the test and the comment 
in RAR.

> NullPointerException in getTotalBufferSize
> --
>
> Key: CASSANDRA-7756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7756
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux, OpenJDK 1.7
>Reporter: Leonid Shalupov
>Assignee: T Jake Luciani
> Fix For: 2.0.10
>
> Attachments: 7756-2.0.txt, 7756-2.1.txt, 7756-v2.txt
>
>
> 18:59:50.499 [SharedPool-Worker-1] WARN  o.apache.cassandra.io.util.FileUtils 
> - Failed closing 
> /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db
>  - chunk length 65536, data length 8199819.
> java.lang.NullPointerException: null
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:69)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_65]
>  at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7704) FileNotFoundException during STREAM-OUT triggers 100% CPU usage

2014-08-15 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098784#comment-14098784
 ] 

Yuki Morishita commented on CASSANDRA-7704:
---

+1

> FileNotFoundException during STREAM-OUT triggers 100% CPU usage
> ---
>
> Key: CASSANDRA-7704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7704
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rick Branson
>Assignee: Benedict
> Fix For: 2.0.10, 2.1.0
>
> Attachments: 7704-2.1.txt, 7704.txt, backtrace.txt, other-errors.txt
>
>
> See the attached backtrace, which is what triggered this. The stream failed and 
> then ~12 seconds later it emitted that exception. At that point, all CPUs 
> went to 100%. A thread dump shows all the ReadStage threads stuck inside 
> IntervalTree.searchInternal inside of CFS.markReferenced().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix NPE in FileCacheService.sizeInBytes

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e7566609e -> ad6ba3d24


Fix NPE in FileCacheService.sizeInBytes

patch by tjake; reviewed by jbellis for (CASSANDRA-7756)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad6ba3d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad6ba3d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad6ba3d2

Branch: refs/heads/cassandra-2.0
Commit: ad6ba3d243058f060569ad16d6713f46e2ce6160
Parents: e756660
Author: Jake Luciani 
Authored: Fri Aug 15 13:24:15 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:24:15 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--
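
The patch below guards getTotalBufferSize() by copying the possibly-null buffer 
field into a local reference before dereferencing it. As a standalone illustration 
of that pattern (the class and field names here are hypothetical, not the 
project's code):

{code:java}
// Standalone illustration of the defensive-copy pattern used in the patch below.
public class BufferSizeExample
{
    private volatile byte[] buffer = new byte[4096];

    /** May run concurrently with size(), e.g. when the reader is recycled. */
    public void close()
    {
        buffer = null;
    }

    public int size()
    {
        byte[] ref = buffer;                 // read the shared field exactly once
        return ref != null ? ref.length : 0; // null check and dereference see the same value
    }

    public static void main(String[] args)
    {
        BufferSizeExample b = new BufferSizeExample();
        System.out.println(b.size());        // 4096
        b.close();
        System.out.println(b.size());        // 0 -- no NullPointerException
    }
}
{code}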


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04902ad..4306de5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
  * (cqlsh) cqlsh should automatically disable tracing when selecting
from system_traces (CASSANDRA-7641)
  * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--
diff --git a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java 
b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
index 9a03480..09ecac0 100644
--- a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
@@ -154,7 +154,10 @@ public class RandomAccessReader extends RandomAccessFile 
implements FileDataInpu
 
 public int getTotalBufferSize()
 {
-return buffer.length;
+//This may NPE so we make a ref
+//https://issues.apache.org/jira/browse/CASSANDRA-7756
+byte[] ref = buffer;
+return ref != null ? ref.length : 0;
 }
 
 public void reset()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index 90c27e3..a16b291 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -19,6 +19,7 @@
  */
 package org.apache.cassandra.io.util;
 
+import org.apache.cassandra.service.FileCacheService;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 import java.io.File;
@@ -28,6 +29,11 @@ import java.nio.ByteBuffer;
 import java.nio.channels.ClosedChannelException;
 import java.util.Arrays;
 import java.util.concurrent.Callable;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import static org.apache.cassandra.Util.expectEOF;
 import static org.apache.cassandra.Util.expectException;
@@ -508,6 +514,70 @@ public class BufferedRandomAccessFileTest
 }
 
 @Test
+public void testFileCacheService() throws IOException, InterruptedException
+{
+//see https://issues.apache.org/jira/browse/CASSANDRA-7756
+
+final int THREAD_COUNT = 40;
+ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
+
+SequentialWriter w1 = createTempFile("fscache1");
+SequentialWriter w2 = createTempFile("fscache2");
+
+w1.write(new byte[30]);
+w1.close();
+
+w2.write(new byte[30]);
+w2.close();
+
+for (int i = 0; i < 20; i++)
+{
+
+
+RandomAccessReader r1 = RandomAccessReader.open(w1);
+RandomAccessReader r2 = RandomAccessReader.open(w2);
+
+
+FileCacheService.instance.put(r1);
+FileCacheService.instance.put(r2);
+
+final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
+final AtomicBoolean hadError = new AtomicBoolean(false);
+
+for (int k = 0; k < THREAD_COUNT; k++)
+{
+executorService.execute( new Runnable()
+{
+@Override
+public void run()
+ 

[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-15 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9d0214a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9d0214a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9d0214a

Branch: refs/heads/cassandra-2.1.0
Commit: e9d0214a16dbb45b39ddb1c3ff7c44ecc23cb8f3
Parents: d7f7eec ad6ba3d
Author: Jake Luciani 
Authored: Fri Aug 15 13:27:47 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:27:47 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/CHANGES.txt
--
diff --cc CHANGES.txt
index 7c54a9e,4306de5..dfe9c47
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,10 +1,21 @@@
 -2.0.10
 +2.1.0-rc6
 + * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
 + * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
 + * Add tracing entries about concurrent range requests (CASSANDRA-7599)
 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729)
 + * Remove netty buffer ref-counting (CASSANDRA-7735)
 + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742)
 + * Include stress yaml example in release and deb (CASSANDRA-7717)
 + * workaround for netty issue causing corrupted data off the wire 
(CASSANDRA-7695)
 + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687)
 + * Fix binding null values inside UDT (CASSANDRA-7685)
 + * Fix UDT field selection with empty fields (CASSANDRA-7670)
 + * Bogus deserialization of static cells from sstable (CASSANDRA-7684)
 + * Fix NPE on compaction leftover cleanup for dropped table (CASSANDRA-7770)
 +Merged from 2.0:
+  * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
 - * (cqlsh) cqlsh should automatically disable tracing when selecting
 -   from system_traces (CASSANDRA-7641)
 - * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)
 - * Don't depend on cassandra config for nodetool ring (CASSANDRA-7508)
 - * (cqlsh) Fix failing cqlsh formatting tests (CASSANDRA-7703)
 + * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
 + * Clone token map outside of hot gossip loops (CASSANDRA-7758)
   * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752)
   * Do not flush on truncate if durable_writes is false (CASSANDRA-7750)
   * Give CRR a default input_cql Statement (CASSANDRA-7226)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--



[1/2] git commit: Fix NPE in FileCacheService.sizeInBytes

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 d7f7eec27 -> e9d0214a1


Fix NPE in FileCacheService.sizeInBytes

patch by tjake; reviewed by jbellis for (CASSANDRA-7756)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad6ba3d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad6ba3d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad6ba3d2

Branch: refs/heads/cassandra-2.1.0
Commit: ad6ba3d243058f060569ad16d6713f46e2ce6160
Parents: e756660
Author: Jake Luciani 
Authored: Fri Aug 15 13:24:15 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:24:15 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04902ad..4306de5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
  * (cqlsh) cqlsh should automatically disable tracing when selecting
from system_traces (CASSANDRA-7641)
  * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--
diff --git a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java 
b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
index 9a03480..09ecac0 100644
--- a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
@@ -154,7 +154,10 @@ public class RandomAccessReader extends RandomAccessFile 
implements FileDataInpu
 
 public int getTotalBufferSize()
 {
-return buffer.length;
+//This may NPE so we make a ref
+//https://issues.apache.org/jira/browse/CASSANDRA-7756
+byte[] ref = buffer;
+return ref != null ? ref.length : 0;
 }
 
 public void reset()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index 90c27e3..a16b291 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -19,6 +19,7 @@
  */
 package org.apache.cassandra.io.util;
 
+import org.apache.cassandra.service.FileCacheService;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 import java.io.File;
@@ -28,6 +29,11 @@ import java.nio.ByteBuffer;
 import java.nio.channels.ClosedChannelException;
 import java.util.Arrays;
 import java.util.concurrent.Callable;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import static org.apache.cassandra.Util.expectEOF;
 import static org.apache.cassandra.Util.expectException;
@@ -508,6 +514,70 @@ public class BufferedRandomAccessFileTest
 }
 
 @Test
+public void testFileCacheService() throws IOException, InterruptedException
+{
+//see https://issues.apache.org/jira/browse/CASSANDRA-7756
+
+final int THREAD_COUNT = 40;
+ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
+
+SequentialWriter w1 = createTempFile("fscache1");
+SequentialWriter w2 = createTempFile("fscache2");
+
+w1.write(new byte[30]);
+w1.close();
+
+w2.write(new byte[30]);
+w2.close();
+
+for (int i = 0; i < 20; i++)
+{
+
+
+RandomAccessReader r1 = RandomAccessReader.open(w1);
+RandomAccessReader r2 = RandomAccessReader.open(w2);
+
+
+FileCacheService.instance.put(r1);
+FileCacheService.instance.put(r2);
+
+final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
+final AtomicBoolean hadError = new AtomicBoolean(false);
+
+for (int k = 0; k < THREAD_COUNT; k++)
+{
+executorService.execute( new Runnable()
+{
+@Override
+public void run()
+ 

[3/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54fbb0ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54fbb0ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54fbb0ab

Branch: refs/heads/cassandra-2.1
Commit: 54fbb0abb917e5717bfe8332560dd3a9663cf60a
Parents: 04a1fc6 e9d0214
Author: Jake Luciani 
Authored: Fri Aug 15 13:28:35 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:28:35 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/54fbb0ab/CHANGES.txt
--



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-15 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9d0214a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9d0214a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9d0214a

Branch: refs/heads/cassandra-2.1
Commit: e9d0214a16dbb45b39ddb1c3ff7c44ecc23cb8f3
Parents: d7f7eec ad6ba3d
Author: Jake Luciani 
Authored: Fri Aug 15 13:27:47 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:27:47 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/CHANGES.txt
--
diff --cc CHANGES.txt
index 7c54a9e,4306de5..dfe9c47
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,10 +1,21 @@@
 -2.0.10
 +2.1.0-rc6
 + * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
 + * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
 + * Add tracing entries about concurrent range requests (CASSANDRA-7599)
 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729)
 + * Remove netty buffer ref-counting (CASSANDRA-7735)
 + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742)
 + * Include stress yaml example in release and deb (CASSANDRA-7717)
 + * workaround for netty issue causing corrupted data off the wire 
(CASSANDRA-7695)
 + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687)
 + * Fix binding null values inside UDT (CASSANDRA-7685)
 + * Fix UDT field selection with empty fields (CASSANDRA-7670)
 + * Bogus deserialization of static cells from sstable (CASSANDRA-7684)
 + * Fix NPE on compaction leftover cleanup for dropped table (CASSANDRA-7770)
 +Merged from 2.0:
+  * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
 - * (cqlsh) cqlsh should automatically disable tracing when selecting
 -   from system_traces (CASSANDRA-7641)
 - * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)
 - * Don't depend on cassandra config for nodetool ring (CASSANDRA-7508)
 - * (cqlsh) Fix failing cqlsh formatting tests (CASSANDRA-7703)
 + * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
 + * Clone token map outside of hot gossip loops (CASSANDRA-7758)
   * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752)
   * Do not flush on truncate if durable_writes is false (CASSANDRA-7750)
   * Give CRR a default input_cql Statement (CASSANDRA-7226)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--



[1/3] git commit: Fix NPE in FileCacheService.sizeInBytes

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 04a1fc6e1 -> 54fbb0abb


Fix NPE in FileCacheService.sizeInBytes

patch by tjake; reviewed by jbellis for (CASSANDRA-7756)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad6ba3d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad6ba3d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad6ba3d2

Branch: refs/heads/cassandra-2.1
Commit: ad6ba3d243058f060569ad16d6713f46e2ce6160
Parents: e756660
Author: Jake Luciani 
Authored: Fri Aug 15 13:24:15 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:24:15 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04902ad..4306de5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
  * (cqlsh) cqlsh should automatically disable tracing when selecting
from system_traces (CASSANDRA-7641)
  * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--
diff --git a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java 
b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
index 9a03480..09ecac0 100644
--- a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
@@ -154,7 +154,10 @@ public class RandomAccessReader extends RandomAccessFile 
implements FileDataInpu
 
 public int getTotalBufferSize()
 {
-return buffer.length;
+//This may NPE so we make a ref
+//https://issues.apache.org/jira/browse/CASSANDRA-7756
+byte[] ref = buffer;
+return ref != null ? ref.length : 0;
 }
 
 public void reset()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index 90c27e3..a16b291 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -19,6 +19,7 @@
  */
 package org.apache.cassandra.io.util;
 
+import org.apache.cassandra.service.FileCacheService;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 import java.io.File;
@@ -28,6 +29,11 @@ import java.nio.ByteBuffer;
 import java.nio.channels.ClosedChannelException;
 import java.util.Arrays;
 import java.util.concurrent.Callable;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import static org.apache.cassandra.Util.expectEOF;
 import static org.apache.cassandra.Util.expectException;
@@ -508,6 +514,70 @@ public class BufferedRandomAccessFileTest
 }
 
 @Test
+public void testFileCacheService() throws IOException, InterruptedException
+{
+//see https://issues.apache.org/jira/browse/CASSANDRA-7756
+
+final int THREAD_COUNT = 40;
+ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
+
+SequentialWriter w1 = createTempFile("fscache1");
+SequentialWriter w2 = createTempFile("fscache2");
+
+w1.write(new byte[30]);
+w1.close();
+
+w2.write(new byte[30]);
+w2.close();
+
+for (int i = 0; i < 20; i++)
+{
+
+
+RandomAccessReader r1 = RandomAccessReader.open(w1);
+RandomAccessReader r2 = RandomAccessReader.open(w2);
+
+
+FileCacheService.instance.put(r1);
+FileCacheService.instance.put(r2);
+
+final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
+final AtomicBoolean hadError = new AtomicBoolean(false);
+
+for (int k = 0; k < THREAD_COUNT; k++)
+{
+executorService.execute( new Runnable()
+{
+@Override
+public void run()
+ 

[jira] [Commented] (CASSANDRA-7499) Unable to update list element by index using CAS condition

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098795#comment-14098795
 ] 

Tyler Hobbs commented on CASSANDRA-7499:


+1

I also ran this through some fairly thorough dtests I'm working on for 
CASSANDRA-6389, and it looks good.



> Unable to update list element by index using CAS condition
> --
>
> Key: CASSANDRA-7499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7499
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra *2.0.9*, Java Driver *2.0.2* & *2.0.3*
> Client: cqlsh *3.1.8*, CQL spec *3.1.0*, Thrift protocol *19.39.0*
>Reporter: DOAN DuyHai
>Assignee: Sylvain Lebresne
> Fix For: 2.0.10
>
> Attachments: 7499-2.0.txt, 7499-2.1.txt, 7499.txt
>
>
> While running IT tests for *Achilles*, I ran into a strange bug:
> *With CQLSH*
> {code:sql}
> cqlsh:test> CREATE TABLE cas_update(id int PRIMARY KEY,name text,friends 
> list);
> cqlsh:test> INSERT INTO cas_update (id, name , friends ) VALUES ( 
> 10,'John',['Paul','George']);
> cqlsh:test> SELECT * FROM cas_update WHERE id=10;
>  id | friends| name
> ++--
>  10 | ['Paul', 'George'] | John
> cqlsh:test> UPDATE cas_update SET friends[0]='Helen' WHERE id=10 IF 
> name='John';
> Bad Request: List index 0 out of bound, list has size 0
> cqlsh:test> UPDATE cas_update SET friends[0]='Helen' WHERE id=10;
> cqlsh:test> SELECT * FROM cas_update WHERE id=10;
>  id | friends | name
> +-+--
>  10 | ['Helen', 'George'] | John
> {code}
> It seems that we cannot update list element by index with a CAS condition.
> *With Java driver 2.0.2 or 2.0.3*
> {code:java}
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLog Prepared statement : [INSERT 
> INTO CompleteBean(id,followers,friends,name,preferences) VALUES 
> (:id,:followers,:friends,:name,:preferences) USING TTL :ttl;] with 
> CONSISTENCY LEVEL [ONE] 
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLogbound values : 
> [621309709026375591, [], [Paul, Andrew], John, {}, 0] 
>  ACHILLES_DML_STATEMENT@:writeDMLStartBatch  
>  ACHILLES_DML_STATEMENT@:writeDMLStartBatch  
>  ACHILLES_DML_STATEMENT@:writeDMLStartBatch ** BATCH UNLOGGED START 
> ** 
>  ACHILLES_DML_STATEMENT@:writeDMLStartBatch  
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLog Parameterized statement : 
> [UPDATE CompleteBean USING TTL 100 SET friends[0]=? WHERE 
> id=621309709026375591 IF name=?;] with CONSISTENCY LEVEL [ONE] 
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLogbound values : [100, 0, 
> Helen, 621309709026375591, John] 
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLog Parameterized statement : 
> [UPDATE CompleteBean USING TTL 100 SET friends[1]=null WHERE 
> id=621309709026375591 IF name=?;] with CONSISTENCY LEVEL [ONE] 
>  ACHILLES_DML_STATEMENT@:writeDMLStatementLogbound values : [100, 1, 
> null, 621309709026375591, John] 
>  ACHILLES_DML_STATEMENT@:writeDMLEndBatch  
>  ACHILLES_DML_STATEMENT@:writeDMLEndBatch   ** BATCH UNLOGGED END with 
> CONSISTENCY LEVEL [DEFAULT] ** 
>  ACHILLES_DML_STATEMENT@:writeDMLEndBatch  
>  ACHILLES_DML_STATEMENT@:writeDMLEndBatch  
>  ACHILLES_DML_STATEMENT@:truncateTable   Simple query : [TRUNCATE 
> entity_with_enum] with CONSISTENCY LEVEL [ALL] 
>  ACHILLES_DML_STATEMENT@:truncateTable   Simple query : [TRUNCATE 
> CompleteBean] with CONSISTENCY LEVEL [ALL] 
> com.datastax.driver.core.exceptions.InvalidQueryException: List index 0 out 
> of bound, list has size 0
> at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
> at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
> at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
> at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
> {code}
> With Cassandra *2.0.8* and Java Driver 2.0.2 or 2.0.3, *the test passed* so 
> it seems that there is a regression somewhere in the CAS update code



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7779) Add option to sstableloader to only stream to the local dc

2014-08-15 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-7779:
--

 Summary: Add option to sstableloader to only stream to the local dc
 Key: CASSANDRA-7779
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7779
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Nick Bailey
 Fix For: 1.2.19, 2.0.10, 2.1.1


This is meant to be a potential workaround for CASSANDRA-4756. Because of that 
issue, trying to load a cluster-wide snapshot via sstableloader will potentially 
stream an enormous amount of data: in a 3-datacenter cluster with rf=3 in each 
datacenter, 81 copies of the data would be streamed. Once we have per-range 
sstables we can optimize sstableloader to merge data and only stream one copy, but 
until then we need a workaround. By streaming only to the local datacenter we can 
load the data locally in each datacenter and end up with 9 copies of the data 
rather than 81.
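
For concreteness, one way to read those numbers (a sketch only; the datacenter and 
replication counts are the illustrative values above, not something prescribed by 
the ticket): with 3 datacenters at rf=3 there are 9 replicas of each row, so a 
cluster-wide snapshot holds 9 copies of it, and loading that snapshot streams each 
copy to all 9 replicas again.

{code:java}
// Back-of-the-envelope arithmetic for the example above (illustrative only).
public class SnapshotCopies
{
    public static void main(String[] args)
    {
        int dcs = 3, rfPerDc = 3;
        int replicas = dcs * rfPerDc;                 // 9 replicas of each row
        int clusterWideLoad = replicas * replicas;    // 9 snapshot copies x 9 targets = 81
        int localDcLoad = rfPerDc * rfPerDc;          // 3 local copies x 3 local targets = 9
        System.out.println(clusterWideLoad + " copies vs " + localDcLoad);
    }
}
{code}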

This could potentially be achieved with sstableloader's existing option to ignore 
certain nodes, but with vnodes and topology changes in the cluster that could mean 
specifying every node in the cluster as 'ignored' on the command line, which could 
be problematic. This is just a shortcut to avoid that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-15 Thread jake
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/io/util/RandomAccessReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe8829fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe8829fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe8829fa

Branch: refs/heads/trunk
Commit: fe8829fa6ac5a0775ac041fe5a64d6c47c34961f
Parents: 4dd1a15 54fbb0a
Author: Jake Luciani 
Authored: Fri Aug 15 13:38:56 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:38:56 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 71 
 3 files changed, 76 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe8829fa/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe8829fa/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/RandomAccessReader.java
index e395510,81e45b5..58205d8
--- a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
@@@ -167,7 -154,10 +167,10 @@@ public class RandomAccessReader extend
  
  public int getTotalBufferSize()
  {
- return buffer.capacity();
+ //This may NPE so we make a ref
+ //https://issues.apache.org/jira/browse/CASSANDRA-7756
 -byte[] ref = buffer;
 -return ref != null ? ref.length : 0;
++ByteBuffer ref = buffer;
++return ref != null ? ref.capacity() : 0;
  }
  
  public void reset()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe8829fa/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --cc 
test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index 8053553,cfabf62..7993160
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@@ -520,6 -526,70 +526,71 @@@ public class BufferedRandomAccessFileTe
  }
  
  @Test
+ public void testFileCacheService() throws IOException, 
InterruptedException
+ {
+ //see https://issues.apache.org/jira/browse/CASSANDRA-7756
+ 
++final FileCacheService.CacheKey cacheKey = new 
FileCacheService.CacheKey();
+ final int THREAD_COUNT = 40;
+ ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
+ 
+ SequentialWriter w1 = createTempFile("fscache1");
+ SequentialWriter w2 = createTempFile("fscache2");
+ 
+ w1.write(new byte[30]);
+ w1.close();
+ 
+ w2.write(new byte[30]);
+ w2.close();
+ 
+ for (int i = 0; i < 20; i++)
+ {
+ 
+ 
+ RandomAccessReader r1 = RandomAccessReader.open(w1);
+ RandomAccessReader r2 = RandomAccessReader.open(w2);
+ 
+ 
 -FileCacheService.instance.put(r1);
 -FileCacheService.instance.put(r2);
++FileCacheService.instance.put(cacheKey, r1);
++FileCacheService.instance.put(cacheKey, r2);
+ 
+ final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
+ final AtomicBoolean hadError = new AtomicBoolean(false);
+ 
+ for (int k = 0; k < THREAD_COUNT; k++)
+ {
+ executorService.execute( new Runnable()
+ {
+ @Override
+ public void run()
+ {
+ try
+ {
+ long size = 
FileCacheService.instance.sizeInBytes();
+ 
+ while (size > 0)
+ size = 
FileCacheService.instance.sizeInBytes();
+ }
+ catch (Throwable t)
+ {
+ t.printStackTrace();
+ hadError.set(true);
+ }
+ finally
+ {
+ finished.countDown();
+ }
+ }
+ });
+ 
+ }
+ 
+ finished.await();
+ assert !hadError.get();
+ }
+ }
+ 
+ @Test
  public void testReadOnly() throws IOException
  {
  SequentialWriter file = createTempFile("br

[jira] [Updated] (CASSANDRA-7756) NullPointerException in getTotalBufferSize

2014-08-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-7756:
--

Fix Version/s: 2.1 rc6

> NullPointerException in getTotalBufferSize
> --
>
> Key: CASSANDRA-7756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7756
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux, OpenJDK 1.7
>Reporter: Leonid Shalupov
>Assignee: T Jake Luciani
> Fix For: 2.0.10, 2.1 rc6
>
> Attachments: 7756-2.0.txt, 7756-2.1.txt, 7756-v2.txt
>
>
> 18:59:50.499 [SharedPool-Worker-1] WARN  o.apache.cassandra.io.util.FileUtils 
> - Failed closing 
> /xxx/cassandra/data/pr1407782307/trigramindexcounter-d2817030218611e4b65c619763d48c52/pr1407782307-trigramindexcounter-ka-1-Data.db
>  - chunk length 65536, data length 8199819.
> java.lang.NullPointerException: null
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.sizeInBytes(FileCacheService.java:186)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.FileCacheService.put(FileCacheService.java:150) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.recycle(PoolingSegmentedFile.java:50)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:230)
>  ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:222) 
> ~[cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:69)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:89)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:261)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:59)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:55)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValuesFromCFS(CounterMutation.java:274)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.getCurrentValues(CounterMutation.java:241)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.db.CounterMutation.processModifications(CounterMutation.java:209)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.db.CounterMutation.apply(CounterMutation.java:136) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1116)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2065)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_65]
>  at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [cassandra-all-2.1.0-rc5.jar:2.1.0-rc5]
>  at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54fbb0ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54fbb0ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54fbb0ab

Branch: refs/heads/trunk
Commit: 54fbb0abb917e5717bfe8332560dd3a9663cf60a
Parents: 04a1fc6 e9d0214
Author: Jake Luciani 
Authored: Fri Aug 15 13:28:35 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:28:35 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/54fbb0ab/CHANGES.txt
--



[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-15 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9d0214a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9d0214a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9d0214a

Branch: refs/heads/trunk
Commit: e9d0214a16dbb45b39ddb1c3ff7c44ecc23cb8f3
Parents: d7f7eec ad6ba3d
Author: Jake Luciani 
Authored: Fri Aug 15 13:27:47 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:27:47 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/CHANGES.txt
--
diff --cc CHANGES.txt
index 7c54a9e,4306de5..dfe9c47
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,10 +1,21 @@@
 -2.0.10
 +2.1.0-rc6
 + * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
 + * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
 + * Add tracing entries about concurrent range requests (CASSANDRA-7599)
 + * (cqlsh) Fix DESCRIBE for NTS keyspaces (CASSANDRA-7729)
 + * Remove netty buffer ref-counting (CASSANDRA-7735)
 + * Pass mutated cf to index updater for use by PRSI (CASSANDRA-7742)
 + * Include stress yaml example in release and deb (CASSANDRA-7717)
 + * workaround for netty issue causing corrupted data off the wire 
(CASSANDRA-7695)
 + * cqlsh DESC CLUSTER fails retrieving ring information (CASSANDRA-7687)
 + * Fix binding null values inside UDT (CASSANDRA-7685)
 + * Fix UDT field selection with empty fields (CASSANDRA-7670)
 + * Bogus deserialization of static cells from sstable (CASSANDRA-7684)
 + * Fix NPE on compaction leftover cleanup for dropped table (CASSANDRA-7770)
 +Merged from 2.0:
+  * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
 - * (cqlsh) cqlsh should automatically disable tracing when selecting
 -   from system_traces (CASSANDRA-7641)
 - * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)
 - * Don't depend on cassandra config for nodetool ring (CASSANDRA-7508)
 - * (cqlsh) Fix failing cqlsh formatting tests (CASSANDRA-7703)
 + * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
 + * Clone token map outside of hot gossip loops (CASSANDRA-7758)
   * Fix MS expiring map timeout for Paxos messages (CASSANDRA-7752)
   * Do not flush on truncate if durable_writes is false (CASSANDRA-7750)
   * Give CRR a default input_cql Statement (CASSANDRA-7226)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9d0214a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--



[1/4] git commit: Fix NPE in FileCacheService.sizeInBytes

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4dd1a15cc -> fe8829fa6


Fix NPE in FileCacheService.sizeInBytes

patch by tjake; reviewed by jbellis for (CASSANDRA-7756)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad6ba3d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad6ba3d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad6ba3d2

Branch: refs/heads/trunk
Commit: ad6ba3d243058f060569ad16d6713f46e2ce6160
Parents: e756660
Author: Jake Luciani 
Authored: Fri Aug 15 13:24:15 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 13:24:15 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/io/util/RandomAccessReader.java   |  5 +-
 .../io/util/BufferedRandomAccessFileTest.java   | 70 
 3 files changed, 75 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04902ad..4306de5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix NPE in FileCacheService.sizeInBytes (CASSANDRA-7756)
  * (cqlsh) cqlsh should automatically disable tracing when selecting
from system_traces (CASSANDRA-7641)
  * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
--
diff --git a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java 
b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
index 9a03480..09ecac0 100644
--- a/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
+++ b/src/java/org/apache/cassandra/io/util/RandomAccessReader.java
@@ -154,7 +154,10 @@ public class RandomAccessReader extends RandomAccessFile 
implements FileDataInpu
 
 public int getTotalBufferSize()
 {
-return buffer.length;
+//This may NPE so we make a ref
+//https://issues.apache.org/jira/browse/CASSANDRA-7756
+byte[] ref = buffer;
+return ref != null ? ref.length : 0;
 }
 
 public void reset()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad6ba3d2/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index 90c27e3..a16b291 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -19,6 +19,7 @@
  */
 package org.apache.cassandra.io.util;
 
+import org.apache.cassandra.service.FileCacheService;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 import java.io.File;
@@ -28,6 +29,11 @@ import java.nio.ByteBuffer;
 import java.nio.channels.ClosedChannelException;
 import java.util.Arrays;
 import java.util.concurrent.Callable;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import static org.apache.cassandra.Util.expectEOF;
 import static org.apache.cassandra.Util.expectException;
@@ -508,6 +514,70 @@ public class BufferedRandomAccessFileTest
 }
 
 @Test
+public void testFileCacheService() throws IOException, InterruptedException
+{
+//see https://issues.apache.org/jira/browse/CASSANDRA-7756
+
+final int THREAD_COUNT = 40;
+ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
+
+SequentialWriter w1 = createTempFile("fscache1");
+SequentialWriter w2 = createTempFile("fscache2");
+
+w1.write(new byte[30]);
+w1.close();
+
+w2.write(new byte[30]);
+w2.close();
+
+for (int i = 0; i < 20; i++)
+{
+
+
+RandomAccessReader r1 = RandomAccessReader.open(w1);
+RandomAccessReader r2 = RandomAccessReader.open(w2);
+
+
+FileCacheService.instance.put(r1);
+FileCacheService.instance.put(r2);
+
+final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
+final AtomicBoolean hadError = new AtomicBoolean(false);
+
+for (int k = 0; k < THREAD_COUNT; k++)
+{
+executorService.execute( new Runnable()
+{
+@Override
+public void run()
+{
+  

[jira] [Commented] (CASSANDRA-7561) On DROP we should invalidate CounterKeyCache as well as Key/Row cache

2014-08-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098811#comment-14098811
 ] 

Benedict commented on CASSANDRA-7561:
-

bq. Well. It shouldn't be throwing any exceptions, AFAIK

CounterCacheKey.getPathInfo() is called during serialization, which is not safe 
if the CF has been dropped (since it will get a null cf back). So we still need 
to address preventing an autosave happening whilst the map contains keys that 
are in a dropped CF, or we need getPathInfo() at least to be safe during this 
(and return a result that is valid for all use cases), whichever is easiest.
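
One purely illustrative way to avoid serializing stale keys (all types and names 
here are made up, not Cassandra's API) is to have the autosave pass simply skip any 
key whose column family no longer exists, so serialization never sees a null CF:

{code:java}
import java.util.*;

// Purely illustrative sketch: skip cache keys whose table has been dropped when
// writing the saved cache. Hypothetical names, not Cassandra's API.
public class CacheAutosaveSketch
{
    public static void main(String[] args)
    {
        Set<String> liveTables = new HashSet<>(Collections.singleton("counters_kept"));
        Map<String, String> cachedKeys = new LinkedHashMap<>();
        cachedKeys.put("counters_kept", "key1/cell1");
        cachedKeys.put("counters_dropped", "key2/cell2");    // its CF was dropped

        for (Map.Entry<String, String> e : cachedKeys.entrySet())
        {
            if (!liveTables.contains(e.getKey()))
                continue;                                    // don't autosave stale keys
            System.out.println("saving " + e.getValue());    // stands in for serialization
        }
    }
}
{code}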

> On DROP we should invalidate CounterKeyCache as well as Key/Row cache
> -
>
> Key: CASSANDRA-7561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7561
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: 7561.txt
>
>
> We should also probably ensure we don't attempt to auto save _any_ of the 
> caches while they are in an inconsistent state (i.e. there are keys present 
> to be saved that should not be restored, or that would throw exceptions when 
> we save (e.g. CounterCacheKey))



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7561) On DROP we should invalidate CounterKeyCache as well as Key/Row cache

2014-08-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098811#comment-14098811
 ] 

Benedict edited comment on CASSANDRA-7561 at 8/15/14 5:46 PM:
--

bq. Well. It shouldn't be throwing any exceptions, AFAIK

CounterCacheKey.getPathInfo() is called during serialization, which is not safe 
if the CF has been dropped (since it will get a null cf back). So we still need 
to address preventing an autosave happening whilst the map contains keys that 
are in a dropped CF, or we need getPathInfo() at least to be safe during this 
(and return a result that is valid for all use cases), whichever is easiest.

It looks like this bug may affect the row cache as well, except that we've 
simply never noticed it because the window is so small. I filed this ticket a 
long time ago, so I cannot remember where/why I saw this happen. Mea culpa for 
not putting that detail into the ticket in the first place.


was (Author: benedict):
bq. Well. It shouldn't be throwing any exceptions, AFAIK

CounterCacheKey.getPathInfo() is called during serialization, which is not safe 
if the CF has been dropped (since it will get a null cf back). So we still need 
to address preventing an autosave happening whilst the map contains keys that 
are in a dropped CF, or we need getPathInfo() at least to be safe during this 
(and return a result that is valid for all use cases), whichever is easiest.

> On DROP we should invalidate CounterKeyCache as well as Key/Row cache
> -
>
> Key: CASSANDRA-7561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7561
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: 7561.txt
>
>
> We should also probably ensure we don't attempt to auto save _any_ of the 
> caches while they are in an inconsistent state (i.e. there are keys present 
> to be saved that should not be restored, or that would throw exceptions when 
> we save (e.g. CounterCacheKey))



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7561) On DROP we should invalidate CounterKeyCache as well as Key/Row cache

2014-08-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098858#comment-14098858
 ] 

Benedict commented on CASSANDRA-7561:
-

Since this is holding up 2.1-rc6, I'm comfortable splitting the remainder of 
the fix out into a separate ticket. The code as it stands at least reduces the 
bug to a window of risk after DROP rather than a guaranteed failure.

> On DROP we should invalidate CounterKeyCache as well as Key/Row cache
> -
>
> Key: CASSANDRA-7561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7561
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: 7561.txt
>
>
> We should also probably ensure we don't attempt to auto save _any_ of the 
> caches while they are in an inconsistent state (i.e. there are keys present 
> to be saved that should not be restored, or that would throw exceptions when 
> we save (e.g. CounterCacheKey))



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7780) Cassandra Daemon throws file not found exception.

2014-08-15 Thread Dharsan Logendran (JIRA)
Dharsan Logendran created CASSANDRA-7780:


 Summary: Cassandra Daemon throws file not found exception.
 Key: CASSANDRA-7780
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7780
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Redhat 6 
Cassandra 2.0.9
3 nodes cluster 
Reporter: Dharsan Logendran


ERROR [CompactionExecutor:450] 2014-08-11 16:24:26,778 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:450,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:294)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:226)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:123)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:154)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at 
org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.<init>(CompressedThrottledReader.java:34)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48)
... 24 more
ERROR [CompactionExecutor:450] 2014-08-11 16:24:26,782 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:450,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6602) Compaction improvements to optimize time series data

2014-08-15 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098880#comment-14098880
 ] 

Robert Coli commented on CASSANDRA-6602:


For historical background, Kelvin Kakugawa did a basic version of the 
DateTieredCompactionStrategy at Digg, for a timeline product, in 2012 or so. It 
was successful in the limited use it saw there, especially when it dropped 
entire SSTables on the floor once they no longer contained data we cared about.

> Compaction improvements to optimize time series data
> 
>
> Key: CASSANDRA-6602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6602
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
>Assignee: Björn Hegerfors
>  Labels: compaction, performance
> Fix For: 3.0
>
> Attachments: 
> cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy.txt, 
> cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v2.txt, 
> cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v3.txt
>
>
> There are some unique characteristics of many/most time series use cases that 
> both pose challenges and provide unique opportunities for 
> optimizations.
> One of the major challenges is in compaction. The existing compaction 
> strategies will tend to re-compact data on disk at least a few times over the 
> lifespan of each data point, greatly increasing the cpu and IO costs of that 
> write.
> Compaction exists to
> 1) ensure that there aren't too many files on disk
> 2) ensure that data that should be contiguous (part of the same partition) is 
> laid out contiguously
> 3) delete data due to TTLs or tombstones
> The special characteristics of time series data allow us to optimize away all 
> three.
> Time series data
> 1) tends to be delivered in time order, with relatively constrained exceptions
> 2) often has a pre-determined and fixed expiration date
> 3) never gets deleted prior to TTL
> 4) has relatively predictable ingestion rates
> Note that I filed CASSANDRA-5561, and this ticket potentially replaces or 
> lowers the need for it. In that ticket, jbellis reasonably asks how that 
> compaction strategy is better than disabling compaction.
> Taking that to heart, here is a compaction-strategy-less approach that could 
> be extremely efficient for time-series use cases that follow the above 
> pattern.
> (For context, I'm thinking of an example use case involving lots of streams 
> of time-series data with a 5GB per day ingestion rate, and a 1000 day 
> retention with TTL, resulting in an eventual steady state of 5TB per node)
> 1) You have an extremely large memtable (preferably off heap, if/when doable) 
> for the table, and that memtable is sized to be able to hold a lengthy window 
> of time. A typical period might be one day. At the end of that period, you 
> flush the contents of the memtable to an sstable and move to the next one. 
> This is basically identical to current behaviour, but with thresholds 
> adjusted so that you can ensure flushing at predictable intervals. (Open 
> question is whether predictable intervals is actually necessary, or whether 
> just waiting until the huge memtable is nearly full is sufficient)
> 2) Combine the behaviour with CASSANDRA-5228 so that sstables will be 
> efficiently dropped once all of the columns have expired. (Another side note, it 
> might be valuable to have a modified version of CASSANDRA-3974 that doesn't 
> bother storing per-column TTL since it is required that all columns have the 
> same TTL)
> 3) Be able to mark column families as read/write only (no explicit deletes), 
> so no tombstones.
> 4) Optionally add back an additional type of delete that would delete all 
> data earlier than a particular timestamp, resulting in immediate dropping of 
> obsoleted sstables.
> The result is that for in-order delivered data, every cell will be laid out 
> optimally on disk on the first pass, and over the course of 1000 days and 5TB 
> of data, there will "only" be 1000 5GB sstables, so the number of filehandles 
> will be reasonable.
> For exceptions (out-of-order delivery), most cases will be caught by the 
> extended (24 hour+) memtable flush times and merged correctly automatically. 
> For those that were slightly askew at flush time, or were delivered so far 
> out of order that they go in the wrong sstable, there is relatively low 
> overhead to reading from two sstables for a time slice, instead of one, and 
> that overhead would be incurred relatively rarely unless out-of-order 
> delivery was the common case, in which case, this strategy should not be used.
> Another possible optimization to address out-of-order would be to maintain 
> more than one time-centric memtables in memory
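
To make the "drop whole sstables once everything in them has expired" idea above 
concrete, here is a minimal sketch; SSTableInfo and maxLocalDeletionTime are 
hypothetical stand-ins for per-sstable metadata, not the actual SSTableReader API:

{noformat}
import java.util.ArrayList;
import java.util.List;

final class ExpiredSSTableSelector
{
    static final class SSTableInfo
    {
        final String name;
        final long maxLocalDeletionTime;   // latest point (in seconds) at which any cell in the file expires

        SSTableInfo(String name, long maxLocalDeletionTime)
        {
            this.name = name;
            this.maxLocalDeletionTime = maxLocalDeletionTime;
        }
    }

    /** With a fixed TTL and in-order delivery, a whole file is droppable once its newest cell has expired. */
    static List<SSTableInfo> fullyExpired(List<SSTableInfo> sstables, long nowInSeconds)
    {
        List<SSTableInfo> droppable = new ArrayList<>();
        for (SSTableInfo sstable : sstables)
            if (sstable.maxLocalDeletionTime <= nowInSeconds)
                droppable.add(sstable);
        return droppable;
    }
}
{noformat}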

[jira] [Updated] (CASSANDRA-7780) Cassandra Daemon throws file not found exception.

2014-08-15 Thread Dharsan Logendran (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharsan Logendran updated CASSANDRA-7780:
-

Description: 
Under heavy load, the Cassandra nodes throw file not found exceptions:


ERROR [CompactionExecutor:450] 2014-08-11 16:24:26,778 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:450,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:294)
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy$LeveledScanner.computeNext(LeveledCompactionStrategy.java:226)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:123)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:154)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at 
org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.<init>(CompressedThrottledReader.java:34)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48)
... 24 more
ERROR [CompactionExecutor:450] 2014-08-11 16:24:26,782 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:450,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)

  was:
ERROR [CompactionExecutor:450] 2014-08-11 16:24:26,778 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:450,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/opt/5620sam/samauxdb/data/samdb/acc_stats_log_records/samdb-acc_stats_log_records-jb-1501-Data.db
 (No such file or directory)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
at 
org.apache.cassandra.db.compaction.Le

[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-08-15 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
src/java/org/apache/cassandra/streaming/ConnectionHandler.java
src/java/org/apache/cassandra/streaming/StreamPlan.java
src/java/org/apache/cassandra/streaming/StreamResultFuture.java
src/java/org/apache/cassandra/streaming/StreamSession.java
src/java/org/apache/cassandra/tools/BulkLoader.java
test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de5bb585
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de5bb585
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de5bb585

Branch: refs/heads/trunk
Commit: de5bb5854de5c02e13c3311e0e4f70dc44f5bb83
Parents: 54fbb0a 563cea1
Author: Yuki Morishita 
Authored: Fri Aug 15 13:24:31 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 13:24:31 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   7 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  49 +--
 .../streaming/DefaultConnectionFactory.java |  75 ++
 .../streaming/StreamConnectionFactory.java  |  30 
 .../cassandra/streaming/StreamCoordinator.java  |  13 +-
 .../apache/cassandra/streaming/StreamPlan.java  |  17 ++-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 139 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 13 files changed, 337 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/CHANGES.txt
--
diff --cc CHANGES.txt
index 191b187,e335484..ced91fd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -171,7 -47,9 +171,8 @@@ Merged from 2.0
   * Backport CASSANDRA-6747 (CASSANDRA-7560)
   * Track max/min timestamps for range tombstones (CASSANDRA-7647)
   * Fix NPE when listing saved caches dir (CASSANDRA-7632)
+  * Fix sstableloader unable to connect encrypted node (CASSANDRA-7585)
  Merged from 1.2:
 - * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
   * Clone token map outside of hot gossip loops (CASSANDRA-7758)
   * Add stop method to EmbeddedCassandraService (CASSANDRA-7595)
   * Support connecting to ipv6 jmx with nodetool (CASSANDRA-7669)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --cc src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 78621f2,b520d07..0b62ff4
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@@ -81,14 -69,18 +81,19 @@@ public class YamlConfigurationLoader im
  
  public Config loadConfig() throws ConfigurationException
  {
+ return loadConfig(getStorageConfigURL());
+ }
+ 
+ public Config loadConfig(URL url) throws ConfigurationException
+ {
+ InputStream input = null;
  try
  {
- URL url = getStorageConfigURL();
  logger.info("Loading settings from {}", url);
 -try
 +byte[] configBytes;
 +try (InputStream is = url.openStream())
  {
 -input = url.openStream();
 +configBytes = ByteStreams.toByteArray(is);
  }
  catch (IOException e)
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index bbb1277,85dc0e4..3d7eea7
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@@ -157,7 -149,7 +157,7 @@@ public class SSTableLoader implements S
  client.init(keyspace);
  outputHandler.output("Established connection to initial hosts");
  
- StreamPlan plan = new StreamPlan("Bulk Load", 0, connectionsPerHost);
 -StreamPlan plan = new StreamPlan("Bulk 
Load").connectionFactory(client.getConnectionFactory());
++StreamPlan plan = new StreamPlan("Bulk Load", 0, 
connect

[2/6] git commit: Fix sstableloader unable to connect encrypted node

2014-08-15 Thread yukim
Fix sstableloader unable to connect encrypted node

patch by yukim; reviewed by krummas for CASSANDRA-7585


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/563cea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/563cea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/563cea14

Branch: refs/heads/cassandra-2.1
Commit: 563cea14b4bb87cd37ab10399904f08757c34d27
Parents: ad6ba3d
Author: Yuki Morishita 
Authored: Fri Aug 15 12:31:59 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 12:31:59 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   6 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  48 +-
 .../streaming/DefaultConnectionFactory.java |  74 +
 .../streaming/StreamConnectionFactory.java  |  30 
 .../apache/cassandra/streaming/StreamPlan.java  |  16 +-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 149 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 12 files changed, 330 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4306de5..e335484 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -47,6 +47,7 @@
  * Backport CASSANDRA-6747 (CASSANDRA-7560)
  * Track max/min timestamps for range tombstones (CASSANDRA-7647)
  * Fix NPE when listing saved caches dir (CASSANDRA-7632)
+ * Fix sstableloader unable to connect encrypted node (CASSANDRA-7585)
 Merged from 1.2:
  * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
  * Clone token map outside of hot gossip loops (CASSANDRA-7758)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --git a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java 
b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 6b5a152..b520d07 100644
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@ -69,10 +69,14 @@ public class YamlConfigurationLoader implements 
ConfigurationLoader
 
 public Config loadConfig() throws ConfigurationException
 {
+return loadConfig(getStorageConfigURL());
+}
+
+public Config loadConfig(URL url) throws ConfigurationException
+{
 InputStream input = null;
 try
 {
-URL url = getStorageConfigURL();
 logger.info("Loading settings from {}", url);
 try
 {
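
(As a hedged usage sketch of the new loadConfig(URL) overload added in the hunk 
above: the yaml path below is a placeholder, and cluster_name is just an example 
field of Config.)

{noformat}
import java.net.URL;

import org.apache.cassandra.config.Config;
import org.apache.cassandra.config.YamlConfigurationLoader;

public class LoadConfigFromUrl
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder path; callers such as the bulk loader can now point at an explicit yaml.
        URL url = new URL("file:///etc/cassandra/conf/cassandra.yaml");
        Config config = new YamlConfigurationLoader().loadConfig(url);
        System.out.println("cluster_name = " + config.cluster_name);
    }
}
{noformat}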

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 4a1604d..85dc0e4 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -50,7 +50,7 @@ public class SSTableLoader implements StreamEventHandler
 private final OutputHandler outputHandler;
 private final Set failedHosts = new HashSet<>();
 
-private final List sstables = new 
ArrayList();
+private final List sstables = new ArrayList<>();
 private final Multimap streamingDetails = 
HashMultimap.create();
 
 static
@@ -94,7 +94,7 @@ public class SSTableLoader implements StreamEventHandler
 return false;
 }
 
-Set components = new HashSet();
+Set components = new HashSet<>();
 components.add(Component.DATA);
 components.add(Component.PRIMARY_INDEX);
 if (new File(desc.filenameFor(Component.SUMMARY)).exists())
@@ -149,7 +149,7 @@ public class SSTableLoader implements StreamEventHandler
 client.init(keyspace);
 outputHandler.output("Established connection to initial hosts");
 
-StreamPlan plan = new StreamPlan("Bulk Load");
+StreamPlan plan = new StreamPlan("Bulk 
Load").connectionFactory(client.getConnectionFactory());
 
 Map>> endpointToRanges = 
client.getEndpointToRangesMap();
 openSSTables(endpointToRanges);
@@ -220,7 +220,7 @@ public 

[jira] [Updated] (CASSANDRA-7778) Use PID to automatically scale thread pools and throttles.

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7778:
--

Fix Version/s: 3.0

> Use PID to automatically scale thread pools and throttles.
> --
>
> Key: CASSANDRA-7778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7778
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>  Labels: ponies
> Fix For: 3.0
>
>
> Most customers deploy with non-optimal configurations. Examples include the 
> compaction throttle, the streaming throttle, and the RPC request threadpool 
> size, which are set too aggressively or too conservatively.  Often these 
> problems aren't discovered until the cluster is in the field, and the problem 
> will manifest as a critical outage. This results in the perception that 
> Cassandra "falls over" without warning. Because it's difficult to ship with a 
> set of tuning parameters that are valid for all or even most scenarios, I 
> propose that we use a PID algorithm to automatically tune several key 
> parameters. The goal of the PID would be to keep load within a healthy range. 
> If the user chooses, they could always revert to explicitly defined configuration.
> http://en.wikipedia.org/wiki/PID_controller
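
For illustration, a minimal, generic PID loop of the kind the description proposes; 
the gains, the load metric, and the throttle being driven are all assumptions here, 
none of this exists in Cassandra today:

{noformat}
final class PidThrottleController
{
    private final double kp, ki, kd;   // proportional, integral and derivative gains (tuning assumptions)
    private double integral;
    private double previousError;

    PidThrottleController(double kp, double ki, double kd)
    {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /**
     * @param targetLoad   desired load level (e.g. a pending-task count or queue depth)
     * @param measuredLoad currently observed load
     * @param dtSeconds    time since the last adjustment
     * @return correction to apply to the throttle; positive means allow more throughput
     */
    double nextAdjustment(double targetLoad, double measuredLoad, double dtSeconds)
    {
        double error = targetLoad - measuredLoad;
        integral += error * dtSeconds;
        double derivative = (error - previousError) / dtSeconds;
        previousError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
{noformat}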



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/6] git commit: Fix sstableloader unable to connect encrypted node

2014-08-15 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ad6ba3d24 -> 563cea14b
  refs/heads/cassandra-2.1 54fbb0abb -> de5bb5854
  refs/heads/trunk fe8829fa6 -> ff88f2376


Fix sstableloader unable to connect encrypted node

patch by yukim; reviewed by krummas for CASSANDRA-7585


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/563cea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/563cea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/563cea14

Branch: refs/heads/cassandra-2.0
Commit: 563cea14b4bb87cd37ab10399904f08757c34d27
Parents: ad6ba3d
Author: Yuki Morishita 
Authored: Fri Aug 15 12:31:59 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 12:31:59 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   6 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  48 +-
 .../streaming/DefaultConnectionFactory.java |  74 +
 .../streaming/StreamConnectionFactory.java  |  30 
 .../apache/cassandra/streaming/StreamPlan.java  |  16 +-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 149 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 12 files changed, 330 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4306de5..e335484 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -47,6 +47,7 @@
  * Backport CASSANDRA-6747 (CASSANDRA-7560)
  * Track max/min timestamps for range tombstones (CASSANDRA-7647)
  * Fix NPE when listing saved caches dir (CASSANDRA-7632)
+ * Fix sstableloader unable to connect encrypted node (CASSANDRA-7585)
 Merged from 1.2:
  * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
  * Clone token map outside of hot gossip loops (CASSANDRA-7758)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --git a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java 
b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 6b5a152..b520d07 100644
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@ -69,10 +69,14 @@ public class YamlConfigurationLoader implements 
ConfigurationLoader
 
 public Config loadConfig() throws ConfigurationException
 {
+return loadConfig(getStorageConfigURL());
+}
+
+public Config loadConfig(URL url) throws ConfigurationException
+{
 InputStream input = null;
 try
 {
-URL url = getStorageConfigURL();
 logger.info("Loading settings from {}", url);
 try
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 4a1604d..85dc0e4 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -50,7 +50,7 @@ public class SSTableLoader implements StreamEventHandler
 private final OutputHandler outputHandler;
 private final Set failedHosts = new HashSet<>();
 
-private final List sstables = new 
ArrayList();
+private final List sstables = new ArrayList<>();
 private final Multimap streamingDetails = 
HashMultimap.create();
 
 static
@@ -94,7 +94,7 @@ public class SSTableLoader implements StreamEventHandler
 return false;
 }
 
-Set components = new HashSet();
+Set components = new HashSet<>();
 components.add(Component.DATA);
 components.add(Component.PRIMARY_INDEX);
 if (new File(desc.filenameFor(Component.SUMMARY)).exists())
@@ -149,7 +149,7 @@ public class SSTableLoader implements StreamEventHandler
 client.init(keyspace);
 outputHandler.output("Established connection to initial hosts");
 
-StreamPlan plan = new StreamPlan("Bulk Load");
+StreamPlan plan = new StreamPlan("Bulk 
Load").conn

[3/6] git commit: Fix sstableloader unable to connect encrypted node

2014-08-15 Thread yukim
Fix sstableloader unable to connect encrypted node

patch by yukim; reviewed by krummas for CASSANDRA-7585


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/563cea14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/563cea14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/563cea14

Branch: refs/heads/trunk
Commit: 563cea14b4bb87cd37ab10399904f08757c34d27
Parents: ad6ba3d
Author: Yuki Morishita 
Authored: Fri Aug 15 12:31:59 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 12:31:59 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   6 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  48 +-
 .../streaming/DefaultConnectionFactory.java |  74 +
 .../streaming/StreamConnectionFactory.java  |  30 
 .../apache/cassandra/streaming/StreamPlan.java  |  16 +-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 149 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 12 files changed, 330 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4306de5..e335484 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -47,6 +47,7 @@
  * Backport CASSANDRA-6747 (CASSANDRA-7560)
  * Track max/min timestamps for range tombstones (CASSANDRA-7647)
  * Fix NPE when listing saved caches dir (CASSANDRA-7632)
+ * Fix sstableloader unable to connect encrypted node (CASSANDRA-7585)
 Merged from 1.2:
  * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
  * Clone token map outside of hot gossip loops (CASSANDRA-7758)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --git a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java 
b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 6b5a152..b520d07 100644
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@ -69,10 +69,14 @@ public class YamlConfigurationLoader implements 
ConfigurationLoader
 
 public Config loadConfig() throws ConfigurationException
 {
+return loadConfig(getStorageConfigURL());
+}
+
+public Config loadConfig(URL url) throws ConfigurationException
+{
 InputStream input = null;
 try
 {
-URL url = getStorageConfigURL();
 logger.info("Loading settings from {}", url);
 try
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/563cea14/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index 4a1604d..85dc0e4 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@ -50,7 +50,7 @@ public class SSTableLoader implements StreamEventHandler
 private final OutputHandler outputHandler;
 private final Set failedHosts = new HashSet<>();
 
-private final List sstables = new 
ArrayList();
+private final List sstables = new ArrayList<>();
 private final Multimap streamingDetails = 
HashMultimap.create();
 
 static
@@ -94,7 +94,7 @@ public class SSTableLoader implements StreamEventHandler
 return false;
 }
 
-Set components = new HashSet();
+Set components = new HashSet<>();
 components.add(Component.DATA);
 components.add(Component.PRIMARY_INDEX);
 if (new File(desc.filenameFor(Component.SUMMARY)).exists())
@@ -149,7 +149,7 @@ public class SSTableLoader implements StreamEventHandler
 client.init(keyspace);
 outputHandler.output("Established connection to initial hosts");
 
-StreamPlan plan = new StreamPlan("Bulk Load");
+StreamPlan plan = new StreamPlan("Bulk 
Load").connectionFactory(client.getConnectionFactory());
 
 Map>> endpointToRanges = 
client.getEndpointToRangesMap();
 openSSTables(endpointToRanges);
@@ -220,7 +220,7 @@ public class SS

[jira] [Updated] (CASSANDRA-7778) Use PID to automatically scale thread pools and throttles.

2014-08-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7778:


Labels: ponies  (was: )

> Use PID to automatically scale thread pools and throttles.
> --
>
> Key: CASSANDRA-7778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7778
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>  Labels: ponies
> Fix For: 3.0
>
>
> Most customers deploy with non-optimal configurations. Examples include the 
> compaction throttle, the streaming throttle, and the RPC request threadpool 
> size, which are set too aggressively or too conservatively.  Often these 
> problems aren't discovered until the cluster is in the field, and the problem 
> will manifest as a critical outage. This results in the perception that 
> Cassandra "falls over" without warning. Because it's difficult to ship with a 
> set of tuning parameters that are valid for all or even most scenarios, I 
> propose that we use a PID algorithm to automatically tune several key 
> parameters. The goal of the PID would be to keep load within a healthy range. 
> If the user chooses, they could always revert to explicitly defined configuration.
> http://en.wikipedia.org/wiki/PID_controller



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-15 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ff88f237
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ff88f237
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ff88f237

Branch: refs/heads/trunk
Commit: ff88f237662e0ad3389373bc4466bf80c10420d1
Parents: fe8829f de5bb58
Author: Yuki Morishita 
Authored: Fri Aug 15 13:26:39 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 13:26:39 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   7 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  49 +--
 .../streaming/DefaultConnectionFactory.java |  75 ++
 .../streaming/StreamConnectionFactory.java  |  30 
 .../cassandra/streaming/StreamCoordinator.java  |  13 +-
 .../apache/cassandra/streaming/StreamPlan.java  |  17 ++-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 139 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 13 files changed, 337 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff88f237/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff88f237/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff88f237/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ff88f237/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --cc test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index f9ae82e,4043ac8..5d246f4
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@@ -57,10 -40,10 +57,10 @@@ public class StreamTransferTaskTes
  @Test
  public void testScheduleTimeout() throws Exception
  {
 -String ks = "Keyspace1";
 +String ks = KEYSPACE1;
  String cf = "Standard1";
  
- StreamSession session = new 
StreamSession(FBUtilities.getBroadcastAddress(), 0);
+ StreamSession session = new 
StreamSession(FBUtilities.getBroadcastAddress(), null, 0);
  ColumnFamilyStore cfs = Keyspace.open(ks).getColumnFamilyStore(cf);
  
  // create two sstables



[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-08-15 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
src/java/org/apache/cassandra/streaming/ConnectionHandler.java
src/java/org/apache/cassandra/streaming/StreamPlan.java
src/java/org/apache/cassandra/streaming/StreamResultFuture.java
src/java/org/apache/cassandra/streaming/StreamSession.java
src/java/org/apache/cassandra/tools/BulkLoader.java
test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de5bb585
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de5bb585
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de5bb585

Branch: refs/heads/cassandra-2.1
Commit: de5bb5854de5c02e13c3311e0e4f70dc44f5bb83
Parents: 54fbb0a 563cea1
Author: Yuki Morishita 
Authored: Fri Aug 15 13:24:31 2014 -0500
Committer: Yuki Morishita 
Committed: Fri Aug 15 13:24:31 2014 -0500

--
 CHANGES.txt |   1 +
 .../config/YamlConfigurationLoader.java |   7 +-
 .../cassandra/io/sstable/SSTableLoader.java |  22 ++-
 .../cassandra/streaming/ConnectionHandler.java  |  49 +--
 .../streaming/DefaultConnectionFactory.java |  75 ++
 .../streaming/StreamConnectionFactory.java  |  30 
 .../cassandra/streaming/StreamCoordinator.java  |  13 +-
 .../apache/cassandra/streaming/StreamPlan.java  |  17 ++-
 .../cassandra/streaming/StreamResultFuture.java |   2 +-
 .../cassandra/streaming/StreamSession.java  |  13 +-
 .../tools/BulkLoadConnectionFactory.java|  68 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 139 +--
 .../streaming/StreamTransferTaskTest.java   |   2 +-
 13 files changed, 337 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/CHANGES.txt
--
diff --cc CHANGES.txt
index 191b187,e335484..ced91fd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -171,7 -47,9 +171,8 @@@ Merged from 2.0
   * Backport CASSANDRA-6747 (CASSANDRA-7560)
   * Track max/min timestamps for range tombstones (CASSANDRA-7647)
   * Fix NPE when listing saved caches dir (CASSANDRA-7632)
+  * Fix sstableloader unable to connect encrypted node (CASSANDRA-7585)
  Merged from 1.2:
 - * Remove duplicates from StorageService.getJoiningNodes (CASSANDRA-7478)
   * Clone token map outside of hot gossip loops (CASSANDRA-7758)
   * Add stop method to EmbeddedCassandraService (CASSANDRA-7595)
   * Support connecting to ipv6 jmx with nodetool (CASSANDRA-7669)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --cc src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 78621f2,b520d07..0b62ff4
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@@ -81,14 -69,18 +81,19 @@@ public class YamlConfigurationLoader im
  
  public Config loadConfig() throws ConfigurationException
  {
+ return loadConfig(getStorageConfigURL());
+ }
+ 
+ public Config loadConfig(URL url) throws ConfigurationException
+ {
+ InputStream input = null;
  try
  {
- URL url = getStorageConfigURL();
  logger.info("Loading settings from {}", url);
 -try
 +byte[] configBytes;
 +try (InputStream is = url.openStream())
  {
 -input = url.openStream();
 +configBytes = ByteStreams.toByteArray(is);
  }
  catch (IOException e)
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de5bb585/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
index bbb1277,85dc0e4..3d7eea7
--- a/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java
@@@ -157,7 -149,7 +157,7 @@@ public class SSTableLoader implements S
  client.init(keyspace);
  outputHandler.output("Established connection to initial hosts");
  
- StreamPlan plan = new StreamPlan("Bulk Load", 0, connectionsPerHost);
 -StreamPlan plan = new StreamPlan("Bulk 
Load").connectionFactory(client.getConnectionFactory());
++StreamPlan plan = new StreamPlan("Bulk Load", 0, 

[jira] [Commented] (CASSANDRA-7772) Windows - fsync-analog, flush data to disk

2014-08-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098916#comment-14098916
 ] 

Joshua McKenzie commented on CASSANDRA-7772:


Quite a few of the following links concern NTFS and the MFT from pre-Vista 
days, but the 3.1 NTFS format hasn't changed since XP and [win7 doesn't change 
much with regard to the 
driver|http://technet.microsoft.com/en-us/library/ff383236(v=ws.10).aspx]

In the NTFS MFT, [file records have pointers to their parent 
records|https://ad-pdf.s3.amazonaws.com/wp.NT_Orphan_Files.en_us.pdf] and 
directory-type records also contain pointers to their children's MFT records.  
In the event of a power loss after FD.sync() but before the MFT record for a 
directory is synced, on reboot during chkdsk's volume-consistency check [parent 
directories will repair links to their 
children|http://support.microsoft.com/kb/187941/en-us], assuming the child MFT 
attributes have been flushed to disk and chkdsk can rely on the pointers to their 
parents.  Since we process the FD.sync() in 
SequentialWriter.syncDataOnlyInternal(), the automated recovery process should 
take care of linking the directory MFT records correctly in almost all cases.

There remain a couple of potential scenarios that could require end-user 
interaction for data recovery: the parent folder for the file we're writing 
doesn't exist in the MFT at all, or the parent folder exists but is marked 
deleted in the MFT. In either case, chkdsk will create a folder in the root of 
the volume and place the file in that folder; data isn't lost, but this 
scenario would require ops intervention to get things consistent from a C* 
perspective.

While digging around on the topic, the closest I could come to an option for a 
Windows analog to fsync on a directory FD is:
# [CloseHandle on the file handle so directory attributes in the MFT are 
updated|http://technet.microsoft.com/en-us/library/cc781134(v=ws.10).aspx] 
(under "Within a directory entry for a file").  I toyed with the idea of 
closing and reopening the RAF in SequentialWriter as a mechanism to flush the 
directory MFT records but there's no reasonable way to confirm this approach 
actually works.  Procmon tracing shows MFT activity from the close() call but I 
haven't yet found a method to profile the contents of MFT changes in realtime 
and link them to application behavior.  Along with that, it looks like there are 
conditionals as to whether or not CloseHandle even updates the parent directory 
entry, according to that technet article.
# [FlushFileBuffers against a volume 
handle|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx].
  This approach causes a volume-wide flush of all open handles to disk.  My 
expectation is that the performance of this would be prohibitive so I didn't 
pursue it further.

Running chkdsk after a forcible power interruption is automatic, and recovering 
files in these scenarios should be standard operating procedure for Windows 
server admins in the rare event that intervention is required.

Given all of the above I'm comfortable closing this as Not a Problem.  Thoughts 
[~jbellis]?
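
For what it's worth, a minimal sketch of the close-and-reopen idea from experiment 
(1) above, using only standard JDK I/O. This is not the actual SequentialWriter 
code, and as noted there is no way to verify the directory-metadata side effect 
from Java:

{noformat}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

final class CloseReopenSketch
{
    static RandomAccessFile syncCloseAndReopen(File file, RandomAccessFile raf) throws IOException
    {
        long position = raf.getFilePointer();     // remember where the writer left off
        raf.getChannel().force(true);             // the FD.sync()-equivalent data flush
        raf.close();                              // hoped-for side effect: MFT update for the parent directory
        RandomAccessFile reopened = new RandomAccessFile(file, "rw");
        reopened.seek(position);                  // resume writing with a fresh handle
        return reopened;
    }
}
{noformat}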

> Windows - fsync-analog, flush data to disk
> --
>
> Key: CASSANDRA-7772
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7772
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0
>
>
> We currently use CLibrary fsync linux-native calls to flush to disk.  Given 
> the role this plays in our SequentialWriter and data integrity in general we 
> need some analog to this function on Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7477) JSON to SSTable import failing

2014-08-15 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-7477:
---

Attachment: CASSANDRA-2.1.0-7477-v2.patch

[~thobbs] v2 adds a check for {{cfm.isCql3Table()}} and a couple of unit tests

> JSON to SSTable import failing
> --
>
> Key: CASSANDRA-7477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Mint 17 64-bit | 16GiB | C* 2.1
>Reporter: Kishan Karunaratne
>Assignee: Mikhail Stepura
> Fix For: 2.1.0, 2.1.1
>
> Attachments: CASSANDRA-2.1.0-7477-v2.patch, 
> CASSANDRA-2.1.0-7477.patch, log2.log, schema.json
>
>
> Issue affects C* version >= 2.1. Commit found by using git bisect. The 
> previous commit to this one also fails, but due to other reasons (CCM server 
> won't start). This commit is the one that gives the same error as 2.1 HEAD:
> {noformat}
> 02d1e7497a9930120fac367ce82a3b22940acafb is the first bad commit
> commit 02d1e7497a9930120fac367ce82a3b22940acafb
> Author: Brandon Williams 
> Date:   Mon Apr 21 14:42:29 2014 -0500
> Default flush dir to data dir.
> Patch by brandonwilliams, reviewed by yukim for CASSANDRA-7064
> :04 04 c50a123f305b73583ccbfa9c455efc4e4cee228f 
> 507a90290dccb8a929afadf1f833d926049c46ad Mconf
> {noformat}
> {noformat}
> $ PRINT_DEBUG=true nosetests -x -s -v json_tools_test.py 
> json_tools_test (json_tools_test.TestJson) ... cluster ccm directory: 
> /tmp/dtest-8WVBq9
> Starting cluster...
> Version: 2.1.0
> Getting CQLSH...
> Inserting data...
> Flushing and stopping cluster...
> Exporting to JSON file...
> -- test-users-ka-1-Data.db -
> Deleting cluster and creating new...
> Inserting data...
> Importing JSON file...
> Counting keys to import, please wait... (NOTE: to skip this use -n )
> Importing 2 keys...
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:168)
>   at 
> org.apache.cassandra.tools.SSTableImport$JsonColumn.<init>(SSTableImport.java:165)
>   at 
> org.apache.cassandra.tools.SSTableImport.addColumnsToCF(SSTableImport.java:242)
>   at 
> org.apache.cassandra.tools.SSTableImport.addToStandardCF(SSTableImport.java:225)
>   at 
> org.apache.cassandra.tools.SSTableImport.importSorted(SSTableImport.java:464)
>   at 
> org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:351)
>   at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:575)
> ERROR: org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be 
> cast to org.apache.cassandra.db.composites.CellName
> Verifying import...
> data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> FAIL
> removing ccm cluster test at: /tmp/dtest-8WVBq9
> ERROR
> ==
> ERROR: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/dtest.py", line 214, in 
> tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR 
> [SSTableBatchOpen:1] 2014-06-30 13:56:01,032 CassandraDaemon.java:166 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]\n']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-8WVBq9
> dtest: DEBUG: Starting cluster...
> dtest: DEBUG: Version: 2.1.0
> dtest: DEBUG: Getting CQLSH...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Flushing and stopping cluster...
> dtest: DEBUG: Exporting to JSON file...
> dtest: DEBUG: Deleting cluster and creating new...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Importing JSON file...
> dtest: DEBUG: Verifying import...
> dtest: DEBUG: data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-8WVBq9
> - >> end captured logging << -
> ==
> FAIL: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/json_tools_test.py", line 91, 
> in json_tools_test
> [u'gandalf', 1955, u'male', u'p@$$', u'WA'] ] )
> AssertionError: Element counts were not equal:
> First has 0, Second has 1:  [u'frodo', 1985, u'male', u'pass@', u'CA']
> Firs

[jira] [Commented] (CASSANDRA-7740) Parsing of UDF body is broken

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098942#comment-14098942
 ] 

Tyler Hobbs commented on CASSANDRA-7740:


Can you rebase the patch against current trunk?

> Parsing of UDF body is broken
> -
>
> Key: CASSANDRA-7740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7740
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7740.txt
>
>
> The parsing of the function body introduced by CASSANDRA-7395 is somewhat broken. 
> It blindly parses everything up to {{END_BODY}}, which has 2 problems:
> # it parses the function body as if it were part of the CQL syntax, so anything 
> that doesn't happen to be a valid CQL token won't even parse.
> # something like
> {noformat}
> CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA BODY return "END_BODY"; 
> END_BODY;
> {noformat}
> will not parse correctly.
> I don't think we can accept random syntax like that. A better solution (which 
> is the one Postgresql uses) is to pass the function body as a normal string. 
> And in fact I'd be in favor of reusing Postgresql syntax (because why not), 
> that is to have:
> {noformat}
> CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA AS 'return "END_BODY"';
> {noformat}
> One minor annoyance might be, for certain languages, the necessity to double 
> every quote inside the string. But in a separate ticket we could introduce 
> Postgresql's solution of adding an [alternate syntax for string 
> constants|http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING].
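
As a small illustration of the quote-doubling annoyance mentioned above, here is how 
the statement text would have to be built when the body is passed as an ordinary 
string constant (plain string handling only; not tied to any driver or to the parser 
change itself):

{noformat}
public class UdfBodyQuoting
{
    public static void main(String[] args)
    {
        String body = "return 'END_BODY';";   // a body that happens to contain single quotes
        String cql = "CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA AS '"
                   + body.replace("'", "''")  // '' is the escape for ' inside a CQL string constant
                   + "';";
        System.out.println(cql);
        // prints: CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA AS 'return ''END_BODY'';';
    }
}
{noformat}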



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7740) Parsing of UDF body is broken

2014-08-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7740:
---

Reviewer: Tyler Hobbs

> Parsing of UDF body is broken
> -
>
> Key: CASSANDRA-7740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7740
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7740.txt
>
>
> The parsing of the function body introduced by CASSANDRA-7395 is somewhat broken. 
> It blindly parses everything up to {{END_BODY}}, which has 2 problems:
> # it parses the function body as if it were part of the CQL syntax, so anything 
> that doesn't happen to be a valid CQL token won't even parse.
> # something like
> {noformat}
> CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA BODY return "END_BODY"; 
> END_BODY;
> {noformat}
> will not parse correctly.
> I don't think we can accept random syntax like that. A better solution (which 
> is the one Postgresql uses) is to pass the function body as a normal string. 
> And in fact I'd be in favor of reusing Postgresql syntax (because why not), 
> that is to have:
> {noformat}
> CREATE FUNCTION foo() RETURNS text LANGUAGE JAVA AS 'return "END_BODY"';
> {noformat}
> One minor annoyance might be, for certain languages, the necessity to double 
> every quote inside the string. But in a separate ticket we could introduce 
> Postgresql's solution of adding an [alternate syntax for string 
> constants|http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING].



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7477) JSON to SSTable import failing

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098946#comment-14098946
 ] 

Tyler Hobbs commented on CASSANDRA-7477:


Thanks for the tests!

+1 on the v2 patch.

> JSON to SSTable import failing
> --
>
> Key: CASSANDRA-7477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7477
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Mint 17 64-bit | 16GiB | C* 2.1
>Reporter: Kishan Karunaratne
>Assignee: Mikhail Stepura
> Fix For: 2.1.0, 2.1.1
>
> Attachments: CASSANDRA-2.1.0-7477-v2.patch, 
> CASSANDRA-2.1.0-7477.patch, log2.log, schema.json
>
>
> Issue affects C* version >= 2.1. Commit found by using git bisect. The 
> previous commit to this one also fails, but due to other reasons (CCM server 
> won't start). This commit is the one that gives the same error as 2.1 HEAD:
> {noformat}
> 02d1e7497a9930120fac367ce82a3b22940acafb is the first bad commit
> commit 02d1e7497a9930120fac367ce82a3b22940acafb
> Author: Brandon Williams 
> Date:   Mon Apr 21 14:42:29 2014 -0500
> Default flush dir to data dir.
> Patch by brandonwilliams, reviewed by yukim for CASSANDRA-7064
> :04 04 c50a123f305b73583ccbfa9c455efc4e4cee228f 
> 507a90290dccb8a929afadf1f833d926049c46ad Mconf
> {noformat}
> {noformat}
> $ PRINT_DEBUG=true nosetests -x -s -v json_tools_test.py 
> json_tools_test (json_tools_test.TestJson) ... cluster ccm directory: 
> /tmp/dtest-8WVBq9
> Starting cluster...
> Version: 2.1.0
> Getting CQLSH...
> Inserting data...
> Flushing and stopping cluster...
> Exporting to JSON file...
> -- test-users-ka-1-Data.db -
> Deleting cluster and creating new...
> Inserting data...
> Importing JSON file...
> Counting keys to import, please wait... (NOTE: to skip this use -n )
> Importing 2 keys...
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:168)
>   at 
> org.apache.cassandra.tools.SSTableImport$JsonColumn.<init>(SSTableImport.java:165)
>   at 
> org.apache.cassandra.tools.SSTableImport.addColumnsToCF(SSTableImport.java:242)
>   at 
> org.apache.cassandra.tools.SSTableImport.addToStandardCF(SSTableImport.java:225)
>   at 
> org.apache.cassandra.tools.SSTableImport.importSorted(SSTableImport.java:464)
>   at 
> org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:351)
>   at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:575)
> ERROR: org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be 
> cast to org.apache.cassandra.db.composites.CellName
> Verifying import...
> data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> FAIL
> removing ccm cluster test at: /tmp/dtest-8WVBq9
> ERROR
> ==
> ERROR: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/dtest.py", line 214, in 
> tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node1 node log: ['ERROR 
> [SSTableBatchOpen:1] 2014-06-30 13:56:01,032 CassandraDaemon.java:166 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]\n']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-8WVBq9
> dtest: DEBUG: Starting cluster...
> dtest: DEBUG: Version: 2.1.0
> dtest: DEBUG: Getting CQLSH...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Flushing and stopping cluster...
> dtest: DEBUG: Exporting to JSON file...
> dtest: DEBUG: Deleting cluster and creating new...
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Importing JSON file...
> dtest: DEBUG: Verifying import...
> dtest: DEBUG: data: [[u'gandalf', 1955, u'male', u'p@$$', u'WA']]
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-8WVBq9
> - >> end captured logging << -
> ==
> FAIL: json_tools_test (json_tools_test.TestJson)
> --
> Traceback (most recent call last):
>   File "/home/kishan/git/cstar/cassandra-dtest/json_tools_test.py", line 91, 
> in json_tools_test
> [u'gandalf', 1955, u'male', u'p@$$', u'WA'] ] )
> AssertionError: Element counts were not equal:
> First has 0, Second has 1:  [u'frodo', 1985, u'male', u'pass@', u'CA']
> First has 0, Second has 1:  [u'sam',

[1/2] git commit: fix build

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 de5bb5854 -> eab158462


fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/561f6ef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/561f6ef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/561f6ef5

Branch: refs/heads/cassandra-2.1
Commit: 561f6ef5cfb50878dff2ea3ceb182b48f400a23e
Parents: e9d0214
Author: Jake Luciani 
Authored: Fri Aug 15 15:15:09 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:15:09 2014 -0400

--
 .../apache/cassandra/io/util/BufferedRandomAccessFileTest.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/561f6ef5/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index cfabf62..7dbbdc2 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -530,6 +530,8 @@ public class BufferedRandomAccessFileTest
 {
 //see https://issues.apache.org/jira/browse/CASSANDRA-7756
 
+final FileCacheService.CacheKey cacheKey = new 
FileCacheService.CacheKey();
+
 final int THREAD_COUNT = 40;
 ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
 
@@ -550,8 +552,8 @@ public class BufferedRandomAccessFileTest
 RandomAccessReader r2 = RandomAccessReader.open(w2);
 
 
-FileCacheService.instance.put(r1);
-FileCacheService.instance.put(r2);
+FileCacheService.instance.put(cacheKey, r1);
+FileCacheService.instance.put(cacheKey, r2);
 
 final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
 final AtomicBoolean hadError = new AtomicBoolean(false);



[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eab15846
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eab15846
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eab15846

Branch: refs/heads/cassandra-2.1
Commit: eab158462441ba414792acc228051250be34ab24
Parents: de5bb58 561f6ef
Author: Jake Luciani 
Authored: Fri Aug 15 15:15:38 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:15:38 2014 -0400

--
 .../apache/cassandra/io/util/BufferedRandomAccessFileTest.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--




git commit: fix build

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 e9d0214a1 -> 561f6ef5c


fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/561f6ef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/561f6ef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/561f6ef5

Branch: refs/heads/cassandra-2.1.0
Commit: 561f6ef5cfb50878dff2ea3ceb182b48f400a23e
Parents: e9d0214
Author: Jake Luciani 
Authored: Fri Aug 15 15:15:09 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:15:09 2014 -0400

--
 .../apache/cassandra/io/util/BufferedRandomAccessFileTest.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/561f6ef5/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index cfabf62..7dbbdc2 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -530,6 +530,8 @@ public class BufferedRandomAccessFileTest
 {
 //see https://issues.apache.org/jira/browse/CASSANDRA-7756
 
+final FileCacheService.CacheKey cacheKey = new 
FileCacheService.CacheKey();
+
 final int THREAD_COUNT = 40;
 ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
 
@@ -550,8 +552,8 @@ public class BufferedRandomAccessFileTest
 RandomAccessReader r2 = RandomAccessReader.open(w2);
 
 
-FileCacheService.instance.put(r1);
-FileCacheService.instance.put(r2);
+FileCacheService.instance.put(cacheKey, r1);
+FileCacheService.instance.put(cacheKey, r2);
 
 final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
 final AtomicBoolean hadError = new AtomicBoolean(false);



[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eab15846
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eab15846
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eab15846

Branch: refs/heads/trunk
Commit: eab158462441ba414792acc228051250be34ab24
Parents: de5bb58 561f6ef
Author: Jake Luciani 
Authored: Fri Aug 15 15:15:38 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:15:38 2014 -0400

--
 .../apache/cassandra/io/util/BufferedRandomAccessFileTest.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--




[1/3] git commit: fix build

2014-08-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk ff88f2376 -> 7879b2fe9


fix build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/561f6ef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/561f6ef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/561f6ef5

Branch: refs/heads/trunk
Commit: 561f6ef5cfb50878dff2ea3ceb182b48f400a23e
Parents: e9d0214
Author: Jake Luciani 
Authored: Fri Aug 15 15:15:09 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:15:09 2014 -0400

--
 .../apache/cassandra/io/util/BufferedRandomAccessFileTest.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/561f6ef5/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java 
b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
index cfabf62..7dbbdc2 100644
--- a/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
+++ b/test/unit/org/apache/cassandra/io/util/BufferedRandomAccessFileTest.java
@@ -530,6 +530,8 @@ public class BufferedRandomAccessFileTest
 {
 //see https://issues.apache.org/jira/browse/CASSANDRA-7756
 
+final FileCacheService.CacheKey cacheKey = new 
FileCacheService.CacheKey();
+
 final int THREAD_COUNT = 40;
 ExecutorService executorService = 
Executors.newFixedThreadPool(THREAD_COUNT);
 
@@ -550,8 +552,8 @@ public class BufferedRandomAccessFileTest
 RandomAccessReader r2 = RandomAccessReader.open(w2);
 
 
-FileCacheService.instance.put(r1);
-FileCacheService.instance.put(r2);
+FileCacheService.instance.put(cacheKey, r1);
+FileCacheService.instance.put(cacheKey, r2);
 
 final CountDownLatch finished = new CountDownLatch(THREAD_COUNT);
 final AtomicBoolean hadError = new AtomicBoolean(false);



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-15 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7879b2fe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7879b2fe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7879b2fe

Branch: refs/heads/trunk
Commit: 7879b2fe9a7d76145839c64da557c17195a7f750
Parents: ff88f23 eab1584
Author: Jake Luciani 
Authored: Fri Aug 15 15:16:36 2014 -0400
Committer: Jake Luciani 
Committed: Fri Aug 15 15:16:36 2014 -0400

--

--




git commit: Invalidate all caches on table drop

2014-08-15 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 561f6ef5c -> 23233b384


Invalidate all caches on table drop

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-7561


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23233b38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23233b38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23233b38

Branch: refs/heads/cassandra-2.1.0
Commit: 23233b384aede963c883a937adf94859edbd7f02
Parents: 561f6ef
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:14:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:16:58 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dfe9c47..5b5283f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc6
+ * Invalidate all caches on table drop (CASSANDRA-7561)
  * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
  * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
  * Add tracing entries about concurrent range requests (CASSANDRA-7599)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a0860a7..4842285 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -372,8 +372,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 data.unreferenceSSTables();
 indexManager.invalidate();
 
-CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
+invalidateCaches();
 }
 
 /**
@@ -2286,15 +2285,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 private void invalidateCaches()
 {
+CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
 CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-
 if (metadata.isCounter())
-for (CounterCacheKey key : 
CacheService.instance.counterCache.getKeySet())
-if (key.cfId == metadata.cfId)
-CacheService.instance.counterCache.remove(key);
+CacheService.instance.invalidateCounterCacheForCf(metadata.cfId);
 }
 
-
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/service/CacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/CacheService.java 
b/src/java/org/apache/cassandra/service/CacheService.java
index f51a166..1b93c2c 100644
--- a/src/java/org/apache/cassandra/service/CacheService.java
+++ b/src/java/org/apache/cassandra/service/CacheService.java
@@ -310,6 +310,17 @@ public class CacheService implements CacheServiceMBean
 }
 }
 
+public void invalidateCounterCacheForCf(UUID cfId)
+{
+Iterator counterCacheIterator = 
counterCache.getKeySet().iterator();
+while (counterCacheIterator.hasNext())
+{
+CounterCacheKey counterCacheKey = counterCacheIterator.next();
+if (counterCacheKey.cfId.equals(cfId))
+counterCacheIterator.remove();
+}
+}
+
 public void invalidateCounterCache()
 {
 counterCache.clear();



[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea686198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea686198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea686198

Branch: refs/heads/cassandra-2.1
Commit: ea686198fdecc1c251f05c443b2c0c5878dbbdd1
Parents: eab1584 23233b3
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:20:29 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:20:29 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea686198/CHANGES.txt
--
diff --cc CHANGES.txt
index ced91fd,5b5283f..9e9b805
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
 +2.1.1
 + * Validate IPv6 wildcard addresses properly (CASSANDRA-7680)
 + * (cqlsh) Error when tracing query (CASSANDRA-7613)
 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569)
 + * SSTableExport uses correct validator to create string representation of 
partition
 +   keys (CASSANDRA-7498)
 + * Avoid NPEs when receiving type changes for an unknown keyspace 
(CASSANDRA-7689)
 + * Add support for custom 2i validation (CASSANDRA-7575)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) cqlsh should automatically disable tracing when selecting
 +   from system_traces (CASSANDRA-7641)
 + * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)
 + * Don't depend on cassandra config for nodetool ring (CASSANDRA-7508)
 + * (cqlsh) Fix failing cqlsh formatting tests (CASSANDRA-7703)
 + * Fix IncompatibleClassChangeError from hadoop2 (CASSANDRA-7229)
 + * Add 'nodetool sethintedhandoffthrottlekb' (CASSANDRA-7635)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 + * cqlsh fails when version number parts are not int (CASSANDRA-7524)
 +
 +
  2.1.0-rc6
+  * Invalidate all caches on table drop (CASSANDRA-7561)
   * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
   * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
   * Add tracing entries about concurrent range requests (CASSANDRA-7599)



[1/2] git commit: Invalidate all caches on table drop

2014-08-15 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 eab158462 -> ea686198f


Invalidate all caches on table drop

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-7561


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23233b38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23233b38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23233b38

Branch: refs/heads/cassandra-2.1
Commit: 23233b384aede963c883a937adf94859edbd7f02
Parents: 561f6ef
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:14:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:16:58 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dfe9c47..5b5283f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc6
+ * Invalidate all caches on table drop (CASSANDRA-7561)
  * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
  * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
  * Add tracing entries about concurrent range requests (CASSANDRA-7599)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a0860a7..4842285 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -372,8 +372,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 data.unreferenceSSTables();
 indexManager.invalidate();
 
-CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
+invalidateCaches();
 }
 
 /**
@@ -2286,15 +2285,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 private void invalidateCaches()
 {
+CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
 CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-
 if (metadata.isCounter())
-for (CounterCacheKey key : 
CacheService.instance.counterCache.getKeySet())
-if (key.cfId == metadata.cfId)
-CacheService.instance.counterCache.remove(key);
+CacheService.instance.invalidateCounterCacheForCf(metadata.cfId);
 }
 
-
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/service/CacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/CacheService.java 
b/src/java/org/apache/cassandra/service/CacheService.java
index f51a166..1b93c2c 100644
--- a/src/java/org/apache/cassandra/service/CacheService.java
+++ b/src/java/org/apache/cassandra/service/CacheService.java
@@ -310,6 +310,17 @@ public class CacheService implements CacheServiceMBean
 }
 }
 
+public void invalidateCounterCacheForCf(UUID cfId)
+{
+Iterator counterCacheIterator = 
counterCache.getKeySet().iterator();
+while (counterCacheIterator.hasNext())
+{
+CounterCacheKey counterCacheKey = counterCacheIterator.next();
+if (counterCacheKey.cfId.equals(cfId))
+counterCacheIterator.remove();
+}
+}
+
 public void invalidateCounterCache()
 {
 counterCache.clear();



[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-15 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea686198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea686198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea686198

Branch: refs/heads/trunk
Commit: ea686198fdecc1c251f05c443b2c0c5878dbbdd1
Parents: eab1584 23233b3
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:20:29 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:20:29 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea686198/CHANGES.txt
--
diff --cc CHANGES.txt
index ced91fd,5b5283f..9e9b805
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
 +2.1.1
 + * Validate IPv6 wildcard addresses properly (CASSANDRA-7680)
 + * (cqlsh) Error when tracing query (CASSANDRA-7613)
 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569)
 + * SSTableExport uses correct validator to create string representation of 
partition
 +   keys (CASSANDRA-7498)
 + * Avoid NPEs when receiving type changes for an unknown keyspace 
(CASSANDRA-7689)
 + * Add support for custom 2i validation (CASSANDRA-7575)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) cqlsh should automatically disable tracing when selecting
 +   from system_traces (CASSANDRA-7641)
 + * (Hadoop) Add CqlOutputFormat (CASSANDRA-6927)
 + * Don't depend on cassandra config for nodetool ring (CASSANDRA-7508)
 + * (cqlsh) Fix failing cqlsh formatting tests (CASSANDRA-7703)
 + * Fix IncompatibleClassChangeError from hadoop2 (CASSANDRA-7229)
 + * Add 'nodetool sethintedhandoffthrottlekb' (CASSANDRA-7635)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 + * cqlsh fails when version number parts are not int (CASSANDRA-7524)
 +
 +
  2.1.0-rc6
+  * Invalidate all caches on table drop (CASSANDRA-7561)
   * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
   * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
   * Add tracing entries about concurrent range requests (CASSANDRA-7599)



[1/3] git commit: Invalidate all caches on table drop

2014-08-15 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7879b2fe9 -> 5233948d1


Invalidate all caches on table drop

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-7561


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23233b38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23233b38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23233b38

Branch: refs/heads/trunk
Commit: 23233b384aede963c883a937adf94859edbd7f02
Parents: 561f6ef
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:14:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:16:58 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dfe9c47..5b5283f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc6
+ * Invalidate all caches on table drop (CASSANDRA-7561)
  * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
  * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)
  * Add tracing entries about concurrent range requests (CASSANDRA-7599)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index a0860a7..4842285 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -372,8 +372,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 data.unreferenceSSTables();
 indexManager.invalidate();
 
-CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
+invalidateCaches();
 }
 
 /**
@@ -2286,15 +2285,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 private void invalidateCaches()
 {
+CacheService.instance.invalidateKeyCacheForCf(metadata.cfId);
 CacheService.instance.invalidateRowCacheForCf(metadata.cfId);
-
 if (metadata.isCounter())
-for (CounterCacheKey key : 
CacheService.instance.counterCache.getKeySet())
-if (key.cfId == metadata.cfId)
-CacheService.instance.counterCache.remove(key);
+CacheService.instance.invalidateCounterCacheForCf(metadata.cfId);
 }
 
-
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23233b38/src/java/org/apache/cassandra/service/CacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/CacheService.java 
b/src/java/org/apache/cassandra/service/CacheService.java
index f51a166..1b93c2c 100644
--- a/src/java/org/apache/cassandra/service/CacheService.java
+++ b/src/java/org/apache/cassandra/service/CacheService.java
@@ -310,6 +310,17 @@ public class CacheService implements CacheServiceMBean
 }
 }
 
+public void invalidateCounterCacheForCf(UUID cfId)
+{
+Iterator counterCacheIterator = 
counterCache.getKeySet().iterator();
+while (counterCacheIterator.hasNext())
+{
+CounterCacheKey counterCacheKey = counterCacheIterator.next();
+if (counterCacheKey.cfId.equals(cfId))
+counterCacheIterator.remove();
+}
+}
+
 public void invalidateCounterCache()
 {
 counterCache.clear();



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-15 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5233948d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5233948d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5233948d

Branch: refs/heads/trunk
Commit: 5233948d124824edd8483a649aa6fec82f2259b1
Parents: 7879b2f ea68619
Author: Aleksey Yeschenko 
Authored: Fri Aug 15 22:21:14 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Aug 15 22:21:14 2014 +0300

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 10 +++---
 src/java/org/apache/cassandra/service/CacheService.java | 11 +++
 3 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5233948d/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5233948d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[jira] [Commented] (CASSANDRA-7743) Possible C* OOM issue during long running test

2014-08-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098977#comment-14098977
 ] 

T Jake Luciani commented on CASSANDRA-7743:
---

Looks good +1

> Possible C* OOM issue during long running test
> --
>
> Key: CASSANDRA-7743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Google Compute Engine, n1-standard-1
>Reporter: Pierre Laporte
>Assignee: Benedict
> Fix For: 2.1.0
>
>
> During a long running test, we ended up with a lot of 
> "java.lang.OutOfMemoryError: Direct buffer memory" errors on the Cassandra 
> instances.
> Here is an example of stacktrace from system.log :
> {code}
> ERROR [SharedPool-Worker-1] 2014-08-11 11:09:34,610 ErrorMessage.java:218 - 
> Unexpected exception during request
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_25]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.7.0_25]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_25]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:434) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:179) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:168) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:98) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:251)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:112)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) 
> ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
> at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
> {code}
> The test consisted of a 3-node cluster of n1-standard-1 GCE instances (1 
> vCPU, 3.75 GB RAM) running cassandra-2.1.0-rc5, and a n1-standard-2 instance 
> running the test.
> After ~2.5 days, several requests start to fail and we see the previous 
> stacktraces in the system.log file.
> The output from the Linux ‘free’ and ‘meminfo’ tools suggests that there is still 
> memory available.
> {code}
> $ free -m
>              total       used       free     shared    buffers     cached
> Mem:          3702       3532        169          0        161        854
> -/+ buffers/cache:       2516       1185
> Swap:            0          0          0
> $ head -n 4 /proc/meminfo
> MemTotal:        3791292 kB
> MemFree:          173568 kB
> Buffers:          165608 kB
> Cached:           874752 kB
> {code}
> These errors do not affect all the queries we run. The cluster is still 
> responsive but is unable to display tracing information using cqlsh :
> {code}
> $ ./bin/nodetool --host 10.240.137.253 status duration_test
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  10.240.98.27925.17 KB  256 100.0%
> 41314169-eff5-465f-85ea-d501fd8f9c5e  RAC1
> UN  10.240.137.253  1.1 MB 256 100.0%
> c706f5f9-c5f3-4d5e-95e9-a8903823827e  RAC1
> UN  10.240.72.183   896.57 KB  256 
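
Side note, not from the ticket: the "Direct buffer memory" OOM above is thrown by {{java.nio.Bits}} when the JVM's direct-buffer budget ({{-XX:MaxDirectMemorySize}}, which defaults to roughly the max heap size) is exhausted, so the free system memory reported by {{free}}/{{meminfo}} is not the limit that matters here. A minimal, illustrative sketch for inspecting direct-buffer usage on a running JVM (the class name and output format are ours, not Cassandra's):

{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferUsage
{
    public static void main(String[] args)
    {
        // The "direct" pool is the one exhausted in the stack trace above;
        // the "mapped" pool covers memory-mapped files.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class))
        {
            System.out.printf("%s: used=%d bytes, capacity=%d bytes, buffers=%d%n",
                              pool.getName(), pool.getMemoryUsed(), pool.getTotalCapacity(), pool.getCount());
        }
    }
}
{code}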

[jira] [Updated] (CASSANDRA-7526) Defining UDFs using scripting language directly from CQL

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7526:
--

Reviewer: Tyler Hobbs

> Defining UDFs using scripting language directly from CQL
> 
>
> Key: CASSANDRA-7526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7526
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7526-full.txt, 7526-on-7562.txt
>
>
> In CASSANDRA-7395 we'll introduce the ability to define user functions by 
> dropping a java class server side. While this is a good first step and a good 
> option to have in any case, it would be nice to provide a simpler way to 
> define those functions directly from CQL. And while we probably don't want to 
> re-invent a new programming language inside CQL, we can reuse one. Typically, 
> with java 8, we could use nashorn. This would allow a syntax along the lines 
> of:
> {noformat}
> CREATE FUNCTION sum (a bigint, b bigint) bigint AS { return a + b; }
> {noformat}
> Note that in this, everything before the AS will be parsed by us, which we'll 
> probably want because we'll probably need to have the types of 
> arguments/return in practice anyway, and it's a good idea to reuse CQL types. 
> The expression after the AS will be given to Nashorn however.
> Please note that in theory we could ultimately support multiple language 
> after the AS. However, I'd like to focus on supporting just one for this 
> ticket and I'm keen on using javascript through Nashorn because it's the 
> one that will ship with java from now on, it feels like a safe default.
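
For illustration only (not the proposed patch): on Java 8 the body after {{AS}} could be handed to Nashorn through the standard {{javax.script}} API, with the CQL-declared argument names becoming the parameters of a generated JavaScript function. The wrapper function and the argument/return handling below are assumptions made just for this sketch:

{code}
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornUdfSketch
{
    public static void main(String[] args) throws Exception
    {
        // Java 8 bundles the Nashorn JavaScript engine.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");

        // Everything before AS is parsed as CQL (name, argument names/types, return type);
        // only the brace-enclosed body is JavaScript.  Here we wrap that body in a JS
        // function whose parameters are the declared argument names.
        js.eval("function sum(a, b) { return a + b; }");

        Object result = ((Invocable) js).invokeFunction("sum", 1L, 2L);
        System.out.println(result); // a JS Number; a real implementation would convert it back to a CQL bigint
    }
}
{code}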



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7526) Defining UDFs using scripting language directly from CQL

2014-08-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098988#comment-14098988
 ] 

Jonathan Ellis commented on CASSANDRA-7526:
---

[~thobbs] to review

> Defining UDFs using scripting language directly from CQL
> 
>
> Key: CASSANDRA-7526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7526
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7526-full.txt, 7526-on-7562.txt
>
>
> In CASSANDRA-7395 we'll introduce the ability to define user functions by 
> dropping a java class server side. While this is a good first step and a good 
> option to have in any case, it would be nice to provide a simpler way to 
> define those functions directly from CQL. And while we probably don't want to 
> re-invent a new programming language inside CQL, we can reuse one. Typically, 
> with java 8, we could use nashorn. This would allow a syntax along the lines 
> of:
> {noformat}
> CREATE FUNCTION sum (a bigint, b bigint) bigint AS { return a + b; }
> {noformat}
> Note that in this, everything before the AS will be parsed by us, which we'll 
> probably want because we'll probably need to have the types of 
> arguments/return in practice anyway, and it's a good idea to reuse CQL types. 
> The expression after the AS will be given to Nashorn however.
> Please note that in theory we could ultimately support multiple language 
> after the AS. However, I'd like to focus on supporting just one for this 
> ticket and I'm keen on using javascript through Nashorn because it's the 
> one that will ship with java from now on, it feels like a safe default.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7563:
--

Reviewer: Tyler Hobbs

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7577) cqlsh: CTRL-R history search not working on OSX

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7577:
--

Reviewer: Aleksey Yeschenko

> cqlsh: CTRL-R history search not working on OSX
> ---
>
> Key: CASSANDRA-7577
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7577
> Project: Cassandra
>  Issue Type: Bug
> Environment: OSX - plain Terminal program
> C* 2.0.x, 2.1, trunk
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7577-2.0.txt, 7577-2.1.txt
>
>
> _recursive-history-search_ via ctrl-R does not work in cqlsh. The history 
> itself works via cursor up/down.
> It works on Linux (and I guess on Windows with cygwin) but not on my Mac.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7577) cqlsh: CTRL-R history search not working on OSX

2014-08-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098989#comment-14098989
 ] 

Jonathan Ellis commented on CASSANDRA-7577:
---

[~iamaleksey] to review

> cqlsh: CTRL-R history search not working on OSX
> ---
>
> Key: CASSANDRA-7577
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7577
> Project: Cassandra
>  Issue Type: Bug
> Environment: OSX - plain Terminal program
> C* 2.0.x, 2.1, trunk
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7577-2.0.txt, 7577-2.1.txt
>
>
> _recursive-history-search_ via ctrl-R does not work in cqlsh. The history 
> itself works via cursor up/down.
> It works on Linux (and I guess on Windows with cygwin) but not on my Mac.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7466) Unit Test Suite Breaks when Run in a Single JVM

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7466:
--

Reviewer: Aleksey Yeschenko  (was: Joshua McKenzie)

> Unit Test Suite Breaks when Run in a Single JVM
> ---
>
> Key: CASSANDRA-7466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7466
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tests
> Environment: MacOS 10.9.3, IntelliJ IDEA 13.1.3, Java 1.7.0_51
>Reporter: Caleb William Rackliffe
>Assignee: Caleb William Rackliffe
>Priority: Minor
>  Labels: unit-test
> Attachments: trunk-7466.txt
>
>
> Note: Instead of picking a version below, I'll simply note that I'm on 
> {{trunk}} at commit {{c027183ea4e901cf1d44e06704f6d78f84405bb4}}
> I pulled down the source and followed 
> http://wiki.apache.org/cassandra/RunningCassandraInIDEA to import C* as an 
> IDEA project. Everything in the tutorial works as it should, but when I tried 
> to run the unit tests in {{test/unit/org.apache.cassandra}}, the suite failed 
> a couple tests in, complaining that it couldn't find the {{system}} keyspace 
> in {{build/test/cassandra/data}}.
> tl;dr static initialization makes it hard to run the unit tests in the same 
> JVM
> The full story is that...
> 1.) When the first test in the suite is run, the {{system}} keyspace is 
> created on disk and in the in-memory schema.
> 2.) Many subsequent tests, like {{BlacklistingCompactionsTest}}, remove the 
> {{system}} keyspace directory (among other things) in {{defineSchema()}} with 
> a call to {{SchemaLoader.prepareServer()}}.
> 3.) While these tests create the keyspaces they require, they do *not* 
> recreate the system keyspace, and so they fail when they force a compaction 
> or perform any other action that goes looking for it.
> You can run the suite with IDEA's class/method forking, and you get a little 
> bit better results, but it still seems like this shouldn't be necessary.
> I guess there are two ways to fix it:
> 1.) We rebuild the system keyspace before each test.
> 2.) We leave the system keyspace alone.
> I took a hack at #1 in the attached patch. It looks like it fixes this 
> specific problem, but I'm not super-believable in this codebase yet...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7466) Unit Test Suite Breaks when Run in a Single JVM

2014-08-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098993#comment-14098993
 ] 

Jonathan Ellis commented on CASSANDRA-7466:
---

switching reviewer to [~iamaleksey]

> Unit Test Suite Breaks when Run in a Single JVM
> ---
>
> Key: CASSANDRA-7466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7466
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tests
> Environment: MacOS 10.9.3, IntelliJ IDEA 13.1.3, Java 1.7.0_51
>Reporter: Caleb William Rackliffe
>Assignee: Caleb William Rackliffe
>Priority: Minor
>  Labels: unit-test
> Attachments: trunk-7466.txt
>
>
> Note: Instead of picking a version below, I'll simply note that I'm on 
> {{trunk}} at commit {{c027183ea4e901cf1d44e06704f6d78f84405bb4}}
> I pulled down the source and followed 
> http://wiki.apache.org/cassandra/RunningCassandraInIDEA to import C* as an 
> IDEA project. Everything in the tutorial works as it should, but when I tried 
> to run the unit tests in {{test/unit/org.apache.cassandra}}, the suite failed 
> a couple tests in, complaining that it couldn't find the {{system}} keyspace 
> in {{build/test/cassandra/data}}.
> tl;dr static initialization makes it hard to run the unit tests in the same 
> JVM
> The full story is that...
> 1.) When the first test in the suite is run, the {{system}} keyspace is 
> created on disk and in the in-memory schema.
> 2.) Many subsequent tests, like {{BlacklistingCompactionsTest}}, remove the 
> {{system}} keyspace directory (among other things) in {{defineSchema()}} with 
> a call to {{SchemaLoader.prepareServer()}}.
> 3.) While these tests create the keyspaces they require, they do *not* 
> recreate the system keyspace, and so they fail when they force a compaction 
> or perform any other action that goes looking for it.
> You can run the suite with IDEA's class/method forking, and you get a little 
> bit better results, but it still seems like this shouldn't be necessary.
> I guess there are two ways to fix it:
> 1.) We rebuild the system keyspace before each test.
> 2.) We leave the system keyspace alone.
> I took a hack at #1 in the attached patch. It looks like it fixes this 
> specific problem, but I'm not super-believable in this codebase yet...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7716) cassandra-stress: provide better error messages

2014-08-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14098994#comment-14098994
 ] 

T Jake Luciani commented on CASSANDRA-7716:
---

This may no longer be useful due to CASSANDRA-7519.  I'll revisit once we 
commit that

> cassandra-stress: provide better error messages
> ---
>
> Key: CASSANDRA-7716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7716
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: T Jake Luciani
>Priority: Trivial
> Fix For: 2.1.1
>
> Attachments: 7716.txt
>
>
> Just tried new stress tool.
> It would be great if the stress tool gives better error messages by telling 
> the user what option or config parameter/value caused an error.
> YAML parse errors are meaningful (they give code snippets etc).
> Examples are:
> {noformat}
> WARN  16:59:39 Setting caching options with deprecated syntax.
> Exception in thread "main" java.lang.NullPointerException
>   at java.util.regex.Matcher.getTextLength(Matcher.java:1234)
>   at java.util.regex.Matcher.reset(Matcher.java:308)
>   at java.util.regex.Matcher.<init>(Matcher.java:228)
>   at java.util.regex.Pattern.matcher(Pattern.java:1088)
>   at 
> org.apache.cassandra.stress.settings.OptionDistribution.get(OptionDistribution.java:67)
>   at 
> org.apache.cassandra.stress.StressProfile.init(StressProfile.java:151)
>   at 
> org.apache.cassandra.stress.StressProfile.load(StressProfile.java:482)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser.<init>(SettingsCommandUser.java:53)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser.build(SettingsCommandUser.java:114)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommand.get(SettingsCommand.java:134)
>   at 
> org.apache.cassandra.stress.settings.StressSettings.get(StressSettings.java:218)
>   at 
> org.apache.cassandra.stress.settings.StressSettings.parse(StressSettings.java:206)
>   at org.apache.cassandra.stress.Stress.main(Stress.java:58)
> {noformat}
> When table-definition is wrong:
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.SyntaxException: line 6:14 mismatched input 
> '(' expecting ')'
>   at org.apache.cassandra.config.CFMetaData.compile(CFMetaData.java:550)
>   at 
> org.apache.cassandra.stress.StressProfile.init(StressProfile.java:134)
>   at 
> org.apache.cassandra.stress.StressProfile.load(StressProfile.java:482)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser.<init>(SettingsCommandUser.java:53)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommandUser.build(SettingsCommandUser.java:114)
>   at 
> org.apache.cassandra.stress.settings.SettingsCommand.get(SettingsCommand.java:134)
>   at 
> org.apache.cassandra.stress.settings.StressSettings.get(StressSettings.java:218)
>   at 
> org.apache.cassandra.stress.settings.StressSettings.parse(StressSettings.java:206)
>   at org.apache.cassandra.stress.Stress.main(Stress.java:58)
> Caused by: org.apache.cassandra.exceptions.SyntaxException: line 6:14 
> mismatched input '(' expecting ')'
>   at 
> org.apache.cassandra.cql3.CqlParser.throwLastRecognitionError(CqlParser.java:273)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:456)
>   at org.apache.cassandra.config.CFMetaData.compile(CFMetaData.java:541)
>   ... 8 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7460) Send source sstable level when bootstrapping or replacing nodes

2014-08-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099001#comment-14099001
 ] 

Jonathan Ellis commented on CASSANDRA-7460:
---

switching reviewer to [~iamaleksey]

> Send source sstable level when bootstrapping or replacing nodes
> ---
>
> Key: CASSANDRA-7460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7460
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0
>
> Attachments: 0001-wip-keep-sstable-level-when-bootstrapping.patch
>
>
> When replacing or bootstrapping a new node we can keep the source sstable 
> level to avoid doing a lot of compaction after bootstrap



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7460) Send source sstable level when bootstrapping or replacing nodes

2014-08-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7460:
--

Reviewer: Aleksey Yeschenko  (was: Yuki Morishita)

> Send source sstable level when bootstrapping or replacing nodes
> ---
>
> Key: CASSANDRA-7460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7460
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0
>
> Attachments: 0001-wip-keep-sstable-level-when-bootstrapping.patch
>
>
> When replacing or bootstrapping a new node we can keep the source sstable 
> level to avoid doing a lot of compaction after bootstrap



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7781) UDF class methods are not verified to be static

2014-08-15 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-7781:
--

 Summary: UDF class methods are not verified to be static
 Key: CASSANDRA-7781
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7781
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 3.0


I added a test for this in CASSANDRA-7395, but apparently forgot that it was 
broken when I committed the patch..

We just need to check:

{code}
Modifier.isStatic(method.getModifiers())
{code}
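
For reference, the check comes from {{java.lang.reflect.Modifier}}; a minimal sketch of how the validation could look when resolving the UDF's target method (the helper class and its error message are hypothetical, not the committed patch):

{code}
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

final class UdfMethodCheck
{
    // Reject a resolved UDF target method unless it is declared static.
    static void requireStatic(Method method)
    {
        if (!Modifier.isStatic(method.getModifiers()))
            throw new IllegalArgumentException("UDF method " + method + " must be static");
    }

    private UdfMethodCheck()
    {
    }
}
{code}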



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7782) NPE autosaving cache

2014-08-15 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-7782:
---

 Summary: NPE autosaving cache
 Key: CASSANDRA-7782
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7782
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Priority: Minor
 Fix For: 2.1 rc6


With the machine just sitting idle for a while:

{noformat}
INFO  18:33:35 Writing Memtable-sstable_activity@1889719059(162 serialized 
bytes, 72 ops, 0%/0% of on/off-heap limit)
INFO  18:33:35 Completed flushing 
/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-3-Data.db
 (176 bytes) for commitlog position ReplayPosition(segmentId=1408116815479, 
position=129971)
ERROR 19:33:34 Exception in thread Thread[CompactionExecutor:12,1,main]
java.lang.NullPointerException: null
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:464)
 ~[main/:na]
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:452)
 ~[main/:na]
at 
org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1053)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_65]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_65]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_65]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
INFO  19:33:35 Enqueuing flush of sstable_activity: 1561 (0%) on-heap, 0 (0%) 
off-heap
INFO  19:33:35 Writing Memtable-sstable_activity@1705670040(162 serialized 
bytes, 72 ops, 0%/0% of on/off-heap limit)
INFO  19:33:35 Completed flushing 
/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-4-Data.db
 (177 bytes) for commitlog position ReplayPosition(segmentId=1408116815479, 
position=134711)
INFO  19:33:35 Compacting 
[SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-4-Data.db'),
 
SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-2-Data.db'),
 
SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-3-Data.db'),
 
SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-1-Data.db')]
{noformat}

Looks similar to CASSANDRA-7632



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7781) UDF class methods are not verified to be static

2014-08-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7781:
---

Attachment: 7781.txt

7781.txt checks that the method is static.  There's already a (failing) unit 
test to cover this.

> UDF class methods are not verified to be static
> ---
>
> Key: CASSANDRA-7781
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7781
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 7781.txt
>
>
> I added a test for this in CASSANDRA-7395, but apparently forgot that it was 
> broken when I committed the patch..
> We just need to check:
> {code}
> Modifier.isStatic(method.getModifiers())
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7431) Hadoop integration does not perform reverse DNS lookup correctly on EC2

2014-08-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099012#comment-14099012
 ] 

Brandon Williams commented on CASSANDRA-7431:
-

This still doesn't make sense to me:

bq. However, when using linux tools such as "host" or "dig", the EC2 hostname 
is properly resolved from the EC2 instance, so there's some problem with Java's 
InetAddress.getHostname() and EC2.

Without patching the tools or the system, any program is going to ask the system 
to resolve it, and it's always going to follow the rules in /etc/nsswitch.conf 
and proceed from there (usually files, then dns for hosts.)  Before adding this 
I'd like to understand exactly what's different about EC2 here, or if this is 
just a resolution issue.

> Hadoop integration does not perform reverse DNS lookup correctly on EC2
> ---
>
> Key: CASSANDRA-7431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7431
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Attachments: 2.0-CASSANDRA-7431.txt
>
>
> The split assignment on AbstractColumnFamilyInputFormat:247 performs a reverse 
> DNS lookup of Cassandra IPs in order to preserve locality in Hadoop (task 
> trackers are identified by hostnames).
> However, the reverse lookup of an EC2 IP does not yield the EC2 hostname of 
> that endpoint when running from an EC2 instance due to the use of 
> InetAddress.getHostname().
> In order to show this, consider the following piece of code:
> {code:title=DnsResolver.java|borderStyle=solid}
> public class DnsResolver {
> public static void main(String[] args) throws Exception {
> InetAddress namenodePublicAddress = InetAddress.getByName(args[0]);
> System.out.println("getHostAddress: " + 
> namenodePublicAddress.getHostAddress());
> System.out.println("getHostName: " + 
> namenodePublicAddress.getHostName());
> }
> }
> {code}
> When this code is run from my machine to perform reverse lookup of an EC2 IP, 
> the output is:
> {code:none}
> ➜  java DnsResolver 54.201.254.99
> getHostAddress: 54.201.254.99
> getHostName: ec2-54-201-254-99.compute-1.amazonaws.com
> {code}
> When this code is executed from inside an EC2 machine, the output is:
> {code:none}
> ➜  java DnsResolver 54.201.254.99
> getHostAddress: 54.201.254.99
> getHostName: 54.201.254.99
> {code}
> However, when using linux tools such as "host" or "dig", the EC2 hostname is 
> properly resolved from the EC2 instance, so there's some problem with Java's 
> InetAddress.getHostname() and EC2.
> Two consequences of this bug during AbstractColumnFamilyInputFormat split 
> definition are:
> 1) If the Hadoop cluster is configured to use EC2 public DNS, the locality 
> will be lost, because Hadoop will try to match the CFIF split location 
> (public IP) with the task tracker location (public DNS), so no matches will 
> be found.
> 2) If the Cassandra nodes' broadcast_address is set to public IPs, all hadoop 
> communication will be done via the public IP, which will incur additional 
> transfer charges. If the public IP is mapped to the EC2 DNS during split 
> definition, when the task is executed, ColumnFamilyRecordReader will resolve 
> the public DNS to the private IP of the instance, so there will be no 
> additional charges.
> A similar bug was filed in the WHIRR project: 
> https://issues.apache.org/jira/browse/WHIRR-128



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-7782) NPE autosaving cache

2014-08-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-7782:
---

Assignee: Marcus Eriksson

> NPE autosaving cache
> 
>
> Key: CASSANDRA-7782
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7782
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 2.1 rc6
>
>
> With the machine just sitting idle for a while:
> {noformat}
> INFO  18:33:35 Writing Memtable-sstable_activity@1889719059(162 serialized 
> bytes, 72 ops, 0%/0% of on/off-heap limit)
> INFO  18:33:35 Completed flushing 
> /srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-3-Data.db
>  (176 bytes) for commitlog position ReplayPosition(segmentId=1408116815479, 
> position=129971)
> ERROR 19:33:34 Exception in thread Thread[CompactionExecutor:12,1,main]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:464)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:452)
>  ~[main/:na]
> at 
> org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1053)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
> INFO  19:33:35 Enqueuing flush of sstable_activity: 1561 (0%) on-heap, 0 (0%) 
> off-heap
> INFO  19:33:35 Writing Memtable-sstable_activity@1705670040(162 serialized 
> bytes, 72 ops, 0%/0% of on/off-heap limit)
> INFO  19:33:35 Completed flushing 
> /srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-4-Data.db
>  (177 bytes) for commitlog position ReplayPosition(segmentId=1408116815479, 
> position=134711)
> INFO  19:33:35 Compacting 
> [SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-4-Data.db'),
>  
> SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-2-Data.db'),
>  
> SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-3-Data.db'),
>  
> SSTableReader(path='/srv/cassandra/bin/../data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-1-Data.db')]
> {noformat}
> Looks similar to CASSANDRA-7632



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7384) Collect metrics on queries by consistency level

2014-08-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099025#comment-14099025
 ] 

sankalp kohli commented on CASSANDRA-7384:
--

What do you think, Jonathan? We have also seen many cases where it will be 
useful to put this in the server. The clients sometimes run in different 
environments where it is hard to collect metrics. This is also along the lines of 
the C*-as-a-Service model.
Convinced? :)

> Collect metrics on queries by consistency level
> ---
>
> Key: CASSANDRA-7384
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7384
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Vishy Kasar
>Assignee: sankalp kohli
>Priority: Minor
> Fix For: 2.0.10
>
>
> We had cases where Cassandra client users thought that they were doing 
> queries at one consistency level, but that turned out not to be correct. It would be 
> good to collect metrics on the number of queries done at each consistency 
> level on the server. See the equivalent JIRA on the Java driver: 
> https://datastax-oss.atlassian.net/browse/JAVA-354
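
A minimal sketch of the per-level counting this would require, assuming one counter 
per consistency level on the coordinator; the ConsistencyLevel enum below stands in 
for org.apache.cassandra.db.ConsistencyLevel and the class ConsistencyLevelMetrics is 
hypothetical, not part of any attached patch.

{code:title=ConsistencyLevelMetrics.java|borderStyle=solid}
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class ConsistencyLevelMetrics
{
    // stand-in for org.apache.cassandra.db.ConsistencyLevel
    public enum ConsistencyLevel { ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE }

    // one counter per level, created up front so the hot path never allocates or locks
    private final Map<ConsistencyLevel, AtomicLong> counts =
            new EnumMap<ConsistencyLevel, AtomicLong>(ConsistencyLevel.class);

    public ConsistencyLevelMetrics()
    {
        for (ConsistencyLevel cl : ConsistencyLevel.values())
            counts.put(cl, new AtomicLong());
    }

    // called once per coordinated read/write with the level the client requested
    public void record(ConsistencyLevel cl)
    {
        counts.get(cl).incrementAndGet();
    }

    public long count(ConsistencyLevel cl)
    {
        return counts.get(cl).get();
    }
}
{code}

In a real patch these counters would presumably be registered with the existing 
metrics machinery and exposed over JMX rather than kept in a standalone class.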



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7562) Java source code for UDFs

2014-08-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099042#comment-14099042
 ] 

Tyler Hobbs commented on CASSANDRA-7562:


Just a couple of whitespace nits:
* In UFMetadata, keep whitespace around operators ({{argumentNames==null}})
* In UDFunction.javaIdentifierPart(), keep whitespace around the operator in 
the for-loop ({{i

> Java source code for UDFs
> -
>
> Key: CASSANDRA-7562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7562
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7562.txt
>
>
> Purpose of this ticket to add support for Java source code for user defined 
> functions (CASSANDRA-7395)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/6] git commit: Handle CQL row marker in SSTableImport

2014-08-15 Thread mishail
Handle CQL row marker in SSTableImport

patch by Mikhail Stepura; reviewed by Tyler Hobbs for CASSANDRA-7477


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8137fce5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8137fce5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8137fce5

Branch: refs/heads/cassandra-2.1.0
Commit: 8137fce529fe56db571d288e1e179aff905368de
Parents: 23233b3
Author: Mikhail Stepura 
Authored: Thu Aug 14 16:31:17 2014 -0700
Committer: Mikhail Stepura 
Committed: Fri Aug 15 13:11:47 2014 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/tools/SSTableImport.java   | 12 +++-
 test/resources/CQLTable.json| 10 
 .../cassandra/tools/SSTableImportTest.java  | 62 +++-
 4 files changed, 81 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5b5283f..8714265 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc6
+ * json2sstable couldn't import JSON for CQL table (CASSANDRA-7477)
  * Invalidate all caches on table drop (CASSANDRA-7561)
  * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
  * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/src/java/org/apache/cassandra/tools/SSTableImport.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableImport.java 
b/src/java/org/apache/cassandra/tools/SSTableImport.java
index 4e7bf06..6e1415f 100644
--- a/src/java/org/apache/cassandra/tools/SSTableImport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableImport.java
@@ -142,7 +142,11 @@ public class SSTableImport
 }
 else
 {
-value = stringAsType((String) fields.get(1), 
meta.getValueValidator(comparator.cellFromByteBuffer(name)));
+assert meta.isCQL3Table() || name.hasRemaining() : "Cell 
name should not be empty";
+value = stringAsType((String) fields.get(1), 
+meta.getValueValidator(name.hasRemaining() 
+? comparator.cellFromByteBuffer(name)
+: 
meta.comparator.rowMarker(Composites.EMPTY)));
 }
 }
 }
@@ -215,8 +219,10 @@ public class SSTableImport
 cfamily.addAtom(new RangeTombstone(start, end, col.timestamp, 
col.localExpirationTime));
 continue;
 }
-
-CellName cname = cfm.comparator.cellFromByteBuffer(col.getName());
+
+assert cfm.isCQL3Table() || col.getName().hasRemaining() : "Cell 
name should not be empty";
+CellName cname = col.getName().hasRemaining() ? 
cfm.comparator.cellFromByteBuffer(col.getName()) 
+: cfm.comparator.rowMarker(Composites.EMPTY);
 
 if (col.isExpiring())
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/test/resources/CQLTable.json
--
diff --git a/test/resources/CQLTable.json b/test/resources/CQLTable.json
new file mode 100644
index 000..af15f70
--- /dev/null
+++ b/test/resources/CQLTable.json
@@ -0,0 +1,10 @@
+[
+{"key": "0001",
+ "cells": [["","",1408056347831000],
+   ["v1","NY",1408056347831000],
+   ["v2","1980",1408056347831000]]},
+{"key": "0002",
+ "cells": [["","",1408056347812000],
+   ["v1","CA",1408056347812000],
+   ["v2","2014",1408056347812000]]}
+]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
--
diff --git a/test/unit/org/apache/cassandra/tools/SSTableImportTest.java 
b/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
index 2fdeaf4..308a184 100644
--- a/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
+++ b/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
@@ -18,7 +18,11 @@
 */
 package org.apache.cassandra.tools;
 
+import static org.hamcrest.CoreMatchers.is;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
+import static org.junit.matchers.JUnitMatchers.hasItem;
+
 import static org.apache.cassandra.io.sstable.SSTableUtils.tempSSTableFile;
 import static org.apache.cassandra.utils.ByteBufferUtil.hexToBytes;
 
@@ -27,16 +31,21 @@ import java.io.IOExcepti

[3/6] git commit: Handle CQL row marker in SSTableImport

2014-08-15 Thread mishail
Handle CQL row marker in SSTableImport

patch by Mikhail Stepura; reviewed by Tyler Hobbs for CASSANDRA-7477


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8137fce5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8137fce5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8137fce5

Branch: refs/heads/trunk
Commit: 8137fce529fe56db571d288e1e179aff905368de
Parents: 23233b3
Author: Mikhail Stepura 
Authored: Thu Aug 14 16:31:17 2014 -0700
Committer: Mikhail Stepura 
Committed: Fri Aug 15 13:11:47 2014 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/tools/SSTableImport.java   | 12 +++-
 test/resources/CQLTable.json| 10 
 .../cassandra/tools/SSTableImportTest.java  | 62 +++-
 4 files changed, 81 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5b5283f..8714265 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-rc6
+ * json2sstable couldn't import JSON for CQL table (CASSANDRA-7477)
  * Invalidate all caches on table drop (CASSANDRA-7561)
  * Skip strict endpoint selection for ranges if RF == nodes (CASSANRA-7765)
  * Fix Thrift range filtering without 2ary index lookups (CASSANDRA-7741)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/src/java/org/apache/cassandra/tools/SSTableImport.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableImport.java 
b/src/java/org/apache/cassandra/tools/SSTableImport.java
index 4e7bf06..6e1415f 100644
--- a/src/java/org/apache/cassandra/tools/SSTableImport.java
+++ b/src/java/org/apache/cassandra/tools/SSTableImport.java
@@ -142,7 +142,11 @@ public class SSTableImport
 }
 else
 {
-value = stringAsType((String) fields.get(1), 
meta.getValueValidator(comparator.cellFromByteBuffer(name)));
+assert meta.isCQL3Table() || name.hasRemaining() : "Cell 
name should not be empty";
+value = stringAsType((String) fields.get(1), 
+meta.getValueValidator(name.hasRemaining() 
+? comparator.cellFromByteBuffer(name)
+: 
meta.comparator.rowMarker(Composites.EMPTY)));
 }
 }
 }
@@ -215,8 +219,10 @@ public class SSTableImport
 cfamily.addAtom(new RangeTombstone(start, end, col.timestamp, 
col.localExpirationTime));
 continue;
 }
-
-CellName cname = cfm.comparator.cellFromByteBuffer(col.getName());
+
+assert cfm.isCQL3Table() || col.getName().hasRemaining() : "Cell 
name should not be empty";
+CellName cname = col.getName().hasRemaining() ? 
cfm.comparator.cellFromByteBuffer(col.getName()) 
+: cfm.comparator.rowMarker(Composites.EMPTY);
 
 if (col.isExpiring())
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/test/resources/CQLTable.json
--
diff --git a/test/resources/CQLTable.json b/test/resources/CQLTable.json
new file mode 100644
index 000..af15f70
--- /dev/null
+++ b/test/resources/CQLTable.json
@@ -0,0 +1,10 @@
+[
+{"key": "0001",
+ "cells": [["","",1408056347831000],
+   ["v1","NY",1408056347831000],
+   ["v2","1980",1408056347831000]]},
+{"key": "0002",
+ "cells": [["","",1408056347812000],
+   ["v1","CA",1408056347812000],
+   ["v2","2014",1408056347812000]]}
+]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8137fce5/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
--
diff --git a/test/unit/org/apache/cassandra/tools/SSTableImportTest.java 
b/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
index 2fdeaf4..308a184 100644
--- a/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
+++ b/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
@@ -18,7 +18,11 @@
 */
 package org.apache.cassandra.tools;
 
+import static org.hamcrest.CoreMatchers.is;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
+import static org.junit.matchers.JUnitMatchers.hasItem;
+
 import static org.apache.cassandra.io.sstable.SSTableUtils.tempSSTableFile;
 import static org.apache.cassandra.utils.ByteBufferUtil.hexToBytes;
 
@@ -27,16 +31,21 @@ import java.io.IOException;
 impor

[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-15 Thread mishail
Merge branch 'cassandra-2.1' into trunk

Conflicts:
test/unit/org/apache/cassandra/tools/SSTableImportTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0635da3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0635da3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0635da3

Branch: refs/heads/trunk
Commit: f0635da39ddfb68c856db58b698855b9f44e698e
Parents: 5233948 141b939
Author: Mikhail Stepura 
Authored: Fri Aug 15 13:26:40 2014 -0700
Committer: Mikhail Stepura 
Committed: Fri Aug 15 13:26:40 2014 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/tools/SSTableImport.java   | 12 +++-
 test/resources/CQLTable.json| 10 +++
 .../cassandra/tools/SSTableImportTest.java  | 72 ++--
 4 files changed, 87 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0635da3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0635da3/src/java/org/apache/cassandra/tools/SSTableImport.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0635da3/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
--
diff --cc test/unit/org/apache/cassandra/tools/SSTableImportTest.java
index 5c4318a,38e5914..01becfe
--- a/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
+++ b/test/unit/org/apache/cassandra/tools/SSTableImportTest.java
@@@ -18,58 -18,39 +18,71 @@@
  */
  package org.apache.cassandra.tools;
  
 -import static org.hamcrest.CoreMatchers.is;
+ import static org.junit.Assert.assertEquals;
++import static org.hamcrest.CoreMatchers.is;
+ import static org.junit.Assert.assertThat;
+ import static org.junit.matchers.JUnitMatchers.hasItem;
+ 
+ import static org.apache.cassandra.io.sstable.SSTableUtils.tempSSTableFile;
+ import static org.apache.cassandra.utils.ByteBufferUtil.hexToBytes;
+ 
  import java.io.File;
  import java.io.IOException;
  import java.net.URI;
  import java.net.URISyntaxException;
  
 +import org.junit.BeforeClass;
+ import org.hamcrest.Description;
+ import org.hamcrest.Matcher;
  import org.junit.Test;
+ import org.junit.internal.matchers.TypeSafeMatcher;
  
  import org.apache.cassandra.SchemaLoader;
  import org.apache.cassandra.Util;
++import org.apache.cassandra.db.*;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.KSMetaData;
 +import org.apache.cassandra.db.ArrayBackedSortedColumns;
 +import org.apache.cassandra.db.BufferDeletedCell;
 +import org.apache.cassandra.db.Cell;
 +import org.apache.cassandra.db.ColumnFamily;
 +import org.apache.cassandra.db.CounterCell;
 +import org.apache.cassandra.db.DeletionInfo;
 +import org.apache.cassandra.db.ExpiringCell;
+ import org.apache.cassandra.cql3.QueryProcessor;
+ import org.apache.cassandra.cql3.UntypedResultSet;
+ import org.apache.cassandra.cql3.UntypedResultSet.Row;
 -import org.apache.cassandra.db.*;
  import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
  import org.apache.cassandra.db.filter.QueryFilter;
  import org.apache.cassandra.db.marshal.AsciiType;
  import org.apache.cassandra.db.marshal.BytesType;
 +import org.apache.cassandra.db.marshal.CounterColumnType;
 +import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.SSTableReader;
 +import org.apache.cassandra.locator.SimpleStrategy;
 +import org.apache.thrift.TException;
  
- import static org.apache.cassandra.io.sstable.SSTableUtils.tempSSTableFile;
- import static org.apache.cassandra.utils.ByteBufferUtil.hexToBytes;
- import static org.junit.Assert.assertEquals;
- 
 -public class SSTableImportTest extends SchemaLoader
 +public class SSTableImportTest
  {
 +public static final String KEYSPACE1 = "SSTableImportTest";
 +public static final String CF_STANDARD = "Standard1";
 +public static final String CF_COUNTER = "Counter1";
++public static final String CQL_TABLE = "table1";
 +
 +@BeforeClass
 +public static void defineSchema() throws ConfigurationException, 
IOException, TException
 +{
 +SchemaLoader.prepareServer();
 +SchemaLoader.createKeyspace(KEYSPACE1,
 +SimpleStrategy.class,
 +KSMetaData.optsWithRF(1),
 +SchemaLoader.standardCFMD(KEYSPACE1, 
CF_STANDARD),
 +CFMetaData.denseCFMetaData(KEYSPACE1, 
CF_COUNTER, BytesType.instance).defaultValidator(Count
