[jira] [Updated] (CASSANDRA-11463) dtest failure in compaction_test.TestCompaction_with_LeveledCompactionStrategy.bloomfilter_size_test

2016-05-02 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11463:

Reviewer: Paulo Motta

> dtest failure in 
> compaction_test.TestCompaction_with_LeveledCompactionStrategy.bloomfilter_size_test
> 
>
> Key: CASSANDRA-11463
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11463
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Marcus Eriksson
>  Labels: dtest
> Fix For: 2.1.x
>
>
> The final assertion {{self.assertGreaterEqual(bfSize, min_bf_size)}} is 
> failing with {{44728 not greater than or equal to 5}} on 2.1, pretty 
> consistently.
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/439/testReport/compaction_test/TestCompaction_with_LeveledCompactionStrategy/bloomfilter_size_test
> Failed on CassCI build cassandra-2.1_dtest #439



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair

2016-05-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268120#comment-15268120
 ] 

Stefania commented on CASSANDRA-9766:
-

It's looking much better without recycling {{BTreeSearchIterator}}:

{code}
grep ERROR 
build/test/logs/TEST-org.apache.cassandra.streaming.LongStreamingTest.log
ERROR [main] 2016-05-03 10:37:04,004 SLF4J: stderr
ERROR [main] 2016-05-03 10:37:34,737 Writer finished after 25 seconds
ERROR [main] 2016-05-03 10:37:34,738 File : 
/tmp/1462243029050-0/cql_keyspace/table1/ma-1-big-Data.db
ERROR [main] 2016-05-03 10:37:55,165 Finished Streaming in 20.41 seconds: 23.52 
Mb/sec
ERROR [main] 2016-05-03 10:38:15,054 Finished Streaming in 19.89 seconds: 24.14 
Mb/sec
ERROR [main] 2016-05-03 10:38:56,983 Finished Compacting in 41.93 seconds: 
23.09 Mb/sec
{code}

I would suggest leaving {{BTreeSearchIterator}} not recycled. I think it is 
quite dangerous to recycle this iterator; see for example 
[here|https://github.com/apache/cassandra/compare/trunk...tjake:faster-streaming#diff-81fd7ce7915c147ea84590e25f77ca47R361].
 I think we would extend the scope and risk of this patch significantly for 
very little gain, but feel free to prove me wrong if you want to experiment 
with alternative recycling options. 

Regarding using our own {{FastThreadLocal}} vs. keeping the dependency on 
Netty, I'm really not sure. On one hand I don't want to cause additional work 
for no good reason, and I don't particularly like duplicating code; but on the 
other hand the Netty internal classes, e.g. {{InternalThreadLocalMap}}, could 
change at any time, so we could see performance regressions simply by upgrading 
Netty. I'm happy either way.

Regarding ref. counting, you're quite right that we don't need it: if an object 
is not recycled it will be GC-ed.

A few more points:

* Why do we need to allocate cells lazily in {{BTreeRow.Builder}}? Do we really 
create many of these without ever adding cells to them?

* 
[{{dob.recycle()}}|https://github.com/apache/cassandra/compare/trunk...tjake:faster-streaming#diff-c06541855022eca5fd794dd24ff02f89R182]
 should be in a finally since {{serializeRowBody()}} can throw.

* I don't understand [this 
line|https://github.com/apache/cassandra/compare/trunk...tjake:faster-streaming#diff-ee37e803d70421ce823d42e02620d589R207]:
 when the object is recycled, the buffer should be null (from close()) and 
indexSamplesSerializedSize should be zero (from create()), so why do we need to 
set {{indexOffsets\[columnIndexCount\] = 0}} explicitly?

* {{ColumnIndex.create()}} is only called in BTW.append. It would be nice if we 
could somehow attach this object somewhere rather than constantly pushing it to 
and popping it from the recycler stack. We could just store it in BTW if we 
could be sure that BTW.append is not called by multiple threads, or maybe keep 
a queue of these objects in BTW?
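The {{dob.recycle()}} point above is about exception safety: a pooled object must be returned in a {{finally}} block or it leaks from the pool whenever serialization throws. The actual patch is Java; this is only a minimal, language-neutral sketch of the pattern in Python, with all names ({{BufferPool}}, {{serialize_row}}, {{serialize_body}}) hypothetical:

```python
class BufferPool:
    """Toy object pool: hands out bytearrays and takes them back."""
    def __init__(self):
        self.free = []

    def acquire(self):
        # Reuse a recycled buffer if one is available, else allocate.
        return self.free.pop() if self.free else bytearray(64)

    def recycle(self, buf):
        self.free.append(buf)

def serialize_row(pool, row, serialize_body):
    buf = pool.acquire()
    try:
        # serialize_body may raise; without the finally below, the buffer
        # would leak from the pool on every failure.
        serialize_body(row, buf)
        return bytes(buf[:len(row)])
    finally:
        pool.recycle(buf)
```

The same shape applies in Java: {{acquire()}} before the {{try}}, {{recycle()}} in the {{finally}}.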

> Bootstrap outgoing streaming speeds are much slower than during repair
> --
>
> Key: CASSANDRA-9766
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9766
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.2. more details in the pdf attached 
>Reporter: Alexei K
>Assignee: T Jake Luciani
>  Labels: performance
> Fix For: 3.x
>
> Attachments: problem.pdf
>
>
> I have a cluster in the Amazon cloud; it's described in detail in the attachment. 
> What I've noticed is that during bootstrap we never go above 12 MB/sec 
> transmission speeds, and those speeds flat-line almost as if we're hitting 
> some sort of limit (this remains true for the other tests that I've run); 
> however, during repair we see much higher, variable sending rates. I've 
> provided network charts in the attachment as well. Is there an explanation 
> for this? Is something wrong with my configuration, or is it a possible bug?





cassandra git commit: minor javadoc fixes

2016-05-02 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk df5f22843 -> fe7eee006


minor javadoc fixes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe7eee00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe7eee00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe7eee00

Branch: refs/heads/trunk
Commit: fe7eee00661272bb29608e8e2cf5dea6a7511cef
Parents: df5f228
Author: Dave Brosius 
Authored: Mon May 2 23:06:04 2016 -0400
Committer: Dave Brosius 
Committed: Mon May 2 23:06:04 2016 -0400

--
 .../apache/cassandra/auth/jmx/AuthenticationProxy.java  |  2 +-
 .../org/apache/cassandra/concurrent/ExecutorLocal.java  |  2 +-
 .../org/apache/cassandra/db/marshal/AbstractType.java   |  2 +-
 .../org/apache/cassandra/db/rows/RowDiffListener.java   |  2 +-
 src/java/org/apache/cassandra/db/view/ViewManager.java  |  4 ++--
 src/java/org/apache/cassandra/gms/Gossiper.java |  2 +-
 .../cassandra/io/sstable/format/SSTableReader.java  |  2 +-
 .../io/util/RewindableDataInputStreamPlus.java  | 12 ++--
 .../apache/cassandra/service/StorageServiceMBean.java   |  6 +++---
 9 files changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe7eee00/src/java/org/apache/cassandra/auth/jmx/AuthenticationProxy.java
--
diff --git a/src/java/org/apache/cassandra/auth/jmx/AuthenticationProxy.java 
b/src/java/org/apache/cassandra/auth/jmx/AuthenticationProxy.java
index 67c9d8c..1fe6b63 100644
--- a/src/java/org/apache/cassandra/auth/jmx/AuthenticationProxy.java
+++ b/src/java/org/apache/cassandra/auth/jmx/AuthenticationProxy.java
@@ -76,7 +76,7 @@ public final class AuthenticationProxy implements 
JMXAuthenticator
  *
  * @param credentials optionally these credentials may be supplied by the 
JMX user.
  *Out of the box, the JDK's 
{@code}RMIServerImpl{@code} is capable
- *of supplying a two element String[], containing 
username & password.
+ *of supplying a two element String[], containing 
username and password.
  *If present, these credentials will be made available 
to configured
  *{@code}LoginModule{@code}s via 
{@code}JMXCallbackHandler{@code}.
  *

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe7eee00/src/java/org/apache/cassandra/concurrent/ExecutorLocal.java
--
diff --git a/src/java/org/apache/cassandra/concurrent/ExecutorLocal.java 
b/src/java/org/apache/cassandra/concurrent/ExecutorLocal.java
index 47826f3..367dc7c 100644
--- a/src/java/org/apache/cassandra/concurrent/ExecutorLocal.java
+++ b/src/java/org/apache/cassandra/concurrent/ExecutorLocal.java
@@ -27,7 +27,7 @@ public interface ExecutorLocal<T>
 ExecutorLocal[] all = { Tracing.instance, ClientWarn.instance };
 
 /**
- * This is called when scheduling the task, and also before calling {@link 
ExecutorLocal#set(T)} when running on a
+ * This is called when scheduling the task, and also before calling {@link 
#set(Object)} when running on a
  * executor thread.
  *
  * @return The thread-local value that we want to copy across executor 
boundaries; may be null if not set.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe7eee00/src/java/org/apache/cassandra/db/marshal/AbstractType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/AbstractType.java 
b/src/java/org/apache/cassandra/db/marshal/AbstractType.java
index 7a67433..fc4ec3b 100644
--- a/src/java/org/apache/cassandra/db/marshal/AbstractType.java
+++ b/src/java/org/apache/cassandra/db/marshal/AbstractType.java
@@ -336,7 +336,7 @@ public abstract class AbstractType<T> implements Comparator<ByteBuffer>
  * Returns an AbstractType instance that is equivalent to this one, but 
with all nested UDTs explicitly frozen and
  * all collections in UDTs explicitly frozen.
  *
- * This is only necessary for 2.x -> 3.x schema migrations, and can be 
removed in Cassandra 4.0.
+ * This is only necessary for {@code 2.x -> 3.x} schema migrations, and 
can be removed in Cassandra 4.0.
  *
  * See CASSANDRA-11609
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe7eee00/src/java/org/apache/cassandra/db/rows/RowDiffListener.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/RowDiffListener.java 
b/src/java/org/apache/cassandra/db/rows/RowDiffListener.java
index f209bfc..8586a97 100644

[jira] [Updated] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11701:
-
Tester: Russ Hatch
Status: Awaiting Feedback  (was: In Progress)

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404





[jira] [Comment Edited] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267921#comment-15267921
 ] 

Stefania edited comment on CASSANDRA-11701 at 5/3/16 2:00 AM:
--

It seems there was an exception in {{write_rows_to_csv}} that caused 
{{init_feeding_thread}} to be called simultaneously by two threads. This method 
should be protected by a lock; here is the patch:

||2.1||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11701-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11701-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11701-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11701]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-dtest/]|

The 2.1 patch merges upwards without conflicts.

After protecting this method, we should be able to see what the initial 
exception was, [~rhatch] are you able to run this test a few times on Windows 
with the trunk patch applied?


was (Author: stefania):
It seems there was an exception in {{write_rows_to_csv}} that caused 
{{init_feeding_thread}} to be called simultaneously by two threads. This method 
should be protected by a lock; here is the patch:

||2.1||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11701-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11701-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11701-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11701]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-dtest/]|

The 2.1 patch merges upwards without conflicts.

After protecting this method, we should be able to see what the initial 
exception was, [~rhatch] are you able to run this test a few times on Windows 
with the patch applied?

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404





[jira] [Commented] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267921#comment-15267921
 ] 

Stefania commented on CASSANDRA-11701:
--

It seems there was an exception in {{write_rows_to_csv}} that caused 
{{init_feeding_thread}} to be called simultaneously by two threads. This method 
should be protected by a lock; here is the patch:

||2.1||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11701-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11701-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11701-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11701]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11701-dtest/]|

The 2.1 patch merges upwards without conflicts.

After protecting this method, we should be able to see what the initial 
exception was, [~rhatch] are you able to run this test a few times on Windows 
with the patch applied?
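For illustration only: the race is two threads entering one-time initialization at once, and the fix is to guard it with a lock. A minimal Python sketch of that pattern (the class and method names here are hypothetical, not the actual copyutil.py code):

```python
import threading

class ChildProcessChannel:
    """Sketch: init_feeding_thread() must not run twice even when two
    threads call send() concurrently."""

    def __init__(self):
        self._lock = threading.Lock()
        self._feeding_thread = None

    def init_feeding_thread(self):
        # The lock makes the check-then-create step atomic: only the
        # first caller creates the thread, later callers see it non-None.
        with self._lock:
            if self._feeding_thread is None:
                t = threading.Thread(target=lambda: None)
                t.start()
                self._feeding_thread = t

    def send(self, msg):
        self.init_feeding_thread()
        return msg
```

Without the lock, two threads could both observe {{_feeding_thread}} as uninitialized and both attempt to create it, which is exactly the "Feeding thread already initialized" failure mode.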

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404





[jira] [Commented] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267866#comment-15267866
 ] 

Stefania commented on CASSANDRA-11701:
--

Looks like it is a legit problem, not a test issue. The COPY TO failed because 
of this exception:

{code}
)  Traceback (most recent call last):(EE
)File "C:\tools\python2\lib\multiprocessing\process.py", line 258, in 
_bootstrap(EE
)  self.run()(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1449, in run(EE
)  self.inner_run()(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1469, in inner_run(EE
)  self.start_request(token_range, info)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1501, in start_request(EE
)  self.attach_callbacks(token_range, future, session)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1578, in attach_callbacks(EE
)  future.add_callbacks(callback=result_callback, errback=err_callback)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\lib\cassandra-driver-internal-only-3.0.0-6af642d.zip\cassandra-driver-3.0.0-6af642d\cassandra\cluster.py",
 line 3246, in add_callbacks(EE
)  self.add_callback(callback, *callback_args, **(callback_kwargs or {}))(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\lib\cassandra-driver-internal-only-3.0.0-6af642d.zip\cassandra-driver-3.0.0-6af642d\cassandra\cluster.py",
 line 3202, in add_callback(EE
)  fn(self._final_result, *args, **kwargs)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1570, in result_callback(EE
)  self.write_rows_to_csv(token_range, rows, cql_types)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1596, in write_rows_to_csv(EE
)  self.report_error(e, token_range)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1486, in report_error(EE
)  self.send((token_range, Exception(msg)))(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 1489, in send(EE
)  self.outmsg.send(response)(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 124, in send(EE
)  self.init_feeding_thread()(EE
)File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\..\pylib\cqlshlib\copyutil.py",
 line 101, in init_feeding_thread(EE
)  raise RuntimeError("Feeding thread already initialized")(EE
)  RuntimeError: Feeding thread already initialized(EE
)  :2:Child process 7496 died with exit code 1(EE
)  :2:Child process 5236 died with exit code 1(EE
)  :2:Exported 3 ranges out of 33 total ranges, some records might be 
missing(EE
)  
{code}

It may be a pickling problem specific to Windows; I will take a look.

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404





[jira] [Assigned] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-11701:


Assignee: Stefania  (was: DS Test Eng)

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404





[jira] [Created] (CASSANDRA-11702) dtest failure in pushed_notifications_test.TestPushedNotifications.restart_node_localhost_test

2016-05-02 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11702:
--

 Summary: dtest failure in 
pushed_notifications_test.TestPushedNotifications.restart_node_localhost_test
 Key: CASSANDRA-11702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11702
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


From the offheap test job: a failure that looks like it could be a routine 
timeout, but I think I saw the exact same timeout for this test on another job.

{noformat}
('Unable to connect to any servers', {})
{noformat}
http://cassci.datastax.com/job/trunk_offheap_dtest/178/testReport/pushed_notifications_test/TestPushedNotifications/restart_node_localhost_test

Failed on CassCI build trunk_offheap_dtest #178





[jira] [Created] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-05-02 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11701:
--

 Summary: [windows] dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
 Key: CASSANDRA-11701
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


looks to be an assertion problem, so could be test or cassandra related:

e.g.:
{noformat}
1 != 331
{noformat}

http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows

Failed on CassCI build trunk_dtest_win32 #404





[jira] [Created] (CASSANDRA-11700) [windows] dtest failure in replace_address_test.TestReplaceAddress.unsafe_replace_test

2016-05-02 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11700:
--

 Summary: [windows] dtest failure in 
replace_address_test.TestReplaceAddress.unsafe_replace_test
 Key: CASSANDRA-11700
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11700
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


this test has only passed once, and failed several times:
{noformat}
[Error 5] Access is denied: 
'd:\\temp\\dtest-lq2fge\\test\\node3\\data0\\system\\local-7ad54392bcdd35a684174e047860b377\\ma-13-big-Data.db'
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: d:\temp\dtest-lq2fge
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'start_rpc': 'true'}
dtest: DEBUG: Starting cluster with 3 nodes.
dtest: DEBUG: Inserting Data...
dtest: DEBUG: Stopping node 3.
{noformat}

http://cassci.datastax.com/job/trunk_dtest_win32/402/testReport/replace_address_test/TestReplaceAddress/unsafe_replace_test

Failed on CassCI build trunk_dtest_win32 #402





[jira] [Created] (CASSANDRA-11699) dtest failure in upgrade_crc_check_chance_test.TestCrcCheckChanceUpgrade.crc_check_chance_upgrade_test

2016-05-02 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11699:
--

 Summary: dtest failure in 
upgrade_crc_check_chance_test.TestCrcCheckChanceUpgrade.crc_check_chance_upgrade_test
 Key: CASSANDRA-11699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11699
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


Single failure; unsure whether it is a test or Cassandra issue:
{noformat}
ERROR [BatchlogTasks:1] 2016-05-02 20:23:08,488 CassandraDaemon.java:195 - 
Exception in thread Thread[BatchlogTasks:1,5,main]
java.lang.NoClassDefFoundError: org/apache/cassandra/utils/JVMStabilityInspector
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:122)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
~[na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.lang.ClassNotFoundException: 
org.apache.cassandra.utils.JVMStabilityInspector
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
~[na:1.8.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_45]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) 
~[na:1.8.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_45]
... 8 common frames omitted
ERROR [BatchlogTasks:1] 2016-05-02 20:23:08,490 CassandraDaemon.java:195 - 
Exception in thread Thread[BatchlogTasks:1,5,main]
java.lang.NoClassDefFoundError: org/apache/cassandra/utils/JVMStabilityInspector
at 
org.apache.cassandra.service.CassandraDaemon$2.uncaughtException(CassandraDaemon.java:199)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.handleOrLog(DebuggableThreadPoolExecutor.java:244)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.logExceptionsAfterExecute(DebuggableThreadPoolExecutor.java:227)
 ~[main/:na]
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.afterExecute(DebuggableScheduledThreadPoolExecutor.java:89)
 ~[main/:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
Caused by: java.lang.ClassNotFoundException: 
org.apache.cassandra.utils.JVMStabilityInspector
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
~[na:1.8.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_45]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) 
~[na:1.8.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_45]
... 7 common frames omitted
ERROR [OptionalTasks:1] 2016-05-02 20:23:08,577 CassandraDaemon.java:195 - 
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.NoClassDefFoundError: org/apache/cassandra/utils/JVMStabilityInspector
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:122)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
~[na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[na:1.8.0_45]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
ERROR [OptionalTasks:1] 2016-05-02 20:23:08,588 CassandraDaemon.java:195 - 
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.NoClassDefFoundError: org/apache/cassandra/utils/JVMStabilityInspector
at 

[jira] [Created] (CASSANDRA-11698) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-05-02 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11698:
--

 Summary: dtest failure in 
materialized_views_test.TestMaterializedViews.clustering_column_test
 Key: CASSANDRA-11698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11698
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: DS Test Eng


Recent failure; this test has also flapped before, a while back.

{noformat}
Expecting 2 users, got 1
{noformat}

http://cassci.datastax.com/job/cassandra-3.0_dtest/688/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test

Failed on CassCI build cassandra-3.0_dtest #688





[jira] [Commented] (CASSANDRA-10962) Cassandra should not create snapshot at restart for compactions_in_progress

2016-05-02 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267586#comment-15267586
 ] 

Joel Knighton commented on CASSANDRA-10962:
---

CI looks good, and the logic in the patch looks good to me. It seems that the 
comment for {{truncateBlocking}} has the parameter descriptions switched: false 
means to skip the snapshot, and true means to take it. Another nit: the commit 
message would be more accurate as "Avoid creating..." as opposed to "Avoid 
re-creating...".

> Cassandra should not create snapshot at restart for compactions_in_progress
> ---
>
> Key: CASSANDRA-10962
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10962
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 14.04.3 LTS
>Reporter: FACORAT
>Assignee: Alex Petrov
>Priority: Minor
> Fix For: 2.2.x
>
>
> If auto_snapshot is set to true in cassandra.yaml, each time you restart 
> Cassandra a snapshot is created for system.compactions_in_progress, as the 
> table is truncated at Cassandra start.
> However, as the data in this table is temporary, Cassandra should not create 
> a snapshot for this table (or maybe even for system.* tables). This would be 
> consistent with the fact that "nodetool listsnapshots" doesn't even list this 
> table.
> Example:
> {code}
> $ nodetool listsnapshots | grep compactions
> $ ls -lh 
> system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/snapshots/
> total 16K
> drwxr-xr-x 2 cassandra cassandra 4.0K Nov 30 13:12 
> 1448885530280-compactions_in_progress
> drwxr-xr-x 2 cassandra cassandra 4.0K Dec  7 15:36 
> 1449498977181-compactions_in_progress
> drwxr-xr-x 2 cassandra cassandra 4.0K Dec 14 18:20 
> 1450113621506-compactions_in_progress
> drwxr-xr-x 2 cassandra cassandra 4.0K Jan  4 12:53 
> 1451908396364-compactions_in_progress
> {code}





[jira] [Commented] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair

2016-05-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267533#comment-15267533
 ] 

Benedict commented on CASSANDRA-9766:
-

It certainly was the intention that FastThreadLocal be used by C* back when 
I wrote it, although my intention had been to move it in-tree so we could 
maintain it ourselves, as our goals are a bit different from Netty's.  The 
Recycler is a good example - it's designed to permit low-overhead but 
_concurrent_ recycling, with some guarantees about not behaving badly in the 
face of user misuse.  This is probably overkill for our use case, but it is 
somewhat general purpose and those guarantees might well be nice.  I have no 
strong opinion.

On the topic of BTreeSearchIterator: in 3.0, for most users the 
BTreeSearchIterator is likely to be a single very small object, and even for 
large partitions there will still be a very small number of them. I would be 
surprised if recycling such small objects can easily be made a win, but if they 
really have such a large heap impact, a truly thread-local queue with a small 
maximum number of elements per thread would be the only way to get close to 
non-recycled performance.

Reusing the Object[] for our builders is definitely a good thing to do though.

Disclaimer: I have not read the patch.
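For illustration, a minimal sketch of what such a bounded, truly thread-local pool could look like (illustrative names and body, not a patch against Cassandra): because there is no cross-thread handoff, get/recycle are plain deque operations on the current thread's pool.

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Sketch of a per-thread recycler with a small cap: objects recycled on a thread
// are only ever reused by that same thread, so no synchronization is needed.
final class LocalPool<T>
{
    private static final int MAX_PER_THREAD = 8; // "small maximum number of elements"
    private final Supplier<T> factory;
    private final ThreadLocal<ArrayDeque<T>> pool = ThreadLocal.withInitial(ArrayDeque::new);

    LocalPool(Supplier<T> factory)
    {
        this.factory = factory;
    }

    T get()
    {
        T cached = pool.get().pollFirst();
        return cached != null ? cached : factory.get(); // allocate only on a miss
    }

    void recycle(T obj)
    {
        ArrayDeque<T> q = pool.get();
        if (q.size() < MAX_PER_THREAD) // drop the excess instead of growing without bound
            q.addFirst(obj);
    }
}
```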

> Bootstrap outgoing streaming speeds are much slower than during repair
> --
>
> Key: CASSANDRA-9766
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9766
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.2. more details in the pdf attached 
>Reporter: Alexei K
>Assignee: T Jake Luciani
>  Labels: performance
> Fix For: 3.x
>
> Attachments: problem.pdf
>
>
> I have a cluster in Amazon cloud; it's described in detail in the attachment. 
> What I've noticed is that during bootstrap we never go above 12MB/sec 
> transmission speeds, and those speeds flat line almost like we're hitting 
> some sort of a limit (this remains true for other tests that I've run); 
> however, during repair we see much higher, variable sending rates. I've 
> provided network charts in the attachment as well. Is there an explanation 
> for this? Is something wrong with my configuration, or is it a possible bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2016-05-02 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266676#comment-15266676
 ] 

Joshua McKenzie edited comment on CASSANDRA-8844 at 5/2/16 9:15 PM:


The issue we're discussing here is less about the policy for handling CDC 
failures, and more about that policy impacting both CDC and non-CDC writes 
unless we distinguish, at write time, whether a Mutation contains a CDC-enabled 
CF or not.

If we treat all Mutations equally, we would apply that policy to both CDC and 
non-CDC enabled writes, so CDC space being filled / backpressure would reject 
all writes on the node.

edit: On re-reading my comment, I want to make sure you don't think I'm 
dismissing the "CDC failure policy" portion of the discussion. We don't have 
that in the v1 spec but it should be a relatively easy addition after we get 
the basic framework in.


was (Author: joshuamckenzie):
The issue we're discussing here is less about policy for handling CDC failures, 
and more about that policy impacting both CDC and non-CDC writes unless we 
distinguish whether a Mutation contains CDC-enabled CF in them at write-time or 
not.

If we treat all Mutations equally, we would apply that policy to both CDC and 
non-CDC enabled writes, so CDC space being filled / backpressure would reject 
all writes on the node.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters, 
> would make Cassandra a much more versatile feeder into other systems, and 
> 

[jira] [Updated] (CASSANDRA-11697) Improve Compaction Throughput

2016-05-02 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11697:
---
Component/s: Compaction

> Improve Compaction Throughput
> -
>
> Key: CASSANDRA-11697
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11697
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>
> The goal of this ticket is to improve/understand the bottlenecks during 
> compactions.  At a high level this will involve:
> * A test system for measuring compaction time for different workloads and 
> compaction strategies.
> * Profiling and analysis
> * Make improvements
> * Add throughput regression tests so we can track
> We have a lot of random tickets that relate to this so I'll link them to this 
> ticket 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-05-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267513#comment-15267513
 ] 

T Jake Luciani commented on CASSANDRA-11623:


Related: CASSANDRA-11697

> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom Petracca
>Assignee: Tom Petracca
>Priority: Minor
> Fix For: 3.x
>
> Attachments: compactiontask_profile.png
>
>
> Been doing some performance tuning and profiling of my cassandra cluster and 
> noticed that compaction speeds for my tables that I know to have very short 
> rows were going particularly slowly.  Profiling shows a ton of time being 
> spent in BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
> CompactionTask shows that a majority of time is being spent in lseek (called by 
> getOnDiskFilePointer()), not in read or write.
> Going deeper, it looks like we call getOnDiskFilePointer for each row (sometimes 
> multiple times per row) to see if we've reached our expected sstable 
> size and should start a new writer. This is pretty unnecessary.
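A hedged sketch of the fix direction this implies: cache the bytes-written counters in the writer so "current position" is plain arithmetic rather than an lseek system call per row. Names and the flush threshold are illustrative, not the actual BigTableWriter API.

```java
// Illustrative writer that tracks its own position instead of asking the OS.
final class PositionTrackingWriter
{
    private static final int FLUSH_THRESHOLD = 4096; // pretend buffer size

    private long flushedBytes; // already pushed to the file
    private int bufferedBytes; // still sitting in the in-memory buffer

    void write(byte[] data)
    {
        bufferedBytes += data.length;
        if (bufferedBytes >= FLUSH_THRESHOLD)
        {
            flushedBytes += bufferedBytes; // stand-in for flushing the buffer
            bufferedBytes = 0;
        }
    }

    /** O(1): no lseek, just arithmetic on cached counters. */
    long position()
    {
        return flushedBytes + bufferedBytes;
    }
}
```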



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11697) Improve Compaction Throughput

2016-05-02 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-11697:
--

 Summary: Improve Compaction Throughput
 Key: CASSANDRA-11697
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11697
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani


The goal of this ticket is to improve/understand the bottlenecks during 
compactions.  At a high level this will involve:

* A test system for measuring compaction time for different workloads and 
compaction strategies.
* Profiling and analysis
* Make improvements
* Add throughput regression tests so we can track

We have a lot of random tickets that relate to this so I'll link them to this 
ticket 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11567) Update netty version

2016-05-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267489#comment-15267489
 ] 

T Jake Luciani commented on CASSANDRA-11567:


Thx added in {{df5f22843bc327d086a6d5a85253654e16ee9a8f}}

> Update netty version
> 
>
> Key: CASSANDRA-11567
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11567
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 3.6
>
>
> Mainly for prereq to CASSANDRA-11421. 
> Netty 4.0.34 -> 4.0.36.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix netty lisc version

2016-05-02 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7a7249ac4 -> df5f22843


fix netty lisc version


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df5f2284
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df5f2284
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df5f2284

Branch: refs/heads/trunk
Commit: df5f22843bc327d086a6d5a85253654e16ee9a8f
Parents: 7a7249a
Author: T Jake Luciani 
Authored: Mon May 2 16:52:45 2016 -0400
Committer: T Jake Luciani 
Committed: Mon May 2 16:52:45 2016 -0400

--
 lib/licenses/netty-all-4.0.34.Final.txt | 202 ---
 lib/licenses/netty-all-4.0.36.Final.txt | 202 +++
 2 files changed, 202 insertions(+), 202 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df5f2284/lib/licenses/netty-all-4.0.34.Final.txt
--
diff --git a/lib/licenses/netty-all-4.0.34.Final.txt 
b/lib/licenses/netty-all-4.0.34.Final.txt
deleted file mode 100644
index d645695..000
--- a/lib/licenses/netty-all-4.0.34.Final.txt
+++ /dev/null
@@ -1,202 +0,0 @@
-
- Apache License
-   Version 2.0, January 2004
-http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-  "License" shall mean the terms and conditions for use, reproduction,
-  and distribution as defined by Sections 1 through 9 of this document.
-
-  "Licensor" shall mean the copyright owner or entity authorized by
-  the copyright owner that is granting the License.
-
-  "Legal Entity" shall mean the union of the acting entity and all
-  other entities that control, are controlled by, or are under common
-  control with that entity. For the purposes of this definition,
-  "control" means (i) the power, direct or indirect, to cause the
-  direction or management of such entity, whether by contract or
-  otherwise, or (ii) ownership of fifty percent (50%) or more of the
-  outstanding shares, or (iii) beneficial ownership of such entity.
-
-  "You" (or "Your") shall mean an individual or Legal Entity
-  exercising permissions granted by this License.
-
-  "Source" form shall mean the preferred form for making modifications,
-  including but not limited to software source code, documentation
-  source, and configuration files.
-
-  "Object" form shall mean any form resulting from mechanical
-  transformation or translation of a Source form, including but
-  not limited to compiled object code, generated documentation,
-  and conversions to other media types.
-
-  "Work" shall mean the work of authorship, whether in Source or
-  Object form, made available under the License, as indicated by a
-  copyright notice that is included in or attached to the work
-  (an example is provided in the Appendix below).
-
-  "Derivative Works" shall mean any work, whether in Source or Object
-  form, that is based on (or derived from) the Work and for which the
-  editorial revisions, annotations, elaborations, or other modifications
-  represent, as a whole, an original work of authorship. For the purposes
-  of this License, Derivative Works shall not include works that remain
-  separable from, or merely link (or bind by name) to the interfaces of,
-  the Work and Derivative Works thereof.
-
-  "Contribution" shall mean any work of authorship, including
-  the original version of the Work and any modifications or additions
-  to that Work or Derivative Works thereof, that is intentionally
-  submitted to Licensor for inclusion in the Work by the copyright owner
-  or by an individual or Legal Entity authorized to submit on behalf of
-  the copyright owner. For the purposes of this definition, "submitted"
-  means any form of electronic, verbal, or written communication sent
-  to the Licensor or its representatives, including but not limited to
-  communication on electronic mailing lists, source code control systems,
-  and issue tracking systems that are managed by, or on behalf of, the
-  Licensor for the purpose of discussing and improving the Work, but
-  excluding communication that is conspicuously marked or otherwise
-  designated in writing by the copyright owner as "Not a Contribution."
-
-  "Contributor" shall mean Licensor and any individual or Legal Entity
-  on behalf of whom a Contribution has been received by Licensor and
-  subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject 

[jira] [Commented] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair

2016-05-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267478#comment-15267478
 ] 

T Jake Luciani commented on CASSANDRA-9766:
---

bq. Running LongStreamingTest on my laptop went from 24/25 seconds on trunk 
HEAD to 22/23 seconds with the patch applied.

Hmm, looks like the BTreeSearchIterator recycling is causing too high a CPU hit 
to be worth the GC savings.  I've pushed a quick commit which brings the test 
back down to 19 seconds for me; could you try it out and let me know what you see? 
Without recycling, BTreeSearchIterator accounts for >25% of the heap pressure :(

I think since the object is so hotly used it just causes too much contention on 
the recycler. It's important to avoid too much allocation, but it seems like in 
this case it's gone too far.  Perhaps we can avoid the recycler here and just 
keep a reusable BTreeSearchIterator in the SSTableWriter.
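A minimal sketch of that alternative (hypothetical names, not the actual BTreeSearchIterator/SSTableWriter API): the writer owns one instance and re-targets it for each row, so the hot path allocates nothing and there is no contention on a shared recycler. This is safe only because a writer is used from a single thread.

```java
// Illustrative reusable iterator: rewind() re-initializes state in place.
final class ReusableSearchIterator
{
    private Object[] tree;
    private int pos;

    ReusableSearchIterator rewind(Object[] tree)
    {
        this.tree = tree;
        this.pos = 0;
        return this;
    }

    boolean hasNext() { return pos < tree.length; }
    Object next()     { return tree[pos++]; }
}

final class WriterSketch
{
    // single-threaded writer, so plain-field reuse is safe
    private final ReusableSearchIterator reusable = new ReusableSearchIterator();

    int countCells(Object[] row)
    {
        ReusableSearchIterator it = reusable.rewind(row); // reuse, don't allocate
        int n = 0;
        while (it.hasNext()) { it.next(); n++; }
        return n;
    }
}
```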

bq. I would like to make sure this is justifiable and I would probably want the 
opinion of one more committer with more experience than me
The FastThreadLocal changes were an optimization by [~benedict] from a [while 
back|https://github.com/netty/netty/pull/2504], plus some recycler changes. 
Since we already use Netty and it's built to be used as a general library, it 
seemed like a good place to start. 

bq. do we have a micro benchmark comparing Netty FastThreadLocal and the JDK 
ThreadLocal? 
The Netty FastThreadLocal microbenchmarks show a significant throughput 
increase over the JDK implementation:

{code}
Benchmark                                    Mode  Cnt      Score      Error  Units
FastThreadLocalBenchmark.fastThreadLocal    thrpt   20  55452.027 ±  725.713  ops/s
FastThreadLocalBenchmark.jdkThreadLocalGet  thrpt   20  35481.888 ± 1471.647  ops/s
{code}
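For context on why those numbers differ: FastThreadLocal assigns each instance a fixed array index at construction, so get() is a single array load instead of the hash-map probe a JDK ThreadLocal does. A much-simplified, fixed-capacity sketch of the indexing idea (not Netty's implementation, which also grows the slot array and pairs with FastThreadLocalThread):

```java
// Illustrative: constant-index thread-local lookup backed by a per-thread array.
final class IndexedThreadLocal<T>
{
    private static final int MAX_SLOTS = 64; // simplification: Netty grows this on demand
    private static int nextIndex;
    private static final ThreadLocal<Object[]> SLOTS =
        ThreadLocal.withInitial(() -> new Object[MAX_SLOTS]);

    private final int index;

    IndexedThreadLocal()
    {
        synchronized (IndexedThreadLocal.class)
        {
            index = nextIndex++; // constant slot for this instance, assigned once
        }
    }

    @SuppressWarnings("unchecked")
    T get()
    {
        return (T) SLOTS.get()[index]; // single array load on the hot path
    }

    void set(T value)
    {
        SLOTS.get()[index] = value;
    }
}
```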

bq. Should we perhaps make recyclable objects ref counted, at least for 
debugging purposes when Ref.DEBUG_ENABLED is true?

The reason I didn't do this, and one reason I like the Recycler, is that it's not 
strictly required to recycle every object. If we added ref counting, it would 
force every code path to be properly cleaned up even when we don't care about 
recycling. 



> Bootstrap outgoing streaming speeds are much slower than during repair
> --
>
> Key: CASSANDRA-9766
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9766
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.2. more details in the pdf attached 
>Reporter: Alexei K
>Assignee: T Jake Luciani
>  Labels: performance
> Fix For: 3.x
>
> Attachments: problem.pdf
>
>
> I have a cluster in Amazon cloud; it's described in detail in the attachment. 
> What I've noticed is that during bootstrap we never go above 12MB/sec 
> transmission speeds, and those speeds flat line almost like we're hitting 
> some sort of a limit (this remains true for other tests that I've run); 
> however, during repair we see much higher, variable sending rates. I've 
> provided network charts in the attachment as well. Is there an explanation 
> for this? Is something wrong with my configuration, or is it a possible bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-05-02 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-11602:
---
   Resolution: Fixed
 Reviewer: Sylvain Lebresne
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.0.6
   3.6
   Status: Resolved  (was: Ready to Commit)

Thanks for the review, [~slebresne]. The validation has been committed in 
[6d725af|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=6d725afaef5bc8604cf775ff2498f21530c0288c],
 with pulling that variable out.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
> Fix For: 3.6, 3.0.6
>
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-05-02 Thread carl
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a7249ac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a7249ac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a7249ac

Branch: refs/heads/trunk
Commit: 7a7249ac4437689dc5091a121aa143ff7bfb97e0
Parents: e16d8a7 6d725af
Author: Carl Yeksigian 
Authored: Mon May 2 16:13:19 2016 -0400
Committer: Carl Yeksigian 
Committed: Mon May 2 16:13:19 2016 -0400

--
 CHANGES.txt |  1 +
 .../cql3/statements/CreateViewStatement.java| 15 ++---
 .../org/apache/cassandra/cql3/ViewTest.java | 22 ++--
 3 files changed, 29 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a7249ac/CHANGES.txt
--
diff --cc CHANGES.txt
index 1a3069c,f947568..4ff5b1a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,70 -1,5 +1,71 @@@
 -3.0.6
 +3.6
 + * Enhanced Compaction Logging (CASSANDRA-10805)
 + * Make prepared statement cache size configurable (CASSANDRA-11555)
 + * Integrated JMX authentication and authorization (CASSANDRA-10091)
 + * Add units to stress ouput (CASSANDRA-11352)
 + * Fix PER PARTITION LIMIT for single and multi partitions queries 
(CASSANDRA-11603)
 + * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863)
 + * Clarify ClusteringPrefix hierarchy (CASSANDRA-11213)
 + * Always perform collision check before joining ring (CASSANDRA-10134)
 + * SSTableWriter output discrepancy (CASSANDRA-11646)
 + * Fix potential timeout in NativeTransportService.testConcurrentDestroys 
(CASSANDRA-10756)
 + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
 + * Add support to rebuild from specific range (CASSANDRA-10406)
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in noetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics 

[2/3] cassandra git commit: Disallow creating view with a static column

2016-05-02 Thread carl
Disallow creating view with a static column

patch by Carl Yeksigian; reviewed by Sylvain Lebresne for CASSANDRA-11602


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d725afa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d725afa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d725afa

Branch: refs/heads/trunk
Commit: 6d725afaef5bc8604cf775ff2498f21530c0288c
Parents: a1cc804
Author: Carl Yeksigian 
Authored: Mon May 2 16:12:00 2016 -0400
Committer: Carl Yeksigian 
Committed: Mon May 2 16:12:00 2016 -0400

--
 CHANGES.txt |  1 +
 .../cql3/statements/CreateViewStatement.java| 15 ++---
 .../org/apache/cassandra/cql3/ViewTest.java | 22 ++--
 3 files changed, 29 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 95db7bc..f947568 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Disallow creating view with a static column (CASSANDRA-11602)
  * Reduce the amount of object allocations caused by the getFunctions methods 
(CASSANDRA-11593)
  * Potential error replaying commitlog with smallint/tinyint/date/time types 
(CASSANDRA-11618)
  * Fix queries with filtering on counter columns (CASSANDRA-11629)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
index 5af4887..45231b7 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
@@ -168,10 +168,7 @@ public class CreateViewStatement extends 
SchemaAlteringStatement
 if (cdef == null)
 throw new InvalidRequestException("Unknown column name 
detected in CREATE MATERIALIZED VIEW statement : "+identifier);
 
-if (cdef.isStatic())
-ClientWarn.instance.warn(String.format("Unable to include 
static column '%s' in Materialized View SELECT statement", identifier));
-else
-included.add(identifier);
+included.add(identifier);
 }
 
 Set targetPrimaryKeys = new HashSet<>();
@@ -246,10 +243,14 @@ public class CreateViewStatement extends 
SchemaAlteringStatement
 for (ColumnDefinition def : cfm.allColumns())
 {
 ColumnIdentifier identifier = def.name;
+boolean includeDef = included.isEmpty() || 
included.contains(identifier);
+
+if (includeDef && def.isStatic())
+{
+throw new InvalidRequestException(String.format("Unable to 
include static column '%s' which would be included by Materialized View SELECT 
* statement", identifier));
+}
 
-if ((included.isEmpty() || included.contains(identifier))
-&& !targetClusteringColumns.contains(identifier) && 
!targetPartitionKeys.contains(identifier)
-&& !def.isStatic())
+if (includeDef && !targetClusteringColumns.contains(identifier) && 
!targetPartitionKeys.contains(identifier))
 {
 includedColumns.add(identifier);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/test/unit/org/apache/cassandra/cql3/ViewTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewTest.java 
b/test/unit/org/apache/cassandra/cql3/ViewTest.java
index 830f37f..ac4becb 100644
--- a/test/unit/org/apache/cassandra/cql3/ViewTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewTest.java
@@ -207,13 +207,31 @@ public class ViewTest extends CQLTester
 try
 {
 createView("mv_static", "CREATE MATERIALIZED VIEW %%s AS SELECT * 
FROM %s WHERE sval IS NOT NULL AND k IS NOT NULL AND c IS NOT NULL PRIMARY KEY 
(sval,k,c)");
-Assert.fail("MV on static should fail");
+Assert.fail("Use of static column in a MV primary key should 
fail");
 }
 catch (InvalidQueryException e)
 {
 }
 
-createView("mv_static", "CREATE MATERIALIZED VIEW %s AS SELECT * FROM 
%%s WHERE val IS NOT NULL AND k IS NOT NULL AND c IS NOT NULL PRIMARY KEY 
(val,k,c)");
+try
+{
+createView("mv_static", "CREATE MATERIALIZED VIEW %%s AS SELECT 
val, sval FROM %s WHERE 

[1/3] cassandra git commit: Disallow creating view with a static column

2016-05-02 Thread carl
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a1cc804f5 -> 6d725afae
  refs/heads/trunk e16d8a7a6 -> 7a7249ac4


Disallow creating view with a static column

patch by Carl Yeksigian; reviewed by Sylvain Lebresne for CASSANDRA-11602


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d725afa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d725afa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d725afa

Branch: refs/heads/cassandra-3.0
Commit: 6d725afaef5bc8604cf775ff2498f21530c0288c
Parents: a1cc804
Author: Carl Yeksigian 
Authored: Mon May 2 16:12:00 2016 -0400
Committer: Carl Yeksigian 
Committed: Mon May 2 16:12:00 2016 -0400

--
 CHANGES.txt |  1 +
 .../cql3/statements/CreateViewStatement.java| 15 ++---
 .../org/apache/cassandra/cql3/ViewTest.java | 22 ++--
 3 files changed, 29 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 95db7bc..f947568 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Disallow creating view with a static column (CASSANDRA-11602)
  * Reduce the amount of object allocations caused by the getFunctions methods 
(CASSANDRA-11593)
  * Potential error replaying commitlog with smallint/tinyint/date/time types 
(CASSANDRA-11618)
  * Fix queries with filtering on counter columns (CASSANDRA-11629)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
index 5af4887..45231b7 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
@@ -168,10 +168,7 @@ public class CreateViewStatement extends 
SchemaAlteringStatement
 if (cdef == null)
 throw new InvalidRequestException("Unknown column name 
detected in CREATE MATERIALIZED VIEW statement : "+identifier);
 
-if (cdef.isStatic())
-ClientWarn.instance.warn(String.format("Unable to include 
static column '%s' in Materialized View SELECT statement", identifier));
-else
-included.add(identifier);
+included.add(identifier);
 }
 
 Set targetPrimaryKeys = new HashSet<>();
@@ -246,10 +243,14 @@ public class CreateViewStatement extends 
SchemaAlteringStatement
 for (ColumnDefinition def : cfm.allColumns())
 {
 ColumnIdentifier identifier = def.name;
+boolean includeDef = included.isEmpty() || 
included.contains(identifier);
+
+if (includeDef && def.isStatic())
+{
+throw new InvalidRequestException(String.format("Unable to 
include static column '%s' which would be included by Materialized View SELECT 
* statement", identifier));
+}
 
-if ((included.isEmpty() || included.contains(identifier))
-&& !targetClusteringColumns.contains(identifier) && 
!targetPartitionKeys.contains(identifier)
-&& !def.isStatic())
+if (includeDef && !targetClusteringColumns.contains(identifier) && 
!targetPartitionKeys.contains(identifier))
 {
 includedColumns.add(identifier);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d725afa/test/unit/org/apache/cassandra/cql3/ViewTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewTest.java 
b/test/unit/org/apache/cassandra/cql3/ViewTest.java
index 830f37f..ac4becb 100644
--- a/test/unit/org/apache/cassandra/cql3/ViewTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewTest.java
@@ -207,13 +207,31 @@ public class ViewTest extends CQLTester
 try
 {
 createView("mv_static", "CREATE MATERIALIZED VIEW %%s AS SELECT * 
FROM %s WHERE sval IS NOT NULL AND k IS NOT NULL AND c IS NOT NULL PRIMARY KEY 
(sval,k,c)");
-Assert.fail("MV on static should fail");
+Assert.fail("Use of static column in a MV primary key should 
fail");
 }
 catch (InvalidQueryException e)
 {
 }
 
-createView("mv_static", "CREATE MATERIALIZED VIEW %s AS SELECT * FROM 
%%s WHERE val IS NOT NULL AND k IS NOT NULL AND c IS NOT NULL PRIMARY 

[jira] [Commented] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-05-02 Thread Tom Petracca (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267316#comment-15267316
 ] 

Tom Petracca commented on CASSANDRA-11623:
--

Yeah, so I don't have particular numbers for my cluster. In a small dev 
environment (which I wouldn't consider a realistic at-scale test) I saw a 10% 
improvement in compaction throughput for the specific table being compacted in 
the profiler image originally attached.

> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom Petracca
>Assignee: Tom Petracca
>Priority: Minor
> Fix For: 3.x
>
> Attachments: compactiontask_profile.png
>
>
> Been doing some performance tuning and profiling of my cassandra cluster and 
> noticed that compaction for tables that I know to have very short rows was 
> going particularly slowly.  Profiling shows a ton of time being spent in 
> BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
> CompactionTask shows that a majority of the time is being spent in lseek 
> (called by getOnDiskFilePointer), and not in read or write.
> Going deeper, it looks like we call getOnDiskFilePointer for each row 
> (sometimes multiple times per row) in order to see if we've reached our 
> expected sstable size and should start a new writer.  This is pretty 
> unnecessary.
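The direction the description points at, tracking bytes written in memory instead of issuing an lseek() per row, can be sketched as follows. This is an illustrative Java sketch with invented names, not Cassandra's actual BigTableWriter:

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch (invented class, not Cassandra's BigTableWriter): keep an
// in-memory count of bytes handed to the underlying stream so per-row size
// checks read a field instead of issuing an lseek() system call.
class CountingOutputStream extends OutputStream
{
    private final OutputStream delegate;
    private long bytesWritten;

    CountingOutputStream(OutputStream delegate)
    {
        this.delegate = delegate;
    }

    @Override
    public void write(int b) throws IOException
    {
        delegate.write(b);
        bytesWritten++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException
    {
        delegate.write(b, off, len);
        bytesWritten += len;
    }

    // Cheap, approximate stand-in for getOnDiskFilePointer(). With buffering it
    // is an upper bound on the true on-disk position, which is fine for a
    // "should we start a new sstable writer?" threshold check.
    long position()
    {
        return bytesWritten;
    }
}
```

Because buffered bytes have not necessarily reached disk yet, the counter is only approximate, but an exact position is not needed to decide whether the expected sstable size has been exceeded.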



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11692) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test

2016-05-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11692:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
> 
>
> Key: CASSANDRA-11692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11692
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest
>
> example failure:
> {noformat}
> Lists differ: ['/mnt/tmp/dtest-hXZ_VA/test/n... != 
> ['/mnt/tmp/dtest-hXZ_VA/test/n...
> First differing element 16:
> /mnt/tmp/dtest-hXZ_VA/test/node1/data2/keyspace1/standard1-483ee2700d5911e6b19a879d803a6aae/ma-3-big-CRC.db
> /mnt/tmp/dtest-hXZ_VA/test/node1/data2/keyspace1/standard1-483ee2700d5911e6b19a879d803a6aae/ma-5-big-CRC.db
> Diff is 5376 characters long. Set self.maxDiff to None to see it.
> {noformat}
> http://cassci.datastax.com/job/trunk_novnode_dtest/360/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test
> Failed on CassCI build trunk_novnode_dtest #360



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster

2016-05-02 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267306#comment-15267306
 ] 

Andy Tolbert commented on CASSANDRA-11615:
--

java-driver 3.0.1 has been released with this fix! :)

> cassandra-stress blocks when connecting to a big cluster
> 
>
> Key: CASSANDRA-11615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11615
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x
>
> Attachments: 11615-3.0-2nd.patch, 11615-3.0.patch
>
>
> I had a *100* node cluster and was running 
> {code}
> cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' 
> 'limit=1000/s'
> {code}
> Based on the thread dump it looks like it's been blocked at 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96
> {code}
> "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting 
> for monitor entry [0x7f36cc788000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting 
> for monitor entry [0x7f36cc889000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96)
> - waiting to lock <0x0005c003d920> (a 
> java.util.concurrent.ConcurrentHashMap)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
> at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
> at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> {code}
> I was trying the same with a smaller cluster (50 nodes) and it was 
> working fine.
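The contention in the stack traces above is every thread queueing behind one synchronized block guarding a statement cache. For illustration only (invented types, not the actual JavaDriverClient or the java-driver 3.0.1 fix), per-key locking via ConcurrentHashMap.computeIfAbsent avoids the single monitor:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch with invented types (not the actual JavaDriverClient): cache
// prepared statements per query string without one global monitor.
class StatementCache<S>
{
    // Stand-in for session.prepare(cql) in the real client.
    interface Preparer<S>
    {
        S prepare(String cql);
    }

    private final Map<String, S> cache = new ConcurrentHashMap<>();
    private final Preparer<S> preparer;

    StatementCache(Preparer<S> preparer)
    {
        this.preparer = preparer;
    }

    // computeIfAbsent locks only the bin for this key, so threads preparing
    // different statements no longer serialize behind a single lock, and each
    // statement is still prepared exactly once.
    S get(String cql)
    {
        return cache.computeIfAbsent(cql, preparer::prepare);
    }
}
```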



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10805) Additional Compaction Logging

2016-05-02 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-10805.

   Resolution: Fixed
Fix Version/s: 3.6 (was: 3.x)

Thanks, [~krummas]. Committed as 
[e16d8a7|https://git-wip-us.apache.org/repos/asf/cassandra/?p=cassandra.git;a=commit;h=e16d8a7a667d50271a183a95be894126cb2a5414].

> Additional Compaction Logging
> -
>
> Key: CASSANDRA-10805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10805
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Observability
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.6
>
>
> Currently, viewing the results of past compactions requires parsing the log 
> and looking at the compaction history system table, which doesn't have 
> information about, for example, flushed sstables not previously compacted.
> This is a proposal to extend the information captured for compaction. 
> Initially, this would be done through a JMX call, but if it proves to be 
> useful and not much overhead, it might be a feature that could be enabled for 
> the compaction strategy all the time.
> Initial log information would include:
> - The compaction strategy type controlling each column family
> - The set of sstables included in each compaction strategy
> - Information about flushes and compactions, including times and all involved 
> sstables
> - Information about sstables, including generation, size, and tokens
> - Any additional metadata the strategy wishes to add to a compaction or an 
> sstable, like the level of an sstable or the type of compaction being 
> performed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Enhanced Compaction Logging

2016-05-02 Thread carl
Repository: cassandra
Updated Branches:
  refs/heads/trunk 307890363 -> e16d8a7a6


Enhanced Compaction Logging

patch by Carl Yeksigian; reviewed by Marcus Eriksson for CASSANDRA-10805


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e16d8a7a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e16d8a7a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e16d8a7a

Branch: refs/heads/trunk
Commit: e16d8a7a667d50271a183a95be894126cb2a5414
Parents: 3078903
Author: Carl Yeksigian 
Authored: Mon May 2 15:01:39 2016 -0400
Committer: Carl Yeksigian 
Committed: Mon May 2 15:03:38 2016 -0400

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  13 +-
 .../compaction/AbstractCompactionStrategy.java  |  23 +-
 .../db/compaction/CompactionLogger.java | 342 +++
 .../compaction/CompactionStrategyManager.java   |  39 ++-
 .../cassandra/db/compaction/CompactionTask.java |   2 +
 .../DateTieredCompactionStrategy.java   |  31 ++
 .../DateTieredCompactionStrategyOptions.java|   8 +-
 .../compaction/LeveledCompactionStrategy.java   |  27 +-
 .../SizeTieredCompactionStrategy.java   |   1 +
 10 files changed, 475 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e16d8a7a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c802031..1a3069c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Enhanced Compaction Logging (CASSANDRA-10805)
  * Make prepared statement cache size configurable (CASSANDRA-11555)
  * Integrated JMX authentication and authorization (CASSANDRA-10091)
  * Add units to stress ouput (CASSANDRA-11352)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e16d8a7a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b47cf85..6b841c2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1163,12 +1163,13 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 memtable.cfs.replaceFlushed(memtable, sstables);
 reclaim(memtable);
-logger.debug("Flushed to {} ({} sstables, {}), biggest {}, 
smallest {}",
- sstables,
- sstables.size(),
- FBUtilities.prettyPrintMemory(totalBytesOnDisk),
- FBUtilities.prettyPrintMemory(maxBytesOnDisk),
- FBUtilities.prettyPrintMemory(minBytesOnDisk));
+
memtable.cfs.compactionStrategyManager.compactionLogger.flush(sstables);
+logger.debug("Flushed to {} ({} sstables, {}), biggest {}, 
smallest {}",
+ sstables,
+ sstables.size(),
+ FBUtilities.prettyPrintMemory(totalBytesOnDisk),
+ FBUtilities.prettyPrintMemory(maxBytesOnDisk),
+ FBUtilities.prettyPrintMemory(minBytesOnDisk));
 }
 
 private void reclaim(final Memtable memtable)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e16d8a7a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 40f0ce2..668bc51 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -63,11 +63,13 @@ public abstract class AbstractCompactionStrategy
 // minimum interval needed to perform tombstone removal compaction in 
seconds, default 86400 or 1 day.
 protected static final long DEFAULT_TOMBSTONE_COMPACTION_INTERVAL = 86400;
 protected static final boolean 
DEFAULT_UNCHECKED_TOMBSTONE_COMPACTION_OPTION = false;
+protected static final boolean DEFAULT_LOG_ALL_OPTION = false;
 
 protected static final String TOMBSTONE_THRESHOLD_OPTION = 
"tombstone_threshold";
 protected static final String TOMBSTONE_COMPACTION_INTERVAL_OPTION = 
"tombstone_compaction_interval";
 // disable range overlap check when deciding if an SSTable is candidate 
for tombstone compaction (CASSANDRA-6563)
 protected static final String 

[jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API

2016-05-02 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267278#comment-15267278
 ] 

Paulo Motta commented on CASSANDRA-11258:
-

Thanks for the update. The API definition and initial tests are looking good. 
Just a few more things:
- It seems we are still treating timeouts as failures to acquire the lock, but 
after a write timeout we must also check with a {{SERIAL}} read whether the 
lock was in fact acquired, to avoid a resource being leased with no one using 
it.
- Can you add unit test cases that manually remove or modify rows in 
{{resource_lease}} and {{resource_lease_priority}}, and make sure {{isValid}} 
and {{renew}} return false, and that new locks cannot be acquired if the lock 
is held in {{resource_lease}} but not in {{resource_lease_priority}}? These 
would simulate nodes being out of sync or manual tampering with the lease 
tables.

While it would be interesting to add dtests to make sure leases work in a 
distributed environment, we would probably need to expose the {{LeaseFactory}} 
interface over JMX, and we want to keep it strictly internal to avoid external 
misuse. So it's probably better to move on and test this more extensively in 
dtests via scheduled repairs, for instance by triggering simultaneous scheduled 
repair requests and manually modifying the resource lease tables to cover 
tampering, network partitions and out-of-sync nodes. Let's leave that for a 
future task.

After the previous points are addressed we can probably start building repair 
scheduling (CASSANDRA-11259) on top of it.
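As a toy illustration of the timeout point above (invented names; an in-memory stand-in for the LWT-backed lease tables, not Cassandra code): a write timeout does not tell the acquirer whether the INSERT ... IF NOT EXISTS was applied, so ownership must be re-checked before treating the acquire as failed:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy in-memory model of a lease table written with INSERT ... IF NOT EXISTS.
// In the distributed case a write timeout leaves the outcome unknown, so the
// caller must follow up with a SERIAL read -- modeled here by isHeldBy() --
// before concluding the acquire failed; otherwise a lease can end up held by a
// node that believes it lost.
class LeaseTable
{
    private final ConcurrentMap<String, String> leases = new ConcurrentHashMap<>();

    // Models INSERT INTO resource_lease ... IF NOT EXISTS.
    boolean tryAcquire(String resource, String holder)
    {
        return leases.putIfAbsent(resource, holder) == null;
    }

    // Models the follow-up SERIAL read that disambiguates a timed-out acquire.
    boolean isHeldBy(String resource, String holder)
    {
        return holder.equals(leases.get(resource));
    }

    // Models DELETE ... IF holder = ? when the lease is released.
    boolean release(String resource, String holder)
    {
        return leases.remove(resource, holder);
    }
}
```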

> Repair scheduling - Resource locking API
> 
>
> Key: CASSANDRA-11258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a 
> resource in a specified data center. It should handle priorities to avoid 
> node starvation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11514) trunk compaction performance regression

2016-05-02 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267232#comment-15267232
 ] 

Marcus Eriksson commented on CASSANDRA-11514:
-

What are we measuring here? Write performance or compaction performance? The 
graph looks like it is write perf?

> trunk compaction performance regression
> ---
>
> Key: CASSANDRA-11514
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11514
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cstar_perf
>Reporter: Michael Shuler
>  Labels: performance
> Fix For: 3.x
>
> Attachments: trunk-compaction_dtcs-op_rate.png, 
> trunk-compaction_lcs-op_rate.png
>
>
> It appears that a commit between Mar 29-30 has resulted in a drop in 
> compaction performance. I attempted to get a log list of commits to post 
> here, but
> {noformat}
> git log trunk@{2016-03-29}..trunk@{2016-03-31}
> {noformat}
> appears to be incomplete, since reading through {{git log}} I see netty and 
> och were upgraded during this time period.
> !trunk-compaction_dtcs-op_rate.png!
> !trunk-compaction_lcs-op_rate.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer

2016-05-02 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267221#comment-15267221
 ] 

Philip Thompson commented on CASSANDRA-11623:
-

[~krummas], we can test for changes in read/write perf, but at this time I 
don't think we have any tools for measuring performance of compaction. 

> Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
> ---
>
> Key: CASSANDRA-11623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11623
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tom Petracca
>Assignee: Tom Petracca
>Priority: Minor
> Fix For: 3.x
>
> Attachments: compactiontask_profile.png
>
>
> Been doing some performance tuning and profiling of my cassandra cluster and 
> noticed that compaction for tables that I know to have very short rows was 
> going particularly slowly.  Profiling shows a ton of time being spent in 
> BigTableWriter.getOnDiskFilePointer(), and attaching strace to a 
> CompactionTask shows that a majority of the time is being spent in lseek 
> (called by getOnDiskFilePointer), and not in read or write.
> Going deeper, it looks like we call getOnDiskFilePointer for each row 
> (sometimes multiple times per row) in order to see if we've reached our 
> expected sstable size and should start a new writer.  This is pretty 
> unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11629) java.lang.UnsupportedOperationException when selecting rows with counters

2016-05-02 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267161#comment-15267161
 ] 

Alex Petrov commented on CASSANDRA-11629:
-

Thank you!

> java.lang.UnsupportedOperationException when selecting rows with counters
> -
>
> Key: CASSANDRA-11629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11629
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04 LTS
> Cassandra 3.0.5 Community Edition
>Reporter: Arnd Hannemann
>Assignee: Alex Petrov
>  Labels: 3.0.5
> Fix For: 3.6, 3.0.6
>
>
> When selecting a non empty set of rows with counters a exception occurs:
> {code}
> WARN  [SharedPool-Worker-2] 2016-04-21 23:47:47,542 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.5.jar:3.0.5]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.lang.UnsupportedOperationException: null
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:172)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:202)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:169) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:619)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:258)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:246)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:236)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:127)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1792)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
> at 
> 

cassandra git commit: CASSANDRA-10091 Follow up

2016-05-02 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/trunk b57e6bd84 -> 307890363


CASSANDRA-10091 Follow up

Fix a few cosmetic issues that made it through review:

* Remove unused method in AllowAllAuthorizer, added in error
* Revert change to enum sets in Permission.java
* Tidy imports in CassandraDaemon


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30789036
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30789036
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30789036

Branch: refs/heads/trunk
Commit: 30789036330e84f41630a80171fd24835cacaf2d
Parents: b57e6bd
Author: Sam Tunnicliffe 
Authored: Mon May 2 18:46:03 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Mon May 2 18:51:22 2016 +0100

--
 .../org/apache/cassandra/auth/AllowAllAuthorizer.java  |  5 -
 src/java/org/apache/cassandra/auth/Permission.java |  4 +---
 .../org/apache/cassandra/service/CassandraDaemon.java  | 13 +
 3 files changed, 2 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/30789036/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
--
diff --git a/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java 
b/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
index d054aba..3b40979 100644
--- a/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/AllowAllAuthorizer.java
@@ -68,9 +68,4 @@ public class AllowAllAuthorizer implements IAuthorizer
 public void setup()
 {
 }
-
-public Set authorizeJMX(AuthenticatedUser parUser, IResource 
parResource)
-{
-return Permission.ALL;
-}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/30789036/src/java/org/apache/cassandra/auth/Permission.java
--
diff --git a/src/java/org/apache/cassandra/auth/Permission.java 
b/src/java/org/apache/cassandra/auth/Permission.java
index 0231270..d552280 100644
--- a/src/java/org/apache/cassandra/auth/Permission.java
+++ b/src/java/org/apache/cassandra/auth/Permission.java
@@ -63,9 +63,7 @@ public enum Permission
 // UDF permissions
 EXECUTE;  // required to invoke any user defined function or aggregate
 
-public static final Set ALL_DATA =
-ImmutableSet.copyOf(EnumSet.range(Permission.CREATE, 
Permission.EXECUTE));
 public static final Set ALL =
-ImmutableSet.copyOf(EnumSet.range(Permission.CREATE, 
Permission.EXECUTE));
+Sets.immutableEnumSet(EnumSet.range(Permission.CREATE, 
Permission.EXECUTE));
 public static final Set NONE = ImmutableSet.of();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/30789036/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index c8f8f34..2b797fe 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -24,27 +24,17 @@ import java.lang.management.MemoryPoolMXBean;
 import java.net.InetAddress;
 import java.net.URL;
 import java.net.UnknownHostException;
-import java.rmi.RemoteException;
-import java.rmi.registry.LocateRegistry;
-import java.rmi.server.RMIClientSocketFactory;
-import java.rmi.server.RMIServerSocketFactory;
-import java.util.*;
+import java.util.List;
 import java.util.concurrent.TimeUnit;
-import java.util.stream.Collectors;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 import javax.management.remote.JMXConnectorServer;
-import javax.management.remote.JMXServiceURL;
-import javax.management.remote.rmi.RMIConnectorServer;
-import javax.rmi.ssl.SslRMIClientSocketFactory;
-import javax.rmi.ssl.SslRMIServerSocketFactory;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.util.concurrent.Futures;
 import com.google.common.util.concurrent.ListenableFuture;
 import com.google.common.util.concurrent.Uninterruptibles;
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -52,7 +42,6 @@ import com.addthis.metrics3.reporter.config.ReporterConfig;
 import com.codahale.metrics.Meter;
 import com.codahale.metrics.MetricRegistryListener;
 import com.codahale.metrics.SharedMetricRegistries;
-import org.apache.cassandra.auth.jmx.AuthorizationProxy;
 import org.apache.cassandra.batchlog.LegacyBatchlogMigrator;
 import 

[jira] [Created] (CASSANDRA-11696) Incremental repairs can mark too many ranges as repaired

2016-05-02 Thread Joel Knighton (JIRA)
Joel Knighton created CASSANDRA-11696:
-

 Summary: Incremental repairs can mark too many ranges as repaired
 Key: CASSANDRA-11696
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11696
 Project: Cassandra
  Issue Type: Bug
Reporter: Joel Knighton
Assignee: Paulo Motta


Incremental repairs are tracked using a parent session; a subordinate repair 
session is created for each range in the repair. When a node participating in 
the repair receives a validation request, it references the sstables in the 
parent repair session. When all subordinate sessions conclude, each node 
anticompacts SSTables based on the parent repair session for the whole range of 
the repair, but these referenced SSTables may have been present only for the 
validation of some subset of the ranges, because the SSTables were created 
concurrently with the parent repair session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11673) (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup

2016-05-02 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266869#comment-15266869
 ] 

Philip Thompson commented on CASSANDRA-11673:
-

+1. Do you want me to merge to dtest?

> (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup
> 
>
> Key: CASSANDRA-11673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11673
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Marcus Eriksson
>  Labels: dtest
>
> This test was originally waiting on CASSANDRA-11179, from which I recently 
> removed the 'require' annotation (since 11179 is committed). Not sure why it 
> is failing on 2.1 now; perhaps the fix didn't get committed there.
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/339/testReport/bootstrap_test/TestBootstrap/test_cleanup
> Failed on CassCI build cassandra-2.1_offheap_dtest #339



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266778#comment-15266778
 ] 

Robert Stupp commented on CASSANDRA-11537:
--

Hm. Not really. The process in general is fine. [~iamaleksey] wrote another, 
nicer [contributor's 
doc|http://www.mail-archive.com/dev@cassandra.apache.org/msg08672.html]. The 
[Tick Tock 
doc|http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/] is 
probably the best source for the general release schedule. It actually says 
that 2.1 reaches EOL when 3.0 is released, 2.2 is in maintenance mode (critical 
fixes only) and 3.0 only gets bug fixes.

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I need a clearer 
> message when I issue a nodetool command that the server is not ready for, so 
> that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion error. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
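A minimal sketch of the kind of guard the ticket asks for — an explicit exception instead of a bare assertion. This is a hypothetical illustration, not the actual patch; the `StartupGuard` class and method names are invented for the example:

```java
// Hypothetical sketch (not the CASSANDRA-11537 patch): replace the bare
// `assert` that fails during startup with an explicit check that produces
// a clear, actionable error message for nodetool users.
public final class StartupGuard
{
    private static volatile boolean setupCompleted = false;

    // The daemon would call this once initialization has finished.
    public static void markSetupCompleted()
    {
        setupCompleted = true;
    }

    // Called at the top of JMX-exposed operations such as upgradesstables.
    public static void checkReady()
    {
        if (!setupCompleted)
            throw new IllegalStateException(
                "Cannot run this command: the server is still starting up; " +
                "retry once initialization has completed");
    }
}
```

With a guard like this, `nodetool upgradesstables` issued too early would report a descriptive error instead of `error: null` with an `AssertionError` stack trace, and the behavior no longer depends on whether assertions are enabled.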



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266767#comment-15266767
 ] 

Benjamin Lerer commented on CASSANDRA-11679:


Sorry, my mistake. I did not read your test properly. I was expecting it to 
fail.

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior: the total number of distinct 
> rows is 498, but a query for all distinct keys returns 503 instead of 498 
> (five keys appear twice).
> If I set the fetch size on the select statement to more than 498, it returns 
> exactly 498 rows.
> Executing the same statement in DevCenter returns 498 rows (because the 
> default fetch size is 5000); in `cqlsh` it returns 503 rows (because cqlsh 
> uses fetch size=100).
> Some additional and useful information:
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-05-02 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266761#comment-15266761
 ] 

Edward Capriolo commented on CASSANDRA-11537:
-

[~snazy] The document http://wiki.apache.org/cassandra/HowToContribute suggests:
You'll want to check out the branch corresponding to the lowest version in 
which you want to introduce the change. 

Does this documentation require an update?



> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion error. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11555:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.6
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).
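The fixed default described above can be sketched as follows. This is an illustration of the "auto" sizing formula (1/256th of the heap, with the 10 MB floor the committed cassandra.yaml comment describes); the class and method names are invented for the example:

```java
// Sketch of the default ("auto") prepared-statement cache sizing:
// 1/256th of the max heap, but at least 10 MB. Names are illustrative.
public final class PreparedCacheSizing
{
    static long defaultCacheSizeMb()
    {
        // Runtime.maxMemory() reports the JVM's maximum heap in bytes.
        long heapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        return Math.max(10, heapMb / 256);
    }
}
```

For example, an 8 GB heap would yield a 32 MB cache, while a 1 GB heap would hit the 10 MB floor.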



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11555:
-
Fix Version/s: (was: 3.x)
   3.6
   Status: Patch Available  (was: Open)

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.6
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266759#comment-15266759
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 3:09 PM:
--

this junit test is failing in your local, right?

No, I didn't change anything, and this JUnit test case is not failing in my 
Cassandra-2.1.13.

 


was (Author: varuna):
this junit test is failing in your local, right ?

No I didn't change anything and this junit test case is not failing in 
Cassandra-2.1.13.

 

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior: the total number of distinct 
> rows is 498, but a query for all distinct keys returns 503 instead of 498 
> (five keys appear twice).
> If I set the fetch size on the select statement to more than 498, it returns 
> exactly 498 rows.
> Executing the same statement in DevCenter returns 498 rows (because the 
> default fetch size is 5000); in `cqlsh` it returns 503 rows (because cqlsh 
> uses fetch size=100).
> Some additional and useful information:
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266759#comment-15266759
 ] 

Varun Barala commented on CASSANDRA-11679:
--

this junit test is failing in your local, right?

No, I didn't change anything, and this JUnit test case is not failing in 
Cassandra-2.1.13.

 

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior: the total number of distinct 
> rows is 498, but a query for all distinct keys returns 503 instead of 498 
> (five keys appear twice).
> If I set the fetch size on the select statement to more than 498, it returns 
> exactly 498 rows.
> Executing the same statement in DevCenter returns 498 rows (because the 
> default fetch size is 5000); in `cqlsh` it returns 503 rows (because cqlsh 
> uses fetch size=100).
> Some additional and useful information:
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: fix license for OHC (rename from 0.4.2 to 0.4.3)

2016-05-02 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 b178d899d -> a1cc804f5
  refs/heads/trunk 9581209b3 -> b57e6bd84


fix license for OHC (rename from 0.4.2 to 0.4.3)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a1cc804f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a1cc804f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a1cc804f

Branch: refs/heads/cassandra-3.0
Commit: a1cc804f5d4298f497eb2d6c00d49c188a768a64
Parents: b178d89
Author: Robert Stupp 
Authored: Mon May 2 17:08:01 2016 +0200
Committer: Robert Stupp 
Committed: Mon May 2 17:09:26 2016 +0200

--
 lib/licenses/ohc-0.4.2.txt | 201 
 lib/licenses/ohc-0.4.3.txt | 201 
 2 files changed, 201 insertions(+), 201 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a1cc804f/lib/licenses/ohc-0.4.2.txt
--
diff --git a/lib/licenses/ohc-0.4.2.txt b/lib/licenses/ohc-0.4.2.txt
deleted file mode 100644
index eb6b5d3..000
--- a/lib/licenses/ohc-0.4.2.txt
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
-   Version 2.0, January 2004
-http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-  "License" shall mean the terms and conditions for use, reproduction,
-  and distribution as defined by Sections 1 through 9 of this document.
-
-  "Licensor" shall mean the copyright owner or entity authorized by
-  the copyright owner that is granting the License.
-
-  "Legal Entity" shall mean the union of the acting entity and all
-  other entities that control, are controlled by, or are under common
-  control with that entity. For the purposes of this definition,
-  "control" means (i) the power, direct or indirect, to cause the
-  direction or management of such entity, whether by contract or
-  otherwise, or (ii) ownership of fifty percent (50%) or more of the
-  outstanding shares, or (iii) beneficial ownership of such entity.
-
-  "You" (or "Your") shall mean an individual or Legal Entity
-  exercising permissions granted by this License.
-
-  "Source" form shall mean the preferred form for making modifications,
-  including but not limited to software source code, documentation
-  source, and configuration files.
-
-  "Object" form shall mean any form resulting from mechanical
-  transformation or translation of a Source form, including but
-  not limited to compiled object code, generated documentation,
-  and conversions to other media types.
-
-  "Work" shall mean the work of authorship, whether in Source or
-  Object form, made available under the License, as indicated by a
-  copyright notice that is included in or attached to the work
-  (an example is provided in the Appendix below).
-
-  "Derivative Works" shall mean any work, whether in Source or Object
-  form, that is based on (or derived from) the Work and for which the
-  editorial revisions, annotations, elaborations, or other modifications
-  represent, as a whole, an original work of authorship. For the purposes
-  of this License, Derivative Works shall not include works that remain
-  separable from, or merely link (or bind by name) to the interfaces of,
-  the Work and Derivative Works thereof.
-
-  "Contribution" shall mean any work of authorship, including
-  the original version of the Work and any modifications or additions
-  to that Work or Derivative Works thereof, that is intentionally
-  submitted to Licensor for inclusion in the Work by the copyright owner
-  or by an individual or Legal Entity authorized to submit on behalf of
-  the copyright owner. For the purposes of this definition, "submitted"
-  means any form of electronic, verbal, or written communication sent
-  to the Licensor or its representatives, including but not limited to
-  communication on electronic mailing lists, source code control systems,
-  and issue tracking systems that are managed by, or on behalf of, the
-  Licensor for the purpose of discussing and improving the Work, but
-  excluding communication that is conspicuously marked or otherwise
-  designated in writing by the copyright owner as "Not a Contribution."
-
-  "Contributor" shall mean Licensor and any individual or Legal Entity
-  on behalf of whom a Contribution has been received by Licensor and
-  subsequently incorporated within the Work.
-
-   2. Grant of 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-05-02 Thread snazy
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b57e6bd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b57e6bd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b57e6bd8

Branch: refs/heads/trunk
Commit: b57e6bd846d202c466ca21ec68efc65bf838106a
Parents: 9581209 a1cc804
Author: Robert Stupp 
Authored: Mon May 2 17:09:37 2016 +0200
Committer: Robert Stupp 
Committed: Mon May 2 17:09:37 2016 +0200

--
 lib/licenses/ohc-0.4.2.txt | 201 
 lib/licenses/ohc-0.4.3.txt | 201 
 2 files changed, 201 insertions(+), 201 deletions(-)
--




[2/3] cassandra git commit: fix license for OHC (rename from 0.4.2 to 0.4.3)

2016-05-02 Thread snazy
fix license for OHC (rename from 0.4.2 to 0.4.3)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a1cc804f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a1cc804f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a1cc804f

Branch: refs/heads/trunk
Commit: a1cc804f5d4298f497eb2d6c00d49c188a768a64
Parents: b178d89
Author: Robert Stupp 
Authored: Mon May 2 17:08:01 2016 +0200
Committer: Robert Stupp 
Committed: Mon May 2 17:09:26 2016 +0200

--
 lib/licenses/ohc-0.4.2.txt | 201 
 lib/licenses/ohc-0.4.3.txt | 201 
 2 files changed, 201 insertions(+), 201 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a1cc804f/lib/licenses/ohc-0.4.2.txt
--
diff --git a/lib/licenses/ohc-0.4.2.txt b/lib/licenses/ohc-0.4.2.txt
deleted file mode 100644
index eb6b5d3..000
--- a/lib/licenses/ohc-0.4.2.txt
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
-   Version 2.0, January 2004
-http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-  "License" shall mean the terms and conditions for use, reproduction,
-  and distribution as defined by Sections 1 through 9 of this document.
-
-  "Licensor" shall mean the copyright owner or entity authorized by
-  the copyright owner that is granting the License.
-
-  "Legal Entity" shall mean the union of the acting entity and all
-  other entities that control, are controlled by, or are under common
-  control with that entity. For the purposes of this definition,
-  "control" means (i) the power, direct or indirect, to cause the
-  direction or management of such entity, whether by contract or
-  otherwise, or (ii) ownership of fifty percent (50%) or more of the
-  outstanding shares, or (iii) beneficial ownership of such entity.
-
-  "You" (or "Your") shall mean an individual or Legal Entity
-  exercising permissions granted by this License.
-
-  "Source" form shall mean the preferred form for making modifications,
-  including but not limited to software source code, documentation
-  source, and configuration files.
-
-  "Object" form shall mean any form resulting from mechanical
-  transformation or translation of a Source form, including but
-  not limited to compiled object code, generated documentation,
-  and conversions to other media types.
-
-  "Work" shall mean the work of authorship, whether in Source or
-  Object form, made available under the License, as indicated by a
-  copyright notice that is included in or attached to the work
-  (an example is provided in the Appendix below).
-
-  "Derivative Works" shall mean any work, whether in Source or Object
-  form, that is based on (or derived from) the Work and for which the
-  editorial revisions, annotations, elaborations, or other modifications
-  represent, as a whole, an original work of authorship. For the purposes
-  of this License, Derivative Works shall not include works that remain
-  separable from, or merely link (or bind by name) to the interfaces of,
-  the Work and Derivative Works thereof.
-
-  "Contribution" shall mean any work of authorship, including
-  the original version of the Work and any modifications or additions
-  to that Work or Derivative Works thereof, that is intentionally
-  submitted to Licensor for inclusion in the Work by the copyright owner
-  or by an individual or Legal Entity authorized to submit on behalf of
-  the copyright owner. For the purposes of this definition, "submitted"
-  means any form of electronic, verbal, or written communication sent
-  to the Licensor or its representatives, including but not limited to
-  communication on electronic mailing lists, source code control systems,
-  and issue tracking systems that are managed by, or on behalf of, the
-  Licensor for the purpose of discussing and improving the Work, but
-  excluding communication that is conspicuously marked or otherwise
-  designated in writing by the copyright owner as "Not a Contribution."
-
-  "Contributor" shall mean Licensor and any individual or Legal Entity
-  on behalf of whom a Contribution has been received by Licensor and
-  subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-  this License, each Contributor hereby grants to You a perpetual,
-  

[jira] [Reopened] (CASSANDRA-11567) Update netty version

2016-05-02 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-11567:
-

lib/licenses/netty-all-4.0.34.Final.txt needs to be updated.

> Update netty version
> 
>
> Key: CASSANDRA-11567
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11567
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 3.6
>
>
> Mainly for prereq to CASSANDRA-11421. 
> Netty 4.0.34 -> 4.0.36.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266751#comment-15266751
 ] 

Robert Stupp commented on CASSANDRA-11555:
--

Thanks!
Committed as 9581209b35922a0758c6bd158d4336a17cfe86aa to trunk

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Make prepared statement cache size configurable

2016-05-02 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3513fbcfb -> 9581209b3


Make prepared statement cache size configurable

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-11555


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9581209b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9581209b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9581209b

Branch: refs/heads/trunk
Commit: 9581209b35922a0758c6bd158d4336a17cfe86aa
Parents: 3513fbc
Author: Robert Stupp 
Authored: Mon May 2 17:03:56 2016 +0200
Committer: Robert Stupp 
Committed: Mon May 2 17:03:56 2016 +0200

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml | 28 
 .../org/apache/cassandra/config/Config.java | 11 +++
 .../cassandra/config/DatabaseDescriptor.java| 45 
 .../apache/cassandra/cql3/QueryProcessor.java   | 72 +++-
 5 files changed, 125 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9581209b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c49249c..c802031 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Make prepared statement cache size configurable (CASSANDRA-11555)
  * Integrated JMX authentication and authorization (CASSANDRA-10091)
  * Add units to stress ouput (CASSANDRA-11352)
  * Fix PER PARTITION LIMIT for single and multi partitions queries 
(CASSANDRA-11603)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9581209b/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 48bad2c..9eb55e1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -215,6 +215,34 @@ disk_failure_policy: stop
 # ignore: ignore fatal errors and let the batches fail
 commit_failure_policy: stop
 
+# Maximum size of the native protocol prepared statement cache
+#
+# Valid values are either "auto" (omitting the value) or a value greater than 0.
+#
+# Note that specifying too large a value will result in long-running GCs and 
possibly
+# out-of-memory errors. Keep the value at a small fraction of the heap.
+#
+# If you constantly see "prepared statements discarded in the last minute 
because
+# cache limit reached" messages, the first step is to investigate the root 
cause
+# of these messages and check whether prepared statements are used correctly -
+# i.e. use bind markers for variable parts.
+#
+# Only change the default value if you really have more prepared 
statements than
+# fit in the cache. In most cases it is not necessary to change this value.
+# Constantly re-preparing statements is a performance penalty.
+#
+# Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
+prepared_statements_cache_size_mb:
+
+# Maximum size of the Thrift prepared statement cache
+#
+# If you do not use Thrift at all, it is safe to leave this value at "auto".
+#
+# See description of 'prepared_statements_cache_size_mb' above for more 
information.
+#
+# Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
+thrift_prepared_statements_cache_size_mb:
+
 # Maximum size of the key cache in memory.
 #
 # Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9581209b/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 809966d..02635bf 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -288,6 +288,17 @@ public class Config
 
 public int windows_timer_interval = 0;
 
+/**
+ * Size of the CQL prepared statements cache in MB.
+ * Defaults to 1/256th of the heap size or 10MB, whichever is greater.
+ */
+public Long prepared_statements_cache_size_mb = null;
+/**
+ * Size of the Thrift prepared statements cache in MB.
+ * Defaults to 1/256th of the heap size or 10MB, whichever is greater.
+ */
+public Long thrift_prepared_statements_cache_size_mb = null;
+
 public boolean enable_user_defined_functions = false;
 public boolean enable_scripted_user_defined_functions = false;
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9581209b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git 

[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266748#comment-15266748
 ] 

Benjamin Lerer commented on CASSANDRA-11679:


I tried your unit test with Cassandra 2.1.HEAD and Cassandra 2.1.13 with the 
Java driver 2.1.7.1, and it ran fine. I get the right number of rows.
Did you change any configuration setting?

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior: the total number of distinct 
> rows is 498, but a query for all distinct keys returns 503 instead of 498 
> (five keys appear twice).
> If I set the fetch size on the select statement to more than 498, it returns 
> exactly 498 rows.
> Executing the same statement in DevCenter returns 498 rows (because the 
> default fetch size is 5000); in `cqlsh` it returns 503 rows (because cqlsh 
> uses fetch size=100).
> Some additional and useful information:
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-5863) In process (uncompressed) page cache

2016-05-02 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-5863:


caffeine 2.2.6 needs to be added as a dependency in the build.xml.

> In process (uncompressed) page cache
> 
>
> Key: CASSANDRA-5863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: T Jake Luciani
>Assignee: Branimir Lambov
>  Labels: performance
> Fix For: 3.6
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a 
> byte[], sends it to ICompressor, gets back another byte[] and verifies a 
> checksum.  
> This process is where the majority of time is spent in a read request.  
> Before compression, we would have zero-copy of data and could respond 
> directly from the page-cache.
> It would be useful to have some kind of Chunk cache that could speed up this 
> process for hot data, possibly off heap.
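The chunk cache described above can be pictured as a small LRU map from (file, chunk offset) to the decompressed bytes, so hot chunks skip the decompress-and-checksum path on re-reads. This is an illustrative stdlib-only sketch; the actual CASSANDRA-5863 implementation uses Caffeine and may hold chunks off heap:

```java
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ChunkCache {
    // Cache key: which file, and which chunk within it.
    record Key(String path, long offset) {}

    private final int capacity;
    private final Map<Key, ByteBuffer> chunks;

    ChunkCache(int capacity) {
        this.capacity = capacity;
        // accessOrder=true + removeEldestEntry turns LinkedHashMap into a
        // simple LRU cache.
        this.chunks = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Key, ByteBuffer> e) {
                return size() > ChunkCache.this.capacity;
            }
        };
    }

    // On a miss, run the expensive read+decompress+checksum path (the
    // supplier) once and keep the result for subsequent reads.
    ByteBuffer get(String path, long offset, Supplier<ByteBuffer> load) {
        Key k = new Key(path, offset);
        ByteBuffer b = chunks.get(k);
        if (b == null) {
            b = load.get();
            chunks.put(k, b);
        }
        return b;
    }

    int size() { return chunks.size(); }

    public static void main(String[] args) {
        ChunkCache cache = new ChunkCache(2);
        cache.get("a-Data.db", 0, () -> ByteBuffer.allocate(4));
        cache.get("a-Data.db", 65536, () -> ByteBuffer.allocate(4));
        cache.get("a-Data.db", 131072, () -> ByteBuffer.allocate(4)); // evicts eldest
        System.out.println(cache.size()); // 2
    }
}
```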





[jira] [Commented] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-05-02 Thread Ravishankar Rajendran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266727#comment-15266727
 ] 

Ravishankar Rajendran commented on CASSANDRA-11602:
---

Really? Is sarcasm the only response you had? Can't customers find and report 
design faults, or is that only allowed for code contributors? Anyway, now that I 
know there are not going to be any rational arguments in this conversation, I am 
choosing to quit. Thanks.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.





[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-05-02 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266676#comment-15266676
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


The issue we're discussing here is less about the policy for handling CDC failures, 
and more about that policy impacting both CDC and non-CDC writes unless we 
distinguish, at write time, whether a Mutation contains a CDC-enabled CF.

If we treat all Mutations equally, we would apply that policy to both CDC and 
non-CDC enabled writes, so CDC space being filled / backpressure would reject 
all writes on the node.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred to a subsequent release, to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> 
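A hedged sketch of the consumption model proposed above: predictably named logfiles give a total processing order, a checkpoint artifact in the CDC directory records progress, and cleanup of consumed segments is the daemon's job. The file-naming scheme and checkpoint format here are invented for illustration only:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class CdcConsumer {
    public static void main(String[] args) throws IOException {
        // Stand-in for the per-table CDC log directory, with two segments.
        Path dir = Files.createTempDirectory("cdc");
        Files.writeString(dir.resolve("cdc-segment-001.log"), "mutation-a\n");
        Files.writeString(dir.resolve("cdc-segment-002.log"), "mutation-b\n");

        // Checkpoint artifact: name of the last fully processed segment.
        Path checkpoint = dir.resolve("consumer.checkpoint");
        String last = Files.exists(checkpoint) ? Files.readString(checkpoint) : "";

        try (Stream<Path> files = Files.list(dir)) {
            List<Path> segments = files
                .filter(p -> p.getFileName().toString().startsWith("cdc-segment-"))
                .sorted()  // name order == written order, so catch-up is ordered
                .filter(p -> p.getFileName().toString().compareTo(last) > 0)
                .toList();
            for (Path seg : segments) {
                System.out.print(Files.readString(seg)); // "deliver" the mutations
                Files.writeString(checkpoint, seg.getFileName().toString());
                Files.delete(seg);                       // cleanup is the daemon's job
            }
        }
    }
}
```

Restarting the daemon resumes from the checkpoint, giving at-least-once delivery.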

[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior:

The total distinct rows are 498, so if I perform a query to get all distinct keys 
it returns 503 instead of 498 (five keys appear twice).
But if I set the fetch size on the select statement to more than 498, it 
returns exactly 498 rows. 

And if I execute the same statement in Dev-center it returns 498 rows (because the 
default fetch size is 5000). In `cqlsh` it returns 503 rows (because cqlsh uses 
fetch size=100).

Some Additional and useful information :- 
---
Cassandra-2.1.13  (C)* version
Consistency level: ONE 
local machine(ubuntu 14.04)

Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform a query get All distinctKeys It 
return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Some Additional and useful information :- 
---
Cassandra-2.1.13  (C)* version
Consistency level: ONE 
local machine(ubuntu 14.04)

Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> The total distinct rows are 498, so if I perform a query to get all distinct 
> keys it returns 503 instead of 498 (five keys appear twice).
> But if I set the fetch size on the select statement to more than 498, it 
> returns exactly 498 rows. 
> And if I execute the same statement in Dev-center it returns 498 rows (because 
> the default fetch size is 5000). In `cqlsh` it returns 503 rows (because 
> cqlsh uses fetch size=100).
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 

[jira] [Updated] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-05-02 Thread Fabien Rousseau (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabien Rousseau updated CASSANDRA-11349:

Attachment: 11349-2.1-v3.patch

> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Stefan Podkowinski
>  Labels: repair
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11349-2.1-v2.patch, 11349-2.1-v3.patch, 11349-2.1.patch
>
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exists for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps allowing to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, accumulating many small SSTables 
> (up to thousands for a rather short period of time when using VNodes, the 
> time for compaction to absorb those small files), but also an increased size 
> on disk.
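The fix direction implied by the report is to normalize range tombstones before they enter the Merkle tree, so that a node that has already compacted two overlapping tombstones and a node that has not both digest the same logical state. A simplified illustration of the interval-merging idea (the RT record and its fields are hypothetical, and real shadowing semantics are more subtle than a max over deletion times):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RangeTombstoneMerge {
    // Illustrative range tombstone: [start, end] deleted at markedAt.
    record RT(int start, int end, long markedAt) {}

    // Merge overlapping/duplicate ranges, keeping the newest deletion time,
    // mimicking what compaction would eventually produce.
    static List<RT> normalize(List<RT> input) {
        List<RT> sorted = new ArrayList<>(input);
        sorted.sort(Comparator.comparingInt(RT::start));
        List<RT> out = new ArrayList<>();
        for (RT rt : sorted) {
            if (!out.isEmpty() && out.get(out.size() - 1).end() >= rt.start()) {
                RT prev = out.remove(out.size() - 1);
                out.add(new RT(prev.start(),
                               Math.max(prev.end(), rt.end()),
                               Math.max(prev.markedAt(), rt.markedAt())));
            } else {
                out.add(rt);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Two tombstones for the same interval collapse to one, as they would
        // after compaction, so both nodes would hash identically.
        List<RT> merged = normalize(List.of(new RT(0, 10, 100L), new RT(0, 10, 200L)));
        System.out.println(merged.size() + " " + merged.get(0).markedAt()); // 1 200
    }
}
```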





[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266687#comment-15266687
 ] 

Benjamin Lerer commented on CASSANDRA-11555:


+1.
Sorry, I did not notice the private modifier. In my opinion we should expose 
them as public at some point.

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).
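The sizing formula quoted above can be checked directly; this snippet just evaluates `Runtime.getRuntime().maxMemory() / 256` for the current JVM, which is the hard-coded default the ticket proposes making configurable:

```java
public class PreparedCacheSize {
    public static void main(String[] args) {
        // Default sizing per the description: 1/256th of the JVM max heap.
        long maxMemory = Runtime.getRuntime().maxMemory();
        long cacheBytes = maxMemory / 256;
        System.out.printf("heap=%dMB cache=%dKB%n",
                          maxMemory >> 20, cacheBytes >> 10);
    }
}
```

For example, an 8 GB heap yields a 32 MB prepared-statement cache under this formula.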





[jira] [Commented] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-05-02 Thread Fabien Rousseau (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266682#comment-15266682
 ] 

Fabien Rousseau commented on CASSANDRA-11349:
-

Great.

I just created a new patch (11349-2.1-v3.patch) where the 'update' method is 
empty in the validation tracker (in fact, it was a left-over from previous 
attempts and should have been empty).

The main difference between, for example, the "update" method from the regular 
compaction and the addRangeTombstone from the validation compaction is the 
returned value. In the latter case, it is whether the RT is superseded/shadowed 
by a previously encountered tombstone.
To be honest, I did not manage to factor the two together without compromising 
readability, even though they share some similarities.

I'm a bit skeptical about ValidationCompactionTracker extending 
RegularCompactionTracker, because RegularCompactionTracker has more fields 
(unwrittenTombstones, atomCount) which would not be used by 
ValidationCompactionTracker (and it feels odd to have unused fields).
Doing it the other way, i.e. RegularCompactionTracker extending 
ValidationCompactionTracker, seemed a better fit (RegularCompactionTracker reuses 
the comparator and openedTombstones and adds more fields), but there is not much 
to gain: only the isDeleted method is shared.
Thus the interface did not seem a bad choice: the implementations are less 
coupled (and could diverge further in the future if needed).
But this can be changed if needed (I just wanted to explain the design choices; 
I am not opposed to inheritance).

I agree that this approach is a "leaky abstraction". Nevertheless, the main 
idea is to have a patch making minimal architectural changes to the current code 
base (I did not want to refactor anything) to avoid introducing bugs. Moreover, 
because the 3.X and 3.0.X branches are not affected, this will stay in the 2.1.X 
and 2.2.X branches (and won't become technical debt).
Anyway, it is more a pragmatic solution than an elegant one (and, evidently, I am 
open to a more elegant one).

> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Stefan Podkowinski
>  Labels: repair
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11349-2.1-v2.patch, 11349-2.1.patch
>
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exists for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps allowing to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, accumulating many small SSTables 
> (up to thousands for a rather short period of time when using VNodes, the 
> time for compaction to absorb those small files), but also an increased size 
> on disk.





[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:28 PM:
--

I'll check this one, but meanwhile I'm going to share a test case with you which 
will reproduce the problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  pk1 text," +
"  pk2 text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((pk1,pk2))" +
" );");


String insertQuery = "insert into junit.magic (" +
"pk1," +
"pk2," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'pk1',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct pk1,pk2 from junit.magic;";
Statement stsatementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize= new 
SimpleStatement(query);
stsatementWithModifiedFetchSize.setFetchSize(100);

// result set for default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size as 100 <=498
ResultSet resultSetForModifiedDetchSize = 
session.execute(stsatementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedDetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}


was (Author: varuna):
I'll check this one But Meanwhile I'm gonna share one test case with you which 
will reproduce the problem :-

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  pk1 text," +
"  pk2 text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((pk1,pk2))" +
" );");


String insertQuery = 

[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:23 PM:
--

I'll check this one, but meanwhile I'm going to share a test case with you which 
will reproduce the problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  pk1 text," +
"  pk2 text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((pk1,pk2))" +
" );");


String insertQuery = "insert into junit.magic (" +
"pk1," +
"pk2," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct pk1,pk2 from junit.magic;";
Statement stsatementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize= new 
SimpleStatement(query);
stsatementWithModifiedFetchSize.setFetchSize(100);

// result set for default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size as 100 <=498
ResultSet resultSetForModifiedDetchSize = 
session.execute(stsatementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedDetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}


was (Author: varuna):
I'll check this one But Meanwhile I'm gonna share one test case with you which 
will reproduce the problem :-

{code:xml}
com.datastax.driver.core.LererCheckTest

{code}

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform a query get All distinctKeys 
> It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
>  

[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-05-02 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266674#comment-15266674
 ] 

Joel Knighton commented on CASSANDRA-11537:
---

If rebasing on trunk, it's worth noting that the semantics of 
{{StorageService.isInitialized}} were cleaned up in 
[CASSANDRA-10134|https://issues.apache.org/jira/browse/CASSANDRA-10134?focusedCommentId=15253719=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15253719]
 such that it actually matches the description of the {{startupDone}} flag 
Sylvain suggested above.

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/updatess etc., you get an unfriendly assertion. An exception 
> would be easier to understand. Also, if a user has turned assertions off, it 
> is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
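The proposed behaviour amounts to replacing the bare assert with an explicit initialization check at the entry point of each affected command, so the operator gets a descriptive error instead of `java.lang.AssertionError: null`. A minimal sketch (names like `ensureInitialized` are illustrative, not the patch's actual API):

```java
public class StartupGuard {
    private static volatile boolean initialized = false;

    // Throw a descriptive exception instead of tripping an assert, so the
    // behaviour is the same even when assertions are disabled.
    static void ensureInitialized() {
        if (!initialized)
            throw new IllegalStateException(
                "Cannot run this command: the node has not finished starting up");
    }

    static void upgradeSSTables() {
        ensureInitialized();
        // ... proceed with the real work once startup has completed
    }

    public static void main(String[] args) {
        try {
            upgradeSSTables();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        initialized = true;
        upgradeSSTables(); // no exception once startup is done
        System.out.println("ok");
    }
}
```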





[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:22 PM:
--

I'll check this one, but meanwhile I'm going to share a test case with you which 
will reproduce the problem:

{code:xml}
com.datastax.driver.core.LererCheckTest

{code}


was (Author: varuna):
I'll check this one But Meanwhile I'm gonna share one test case with you which 
will reproduce the problem :-

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
" );");


String insertQuery = "insert into junit.magic (" +
"tenant_id," +
"str_key," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct tenant_id,str_key from 
junit.magic;";
Statement statementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize = new 
SimpleStatement(query);
statementWithModifiedFetchSize.setFetchSize(100);

// result set for the default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size 100 (<= 498)
ResultSet resultSetForModifiedFetchSize = 
session.execute(statementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedFetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> There are 498 distinct rows, so a query for all distinct keys returns 503 
> instead of 498 (five keys appear twice).
> But if I set the fetch size on the select statement to more than 498, it 
> returns exactly 498 rows. 
> Executing the same statement in DevCenter also returns 498 rows.
> Some additional useful information: 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 

[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266672#comment-15266672
 ] 

Robert Stupp commented on CASSANDRA-11555:
--

Tests look good.
I've addressed the first issue. {{FileUtils.MB}} is a {{private static double}}, 
so I left it as it is.
Would you be fine with [this 
log|https://github.com/apache/cassandra/compare/trunk...snazy:11555-pstmt-cache-config-trunk#diff-9c19942eca6c858baad84e942b3c7e21R133]
 message, too?

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. The proposal is to make that 
> value configurable, and probably also to distinguish between thrift and native 
> CQL3 queries (new applications don't need the thrift stuff).
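The default sizing formula above can be sketched as follows. This is an
illustrative helper only; the class and method names are assumptions, not the
actual {{QueryProcessor}} code.

```java
// Hypothetical helper sketching a configurable prepared-statement cache size:
// an explicit setting wins, otherwise fall back to maxMemory()/256.
public class PreparedStatementCacheSize {
    /** Returns the cache capacity in MB; configuredMb <= 0 means "use the default formula". */
    public static long capacityMb(long configuredMb, long maxMemoryBytes) {
        if (configuredMb > 0)
            return configuredMb;                       // operator-configured value
        return maxMemoryBytes / 256 / (1024 * 1024);   // default: heap / 256, in MB
    }

    public static void main(String[] args) {
        // With an 8 GB heap and no explicit setting, the default formula yields 32 MB.
        System.out.println(capacityMb(0, 8L * 1024 * 1024 * 1024)); // prints 32
    }
}
```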



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:19 PM:
--

I'll check this one, but meanwhile I'll share a test case that reproduces the 
problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
" );");


String insertQuery = "insert into junit.magic (" +
"tenant_id," +
"str_key," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct tenant_id,str_key from 
junit.magic;";
Statement statementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize = new 
SimpleStatement(query);
statementWithModifiedFetchSize.setFetchSize(100);

// result set for the default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size 100 (<= 498)
ResultSet resultSetForModifiedFetchSize = 
session.execute(statementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedFetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}


was (Author: varuna):
I'll check this one, but I'll share a test case that reproduces the 
problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
   

[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1525#comment-1525
 ] 

Robert Stupp commented on CASSANDRA-11537:
--

Hmm. The branch you provided is against 2.1, which is EOL. 2.2 and 3.0 are in 
maintenance mode. I'd categorize this patch as a new feature (not a bug fix), 
and new features need to go into trunk.
Can you rebase against trunk?

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned assertions 
> off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
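The proposed behavior, failing fast with a descriptive exception instead of the
bare {{AssertionError}} from {{Keyspace.open}}, can be sketched roughly as
below. All names here are hypothetical illustrations, not the actual Cassandra
API.

```java
// Hypothetical startup guard: nodetool-triggered operations check it first and
// get a clear IllegalStateException while the daemon is still initializing.
public class StartupGuard {
    private volatile boolean setupCompleted = false;

    public void markSetupCompleted() { setupCompleted = true; }

    /** Call at the top of operations such as scrub/compact/upgradesstables. */
    public void checkServiceReady(String operation) {
        if (!setupCompleted)
            throw new IllegalStateException(
                operation + " cannot run: the server is still starting up, retry later");
    }

    public static void main(String[] args) {
        StartupGuard guard = new StartupGuard();
        try {
            guard.checkServiceReady("upgradesstables");
        } catch (IllegalStateException e) {
            // The operator now sees a message instead of a bare stack trace.
            System.out.println(e.getMessage());
        }
    }
}
```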



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:15 PM:
--

I'll check this one, but I'll share a test case that reproduces the 
problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
" );");


String insertQuery = "insert into junit.magic (" +
"tenant_id," +
"str_key," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct tenant_id,str_key from 
junit.magic;";
Statement statementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize = new 
SimpleStatement(query);
statementWithModifiedFetchSize.setFetchSize(100);

// result set for the default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size 100 (<= 498)
ResultSet resultSetForModifiedFetchSize = 
session.execute(statementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedFetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}


was (Author: varuna):
I'll check this one, but I'll share a test case that reproduces the 
problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
 

[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:14 PM:
--

I'll check this one, but I'll share a test case that reproduces the 
problem:

{code:xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
" );");


String insertQuery = "insert into junit.magic (" +
"tenant_id," +
"str_key," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct tenant_id,str_key from 
product.magic498;";
Statement statementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize = new 
SimpleStatement(query);
statementWithModifiedFetchSize.setFetchSize(100);

// result set for the default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size 100 (<= 498)
ResultSet resultSetForModifiedFetchSize = 
session.execute(statementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedFetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}


was (Author: varuna):
I'll check this one, but I'll share a test case that reproduces the 
problem:

{code-xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +

[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala commented on CASSANDRA-11679:
--

I'll check this one, but I'll share a test case that reproduces the 
problem:

{code-xml}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
private static Cluster cluster;
private static Session session;


@BeforeClass
public static void init(){
cluster=Cluster.builder().addContactPoint("127.0.0.1").build();
session = cluster.connect();
}

@AfterClass
public static void close(){
cluster.close();
}

@Before
public void initData(){

session.execute("drop keyspace if exists junit;");
session.execute("create keyspace junit WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor' : 1 };");

session.execute("CREATE TABLE junit.magic (" +
"  tenant_id text," +
"  str_key text," +
"  row_id int," +
"  value int," +
"  PRIMARY KEY((tenant_id,str_key))" +
" );");


String insertQuery = "insert into junit.magic (" +
"tenant_id," +
"str_key," +
"row_id," +
"value" +
")" +
"VALUES ( " +
"'test_tenant',"+
"'%s',"+
"null,"+
"0" +
");";

for(int i=0;i<498;i++){
session.execute(String.format(insertQuery, ""+i));
}
System.out.println("498 records inserted successfully!!!");
}


@Test
public void checkDistKeysForMagic498(){

String query = "select distinct tenant_id,str_key from 
product.magic498;";
Statement statementWithModifiedFetchSize = new 
SimpleStatement(query);
Statement statementWithDefaultFetchSize = new 
SimpleStatement(query);
statementWithModifiedFetchSize.setFetchSize(100);

// result set for the default fetch size
ResultSet resultSetForDefaultFetchSize = 
session.execute(statementWithDefaultFetchSize);
int totalDistinctKeysForDefaultFetchSize = 
resultSetForDefaultFetchSize.all().size();
assertEquals(498, totalDistinctKeysForDefaultFetchSize);

// result set with fetch size 100 (<= 498)
ResultSet resultSetForModifiedFetchSize = 
session.execute(statementWithModifiedFetchSize);
int totalDistinctKeysForModifiedFetchSize = 
resultSetForModifiedFetchSize.all().size();
assertEquals(503, totalDistinctKeysForModifiedFetchSize);
}
}

{code}

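As a possible client-side stop-gap while the bug is investigated, keys
duplicated across page boundaries can be filtered with a set. This is a
plain-Java sketch over made-up data (no driver dependency); the page contents
below only mimic the 503-vs-498 symptom.

```java
import java.util.*;

// Sketch: de-duplicate keys returned across fetched pages while preserving
// their first-seen order, so a few boundary keys leaking into two pages do
// not inflate the distinct count.
public class DistinctKeyDedup {
    public static List<String> dedup(List<List<String>> pages) {
        Set<String> seen = new LinkedHashSet<>();
        for (List<String> page : pages)
            seen.addAll(page);            // duplicates across page boundaries are dropped
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        // Two simulated pages where "k2" appears in both, as on a page boundary.
        List<List<String>> pages = Arrays.asList(
            Arrays.asList("k0", "k1", "k2"),
            Arrays.asList("k2", "k3", "k4"));
        System.out.println(dedup(pages)); // prints [k0, k1, k2, k3, k4]
    }
}
```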
> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> There are 498 distinct rows, so a query for all distinct keys returns 503 
> instead of 498 (five keys appear twice).
> But if I set the fetch size on the select statement to more than 498, it 
> returns exactly 498 rows. 
> Executing the same statement in DevCenter also returns 498 rows.
> Some additional useful information: 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = 

[jira] [Commented] (CASSANDRA-11602) Materialized View Does Not Have Static Columns

2016-05-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266656#comment-15266656
 ] 

Sylvain Lebresne commented on CASSANDRA-11602:
--

Sure. We'll be eagerly waiting on your better and not fundamentally wrong 
design. In the meantime though, let's start by using this ticket to fix the 
validation.

> Materialized View Does Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have the "location" column. Static columns are not 
> created in the MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11639) guide_8099.md is a bit outdated

2016-05-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11639.
--
Resolution: Won't Fix

That guide was never meant to be an up-to-date document of the internals, which 
is probably why it doesn't do a good job at it, and it's very outdated indeed.  
It was only meant as a summary of the changes in CASSANDRA-8099 to make them 
easier to digest for both the reviewer and the rest of the C* devs. It has 
served its purpose, and we're past that now, so I removed the guide. I don't 
think maintaining that kind of document is really worth the effort. It's not 
good as an introduction for new developers, since it strongly assumes 
familiarity with the pre-8099 code, and it's way too specific about particular 
classes and details for us to ever keep it up to date.


> guide_8099.md is a bit outdated
> ---
>
> Key: CASSANDRA-11639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11639
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Wei Deng
>
> The guide_8099.md document that comes with the 8099 commit (dated Sep 1, 
> 2014) is a bit outdated.
> For example, in many places of the Sep 1, 2014 
> [commit|https://github.com/apache/cassandra/tree/a991b64811f4d6adb6c7b31c0df52288eb06cf19]
>  we no longer use {{Atom}} to represent the unit when dealing with Row or 
> RangeTombstoneMarker and we use {{Unfiltered}} instead. However, you can 
> still find many references of {{Atom}} in this document. Also, AtomIterator 
> is no longer a thing and it's still referenced later in the document.
> Another example of the outdated information is: we have removed all of the 
> "flyweight" pattern due to CASSANDRA-9705 in C* 3.0 GA, but this document 
> still talks about flyweight under a separate section.
> One more example: CASSANDRA-8933 turns out is not a problem out of pure luck, 
> but we still have a section called "Short Reads" in this doc.
> Another suggestion for the new revision of guide_8099.md: it appears that some 
> very useful information regarding the 3.0 storage format is hidden in the 
> code comments, especially in some of the Serializer classes (e.g. 
> UnfilteredSerializer, UnfilteredRowIteratorSerializer). If we could integrate 
> the information from those javadoc comments here, it would be very helpful 
> (so that people don't have to look in two places).
> As guide_8099.md is considered a key document for an important area of 
> under-the-hood Cassandra implementation, to help new developers navigate 
> the storage engine area better and to avoid unnecessary confusion, we 
> should strive to keep it as updated as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Remove outdated reviewer guide for 8099 (see CASSANDRA-11639)

2016-05-02 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 89464ead4 -> b178d899d
  refs/heads/trunk 4edd9ed5e -> 3513fbcfb


Remove outdated reviewer guide for 8099 (see CASSANDRA-11639)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b178d899
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b178d899
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b178d899

Branch: refs/heads/cassandra-3.0
Commit: b178d899d9cc07b4b4cbfd941a01073563e6774d
Parents: 89464ea
Author: Sylvain Lebresne 
Authored: Mon May 2 16:08:24 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 16:08:31 2016 +0200

--
 guide_8099.md | 376 -
 1 file changed, 376 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b178d899/guide_8099.md
--
diff --git a/guide_8099.md b/guide_8099.md
deleted file mode 100644
index 627b9dc..000
--- a/guide_8099.md
+++ /dev/null
@@ -1,376 +0,0 @@
-Overview of CASSANDRA-8099 changes
-==
-
-The goal of this document is to provide an overview of the main changes done in
-CASSANDRA-8099 so it's easier to dive into the patch. This assumes knowledge of
-the pre-existing code.
-
-CASSANDRA-8099 refactors the abstractions used by the storage engine and as
-such impact most of the code of said engine. The changes can be though of as
-following two main guidelines:
-1. the new abstractions are much more iterator-based (i.e. it tries harder to
-   avoid materializing everything in memory),
-2. and they are closer to the CQL representation of things (i.e. the storage
-   engine is aware of more structure, making it able to optimize accordingly).
-Note that while those changes have heavy impact on the actual code, the basic
-mechanisms of the read and write paths are largely unchanged.
-
-In the following, I'll start by describe the new abstractions introduced by the
-patch. I'll then provide a quick reference of existing class to what it becomes
-in the patch, after which I'll discuss how the refactor handles a number of
-more specific points. Lastly, the patch introduces some change to the on-wire
-and on-disk format so I'll discuss those quickly.
-
-
-Main new abstractions
--
-
-### Atom: Row and RangeTombstoneMarker
-
-Where the existing storage engine is mainly handling cells, the new engine
-groups cells into rows, and rows becomes the more central building block. A
-`Row` is identified by the value of it's clustering columns which are stored in
-a `Clustering` object (see below), and it associate a number of cells to each
-of its non-PK non-static columns (we'll discuss static columns more
-specifically later).
-
-The patch distinguishes 2 kind of columns: simple and complex ones. The
-_simple_ columns can have only 1 cell associated to them (or none), while the
-_complex_ ones will have an arbitrary number of cells associated.  Currently,
-the complex columns are only the non frozen collections (but we'll have
-non-frozen udt at some point and who knows what in the future).
-
-Like before, we also have to deal with range tombstones. However, instead of
-dealing with full range tombstones, we generally deal with
-`RangeTombstoneMarker` which is just one of the bound of the range tombstone
-(so that a range tombstone is composed of 2 "marker" in practice, its start and
-its end). I'll discuss the reasoning for this a bit more later. A
-`RangeTombstoneMarker` is identified by a `Slice.Bound` (which is to RT markers
-what the `Clustering` is to `Row`) and simply store its deletion information.
-
-The engine thus mainly work with rows and range tombstone markers, and they are
-both grouped under the common `Atom` interface. An "unfiltered" is thus just 
that:
-either a row or a range tombstone marker.
-
-> Side Note: the "Atom" naming is pretty bad. I've reused it mainly because it
-> plays a similar role to the existing OnDiskAtom, but it's arguably crappy now
-> because a row is definitively not "indivisible". Anyway, renaming suggestions
-> are more than welcome. The only alternative I've come up so far are "Block"
-> or "Element" but I'm not entirely convinced by either.
-
-### ClusteringPrefix, Clustering, Slice.Bound and ClusteringComparator
-
-Atoms are sorted (within a partition). They are ordered by their
-`ClusteringPrefix`, which is mainly a common interface for the `Clustering` of
-`Row`, and the `Slice.Bound` of `RangeTombstoneMarker`. More generally, a
-`ClusteringPrefix` is a prefix of the clustering values for the clustering
-columns of the table involved, with a `Clustering` being the special case where
-all values 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-05-02 Thread slebresne
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3513fbcf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3513fbcf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3513fbcf

Branch: refs/heads/trunk
Commit: 3513fbcfb9c721b4569c9e4e3d8427d0351fa4b4
Parents: 4edd9ed b178d89
Author: Sylvain Lebresne 
Authored: Mon May 2 16:08:39 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 16:08:39 2016 +0200

--
 guide_8099.md | 376 -
 1 file changed, 376 deletions(-)
--




[2/3] cassandra git commit: Remove outdated reviewer guide for 8099 (see CASSANDRA-11639)

2016-05-02 Thread slebresne
Remove outdated reviewer guide for 8099 (see CASSANDRA-11639)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b178d899
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b178d899
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b178d899

Branch: refs/heads/trunk
Commit: b178d899d9cc07b4b4cbfd941a01073563e6774d
Parents: 89464ea
Author: Sylvain Lebresne 
Authored: Mon May 2 16:08:24 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 16:08:31 2016 +0200

--
 guide_8099.md | 376 -
 1 file changed, 376 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b178d899/guide_8099.md
--
diff --git a/guide_8099.md b/guide_8099.md
deleted file mode 100644
index 627b9dc..000
--- a/guide_8099.md
+++ /dev/null
@@ -1,376 +0,0 @@
-Overview of CASSANDRA-8099 changes
-==
-
-The goal of this document is to provide an overview of the main changes done in
-CASSANDRA-8099 so it's easier to dive into the patch. This assumes knowledge of
-the pre-existing code.
-
-CASSANDRA-8099 refactors the abstractions used by the storage engine and as
-such impact most of the code of said engine. The changes can be though of as
-following two main guidelines:
-1. the new abstractions are much more iterator-based (i.e. it tries harder to
-   avoid materializing everything in memory),
-2. and they are closer to the CQL representation of things (i.e. the storage
-   engine is aware of more structure, making it able to optimize accordingly).
-Note that while those changes have heavy impact on the actual code, the basic
-mechanisms of the read and write paths are largely unchanged.
-
-In the following, I'll start by describe the new abstractions introduced by the
-patch. I'll then provide a quick reference of existing class to what it becomes
-in the patch, after which I'll discuss how the refactor handles a number of
-more specific points. Lastly, the patch introduces some change to the on-wire
-and on-disk format so I'll discuss those quickly.
-
-
-Main new abstractions
--
-
-### Atom: Row and RangeTombstoneMarker
-
-Where the existing storage engine is mainly handling cells, the new engine
-groups cells into rows, and rows becomes the more central building block. A
-`Row` is identified by the value of it's clustering columns which are stored in
-a `Clustering` object (see below), and it associate a number of cells to each
-of its non-PK non-static columns (we'll discuss static columns more
-specifically later).
-
-The patch distinguishes 2 kind of columns: simple and complex ones. The
-_simple_ columns can have only 1 cell associated to them (or none), while the
-_complex_ ones will have an arbitrary number of cells associated.  Currently,
-the complex columns are only the non frozen collections (but we'll have
-non-frozen udt at some point and who knows what in the future).
-
-Like before, we also have to deal with range tombstones. However, instead of
-dealing with full range tombstones, we generally deal with
-`RangeTombstoneMarker`, which is just one of the bounds of the range tombstone
-(so that a range tombstone is composed of 2 "markers" in practice, its start
-and its end). I'll discuss the reasoning for this a bit more later. A
-`RangeTombstoneMarker` is identified by a `Slice.Bound` (which is to RT markers
-what the `Clustering` is to `Row`) and simply stores its deletion information.
-
-The engine thus mainly works with rows and range tombstone markers, and they
-are both grouped under the common `Atom` interface. An atom is thus just that:
-either a row or a range tombstone marker.
-
-> Side Note: the "Atom" naming is pretty bad. I've reused it mainly because it
-> plays a similar role to the existing OnDiskAtom, but it's arguably crappy now
-> because a row is definitively not "indivisible". Anyway, renaming suggestions
-> are more than welcome. The only alternatives I've come up with so far are
-> "Block" or "Element" but I'm not entirely convinced by either.
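The hierarchy described above can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration of the relationships between `Atom`, `Row`, `RangeTombstoneMarker`, `ClusteringPrefix`, `Clustering` and `Slice.Bound`; the real Cassandra classes carry far more state and behavior, and the names of components here (cells as plain strings, `Bound` as a top-level record) are illustrative only:

```java
import java.util.List;

public class AtomSketch {
    // A prefix of the clustering values of a table.
    interface ClusteringPrefix {
        List<Object> values();
    }

    // Common interface for rows and range tombstone markers.
    interface Atom {
        ClusteringPrefix clustering();
    }

    // Special case of a prefix where all clustering values are provided.
    record Clustering(List<Object> values) implements ClusteringPrefix {}

    // A row: a full clustering plus its cells (simplified to strings here).
    record Row(Clustering clustering, List<String> cells) implements Atom {}

    // One bound of a range tombstone (its start or its end); may be a true
    // prefix, carrying only some of the clustering values.
    record Bound(List<Object> values, boolean isStart) implements ClusteringPrefix {}

    // A marker just stores its bound and the deletion information.
    record RangeTombstoneMarker(Bound clustering, long deletionTime) implements Atom {}

    public static void main(String[] args) {
        Atom row = new Row(new Clustering(List.of("ck1")), List.of("v1"));
        Atom marker = new RangeTombstoneMarker(new Bound(List.of("ck1"), true), 12345L);
        // Both atoms are positioned in the partition by their clustering prefix.
        System.out.println(row.clustering().values());
        System.out.println(marker.clustering().values());
    }
}
```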
-
-### ClusteringPrefix, Clustering, Slice.Bound and ClusteringComparator
-
-Atoms are sorted (within a partition). They are ordered by their
-`ClusteringPrefix`, which is mainly a common interface for the `Clustering` of
-`Row`, and the `Slice.Bound` of `RangeTombstoneMarker`. More generally, a
-`ClusteringPrefix` is a prefix of the clustering values for the clustering
-columns of the table involved, with a `Clustering` being the special case where
-all values are provided. A `Slice.Bound` can be a true prefix however, having
-only some of the clustering values. Further, a `Slice.Bound` can be either a

[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-05-02 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266651#comment-15266651
 ] 

Edward Capriolo commented on CASSANDRA-11537:
-

This should be good. 
https://github.com/apache/cassandra/compare/cassandra-2.1...edwardcapriolo:exception-on-startup?expand=1

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>  Labels: lhf
>
> As an ops person upgrading and servicing Cassandra servers, I need a clearer 
> message when I issue a nodetool command that the server is not ready for, so 
> that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned assertions 
> off, it is unclear what might happen.
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1
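A guard of the kind discussed above could look roughly like this. This is a hypothetical sketch only; the actual change is in the linked branch, and the class, method, and message names here are illustrative:

```java
// Replace the bare AssertionError from Keyspace.open with a clear,
// user-facing exception when a nodetool command arrives too early.
public class NodetoolGuardSketch {
    // Tracks whether server startup has finished.
    private volatile boolean setupCompleted = false;

    public void markSetupCompleted() { setupCompleted = true; }

    // Called at the top of nodetool-facing operations (upgradesstables,
    // scrub, compact, ...) before any keyspace is opened.
    public void checkReady(String operation) {
        if (!setupCompleted)
            throw new IllegalStateException(
                "Cannot run " + operation + ": the server is still starting up");
    }

    public static void main(String[] args) {
        NodetoolGuardSketch guard = new NodetoolGuardSketch();
        try {
            guard.checkReady("upgradesstables");
        } catch (IllegalStateException e) {
            // The operator sees a clear message instead of "error: null".
            System.out.println(e.getMessage());
        }
        guard.markSetupCompleted();
        guard.checkReady("upgradesstables"); // passes once startup is complete
        System.out.println("ready");
    }
}
```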



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-05-02 Thread Ravishankar Rajendran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266650#comment-15266650
 ] 

Ravishankar Rajendran commented on CASSANDRA-11602:
---

I do not recommend having anything inefficient. My claim is that if it is 
inefficient, then something is fundamentally wrong in the design.
We cannot keep building on a design that does not help create efficient code.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
> Fix For: 3.0.x, 3.x
>
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.





[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266641#comment-15266641
 ] 

Benjamin Lerer edited comment on CASSANDRA-11679 at 5/2/16 1:55 PM:


I could not reproduce the problem that you described.
Could you run the following code using your java driver and tell me if you get 
more than 490 rows?
{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
{'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, 
PRIMARY KEY((a, b)))");

PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c) 
VALUES (?, ?, ?);");
for (int i = 0; i < 10; i++)
for (int j = 0; j < 49; j++)
session.execute(prepare.bind(i, j, i + j));

ResultSet rs = session.execute(new SimpleStatement("SELECT DISTINCT a, 
b FROM test")
  .setFetchSize(100));
int count = 0;
for (Row row : rs)
{
count++;
}
System.out.println(count);
{code}


was (Author: blerer):
I could not reproduce the problem that you described.
Could you run the following code using your java driver and tell me if you get 
more than 490 rows?
{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
{'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, 
PRIMARY KEY((a, b), c))");

PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c) 
VALUES (?, ?, ?);");
for (int i = 0; i < 10; i++)
for (int j = 0; j < 49; j++)
session.execute(prepare.bind(i, j, i + j));

ResultSet rs = session.execute(new SimpleStatement("SELECT DISTINCT a, 
b FROM test")
  .setFetchSize(100));
int count = 0;
for (Row row : rs)
{
count++;
}
System.out.println(count);
{code}

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> The total number of distinct rows is 498, so if I perform a query to get all 
> distinct keys it returns 503 instead of 498 (five keys twice).
> But if I set the fetch size in the select statement to more than 498, it 
> returns exactly 498 rows.
> And if I execute the same statement in DevCenter, it returns 498 rows.
> Some additional useful information: 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}





[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266641#comment-15266641
 ] 

Benjamin Lerer commented on CASSANDRA-11679:


I could not reproduce the problem that you describe.
Could you run the following code using your java driver and tell me if you get 
more than 490 rows?
{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
{'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, 
PRIMARY KEY((a, b), c))");

PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c) 
VALUES (?, ?, ?);");
for (int i = 0; i < 10; i++)
for (int j = 0; j < 49; j++)
session.execute(prepare.bind(i, j, i + j));

ResultSet rs = session.execute(new SimpleStatement("SELECT DISTINCT a, 
b FROM test")
  .setFetchSize(100));
int count = 0;
for (Row row : rs)
{
count++;
}
System.out.println(count);
{code}

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> The total number of distinct rows is 498, so if I perform a query to get all 
> distinct keys it returns 503 instead of 498 (five keys twice).
> But if I set the fetch size in the select statement to more than 498, it 
> returns exactly 498 rows.
> And if I execute the same statement in DevCenter, it returns 498 rows.
> Some additional useful information: 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}





[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266641#comment-15266641
 ] 

Benjamin Lerer edited comment on CASSANDRA-11679 at 5/2/16 1:55 PM:


I could not reproduce the problem that you described.
Could you run the following code using your java driver and tell me if you get 
more than 490 rows?
{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
{'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, 
PRIMARY KEY((a, b), c))");

PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c) 
VALUES (?, ?, ?);");
for (int i = 0; i < 10; i++)
for (int j = 0; j < 49; j++)
session.execute(prepare.bind(i, j, i + j));

ResultSet rs = session.execute(new SimpleStatement("SELECT DISTINCT a, 
b FROM test")
  .setFetchSize(100));
int count = 0;
for (Row row : rs)
{
count++;
}
System.out.println(count);
{code}


was (Author: blerer):
I could not reproduce the problem that you describe.
Could you run the following code using your java driver and tell me if you get 
more than 490 rows?
{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = 
{'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, 
PRIMARY KEY((a, b), c))");

PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c) 
VALUES (?, ?, ?);");
for (int i = 0; i < 10; i++)
for (int j = 0; j < 49; j++)
session.execute(prepare.bind(i, j, i + j));

ResultSet rs = session.execute(new SimpleStatement("SELECT DISTINCT a, 
b FROM test")
  .setFetchSize(100));
int count = 0;
for (Row row : rs)
{
count++;
}
System.out.println(count);
{code}

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior:
> The total number of distinct rows is 498, so if I perform a query to get all 
> distinct keys it returns 503 instead of 498 (five keys twice).
> But if I set the fetch size in the select statement to more than 498, it 
> returns exactly 498 rows.
> And if I execute the same statement in DevCenter, it returns 498 rows.
> Some additional useful information: 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}





[jira] [Commented] (CASSANDRA-11475) MV code refactor

2016-05-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266630#comment-15266630
 ] 

Sylvain Lebresne commented on CASSANDRA-11475:
--

bq. I think {{ViewMutationGenerator}} is more explanatory; 
generateViewMutations and build don't really describe to me how the API is 
supposed to be used. I would have expected {{generateViewMutations}} to return 
the {{PartitionUpdate}} s; I guess {{addRow}} and {{generateMutations}} would 
be more descriptive to me.

Sgtm, made that change.
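The API shape agreed on above, with {{addRow}} feeding base-table state in and {{generateMutations}} returning the accumulated view updates, could be sketched as follows. This is an illustrative mock-up, not the committed Cassandra code; the types and names beyond the two discussed method names are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

public class ViewMutationGeneratorSketch {
    // Stand-in for a real view mutation / PartitionUpdate.
    record Mutation(String viewName, String baseRowKey) {}

    static class ViewUpdateGenerator {
        private final String viewName;
        private final List<Mutation> mutations = new ArrayList<>();

        ViewUpdateGenerator(String viewName) { this.viewName = viewName; }

        // Incrementally combine an existing base row with the update being
        // applied, instead of buffering everything as TemporalRow.Set did.
        void addRow(String baseRowKey) {
            mutations.add(new Mutation(viewName, baseRowKey));
        }

        // Return the view updates accumulated so far.
        List<Mutation> generateMutations() { return List.copyOf(mutations); }
    }

    public static void main(String[] args) {
        ViewUpdateGenerator gen = new ViewUpdateGenerator("teamMV");
        gen.addRow("Ricciardo11");
        gen.addRow("Ricciardo12");
        System.out.println(gen.generateMutations().size()); // prints 2
    }
}
```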

> MV code refactor
> 
>
> Key: CASSANDRA-11475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11475
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
>
> While working on CASSANDRA-5546 I run into a problem with TTLs on MVs, which 
> looking more closely is a bug of the MV code. But one thing leading to 
> another I reviewed a good portion of the MV code and found the following 
> correctness problems:
> * If a base row is TTLed, then even if an update removes that TTL, the view 
> entry remains TTLed and expires, leading to an inconsistency.
> * Due to calling the wrong ctor for {{LivenessInfo}}, when a TTL was set on 
> the base table, the view entry was living twice as long as the TTL. Again 
> leading to a temporary inconsistency.
> * When reading existing data to compute view updates, all deletion 
> information is completely ignored (the code uses a {{PartitionIterator}} 
> instead of an {{UnfilteredPartitionIterator}}). This is a serious issue since 
> it means some deletions could be totally ignored as far as views are 
> concerned especially when messages are delivered to a replica out of order. 
> I'll note that while the 2 previous points are relatively easy to fix, I 
> didn't find an easy and clean way to fix this one on the current code.
> Further, I think the MV code in general has inefficiencies/code complexities 
> that should be avoidable:
> * {{TemporalRow.Set}} is buffering both everything read and a pretty much 
> complete copy of the updates. That's a potentially high memory requirement. 
> We shouldn't have to copy the updates and we shouldn't buffer all reads but 
> rather work incrementally.
> * {{TemporalRow}}/{{TemporalRow.Set}}/{{TemporalCell}} classes are somewhat 
> re-inventing the wheel. They are really just storing both an update we're 
> doing and the corresponding existing data, but we already have 
> {{Row}}/{{Partition}}/{{Cell}} for that. In practice, those {{Temporal*}} 
> class generates a lot of allocations that we could avoid.
> * The code from CASSANDRA-10060 to avoid multiple reads of the base table 
> with multiple views doesn't work when the update has partition/range 
> tombstones because the code uses {{TemporalRow.Set.setTombstonedExisting()}} 
> to trigger reuse, but the {{TemporalRow.Set.withNewViewPrimaryKey()}} method 
> is used between views and it does not preserve the {{hasTombstonedExisting}} 
> flag. But that oversight, which is trivial to fix, is kind of a good thing 
> since if you fix it, you're left with a correctness problem.
>   The read done when there is a partition deletion depends on the view itself 
> (if there is clustering filters in particular) and so reusing that read for 
> other views is wrong. Which makes that whole reuse code really dodgy imo: the 
> read for existing data is in {{View.java}}, suggesting that it depends on the 
> view (which again, it does at least for partition deletion), but it shouldn't 
> if we're going to reuse the result across multiple views.
> * Even ignoring the previous point, we still potentially read the base table 
> twice if the update mix both row updates and partition/range deletions, 
> potentially re-reading the same values.
> * It's probably more minor but the reading code is using {{QueryPager}}, 
> which is probably an artifact of the initial version of the code being 
> pre-8099, but it's not necessary anymore (the reads are local and locally 
> we're completely iterator-based), adding overhead, especially when we do page. I'll 
> note that despite using paging, the current code still buffers everything in 
> {{TemporalRow.Set}} anyway .
> Overall, I suspect trying to fix the problems above (particularly the fact 
> that existing deletion infos are ignored) is only going to add complexity 
> with the current code and we'd still have to fix the inefficiencies. So I 
> propose a refactor of that code which does 2 main things:
> # it removes all of {{TemporalRow}} and related classes. Instead, it directly 
> uses the existing {{Row}} (with all its deletion infos) and the update being 
> applied to it and compute the updates for the view from that. I submit that 
> this is more clear/simple, but this also avoid copying every cell of 

[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266528#comment-15266528
 ] 

Benjamin Lerer commented on CASSANDRA-11555:


I am +1 if the tests are fine.
The 2 following nits can be fixed on commit:
* In the {{cassandra.yaml}} there is a typo in {{Default value ("auto") is 
1/256th of the the heap or 10MB, whichever is greater}}: the {{the}} is 
duplicated.
* I would use {{FileUtils.MB}} rather than {{1024 * 1024}} as it makes the code 
more readable in my opinion.
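The "auto" sizing rule quoted in the first nit above could be expressed roughly as follows. This is a hedged sketch of the formula only (1/256th of the heap or 10MB, whichever is greater, unless an explicit value is configured); the method name and parameter shape are illustrative, not the actual patch:

```java
public class CacheSizeSketch {
    static final long MB = 1024 * 1024;

    // configuredMb == null means "auto".
    static long preparedStatementCacheSizeBytes(Long configuredMb, long maxHeapBytes) {
        if (configuredMb != null)
            return configuredMb * MB;              // explicit setting wins
        return Math.max(maxHeapBytes / 256, 10 * MB); // "auto" rule
    }

    public static void main(String[] args) {
        long heap = Runtime.getRuntime().maxMemory();
        // "auto" for this JVM's heap:
        System.out.println(preparedStatementCacheSizeBytes(null, heap));
        // Explicit 64MB setting:
        System.out.println(preparedStatementCacheSizeBytes(64L, heap));
    }
}
```

For a 1GB heap, 1/256th is 4MB, so "auto" yields the 10MB floor; larger heaps scale linearly.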

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).





[jira] [Updated] (CASSANDRA-11593) getFunctions methods produce too much garbage

2016-05-02 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11593:
---
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.0.6
   3.6
   Status: Resolved  (was: Ready to Commit)

Committed into 3.0 at 89464ead48278f8e3ecfeaeaf9571714978b4f72 and merged into 
trunk.

> getFunctions methods produce too much garbage
> -
>
> Key: CASSANDRA-11593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11593
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.6, 3.0.6
>
> Attachments: 11593-3.0.txt, 11593-trunk.txt
>
>
> While profiling a heavy write workload on a single machine, I discovered that 
> calls to {{getFunctions}} were producing a lot of garbage.
> Internally, the getFunctions methods use {{Iterators}} or {{Iterables}} 
> functions that create new immutable collections at each level.
> As {{getFunctions}} is called for each SELECT, INSERT, UPDATE or DELETE 
> through the {{checkAccess}} method, the impact is not negligible. 
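The fix visible in the diff below replaces the allocating {{Iterable getFunctions()}} with an accumulator-style {{void addFunctionsTo(List functions)}}. The pattern can be illustrated like this (a simplified sketch; only the two method names mirror the patch, the surrounding types are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

public class AddFunctionsToSketch {
    // Instead of each level returning a freshly allocated Iterable, callers
    // pass one List down and every level appends into it.
    interface Term {
        void addFunctionsTo(List<String> functions);
    }

    record FunctionCall(String name, List<Term> args) implements Term {
        public void addFunctionsTo(List<String> functions) {
            functions.add(name);               // this call's function
            for (Term arg : args)              // recurse without allocating
                arg.addFunctionsTo(functions); // intermediate collections
        }
    }

    record Literal() implements Term {
        public void addFunctionsTo(List<String> functions) {} // nothing to add
    }

    public static void main(String[] args) {
        Term t = new FunctionCall("toJson",
                List.of(new FunctionCall("now", List.of()), new Literal()));
        List<String> fns = new ArrayList<>();
        t.addFunctionsTo(fns); // one list allocation for the whole tree
        System.out.println(fns); // prints [toJson, now]
    }
}
```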





[jira] [Updated] (CASSANDRA-11593) getFunctions methods produce too much garbage

2016-05-02 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11593:
---
Attachment: 11593-trunk.txt
11593-3.0.txt

> getFunctions methods produce too much garbage
> -
>
> Key: CASSANDRA-11593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11593
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
> Attachments: 11593-3.0.txt, 11593-trunk.txt
>
>
> While profiling a heavy write workload on a single machine, I discovered that 
> calls to {{getFunctions}} were producing a lot of garbage.
> Internally, the getFunctions methods use {{Iterators}} or {{Iterables}} 
> functions that create new immutable collections at each level.
> As {{getFunctions}} is called for each SELECT, INSERT, UPDATE or DELETE 
> through the {{checkAccess}} method, the impact is not negligible. 





[2/2] cassandra git commit: Merge branch cassandra-3.0 into trunk

2016-05-02 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4edd9ed5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4edd9ed5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4edd9ed5

Branch: refs/heads/trunk
Commit: 4edd9ed5e98dca9c8ebf52fa40a1f8abb0031689
Parents: 7b0c716 89464ea
Author: Benjamin Lerer 
Authored: Mon May 2 14:17:48 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon May 2 14:20:31 2016 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/AbstractConditions.java  |  5 ++-
 .../apache/cassandra/cql3/AbstractMarker.java   |  4 +--
 .../org/apache/cassandra/cql3/Attributes.java   | 15 -
 .../apache/cassandra/cql3/ColumnCondition.java  | 11 +++
 .../apache/cassandra/cql3/ColumnConditions.java |  9 ++
 .../org/apache/cassandra/cql3/Conditions.java   |  8 +++--
 src/java/org/apache/cassandra/cql3/Json.java|  4 +--
 src/java/org/apache/cassandra/cql3/Lists.java   |  4 +--
 src/java/org/apache/cassandra/cql3/Maps.java|  8 ++---
 .../org/apache/cassandra/cql3/Operation.java|  7 ++--
 .../org/apache/cassandra/cql3/Operations.java   |  9 ++
 src/java/org/apache/cassandra/cql3/Sets.java|  6 ++--
 src/java/org/apache/cassandra/cql3/Term.java|  5 ++-
 src/java/org/apache/cassandra/cql3/Terms.java   | 22 +++--
 src/java/org/apache/cassandra/cql3/Tuples.java  |  4 +--
 .../org/apache/cassandra/cql3/UserTypes.java|  4 +--
 .../cql3/functions/AbstractFunction.java|  5 ++-
 .../cassandra/cql3/functions/Function.java  |  2 +-
 .../cassandra/cql3/functions/FunctionCall.java  |  7 ++--
 .../cassandra/cql3/functions/UDAggregate.java   | 18 ++-
 .../restrictions/MultiColumnRestriction.java| 18 +--
 .../cql3/restrictions/Restriction.java  |  8 ++---
 .../cql3/restrictions/RestrictionSet.java   | 18 +--
 .../restrictions/RestrictionSetWrapper.java |  4 +--
 .../restrictions/SingleColumnRestriction.java   | 34 +---
 .../restrictions/StatementRestrictions.java |  9 +++---
 .../cassandra/cql3/restrictions/TermSlice.java  | 19 ---
 .../cql3/restrictions/TokenFilter.java  | 18 +--
 .../cql3/restrictions/TokenRestriction.java |  8 ++---
 .../selection/AbstractFunctionSelector.java |  7 ++--
 .../cassandra/cql3/selection/Selection.java |  7 ++--
 .../cassandra/cql3/selection/Selector.java  |  5 ++-
 .../cql3/selection/SelectorFactories.java   |  9 ++
 .../cql3/statements/BatchStatement.java |  4 +--
 .../statements/CreateAggregateStatement.java|  6 ++--
 .../cql3/statements/ModificationStatement.java  | 15 ++---
 .../cql3/statements/SelectStatement.java| 19 ---
 38 files changed, 165 insertions(+), 201 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4edd9ed5/CHANGES.txt
--
diff --cc CHANGES.txt
index 984ad55,95db7bc..c49249c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,68 -1,5 +1,69 @@@
 -3.0.6
 +3.6
 + * Integrated JMX authentication and authorization (CASSANDRA-10091)
 + * Add units to stress ouput (CASSANDRA-11352)
 + * Fix PER PARTITION LIMIT for single and multi partitions queries 
(CASSANDRA-11603)
 + * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863)
 + * Clarify ClusteringPrefix hierarchy (CASSANDRA-11213)
 + * Always perform collision check before joining ring (CASSANDRA-10134)
 + * SSTableWriter output discrepancy (CASSANDRA-11646)
 + * Fix potential timeout in NativeTransportService.testConcurrentDestroys 
(CASSANDRA-10756)
 + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
 + * Add support to rebuild from specific range (CASSANDRA-10406)
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in noetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   

[1/2] cassandra git commit: Reduce the amount of object allocations caused by the getFunctions methods

2016-05-02 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7b0c7164a -> 4edd9ed5e


Reduce the amount of object allocations caused by the getFunctions methods

patch by Benjamin Lerer; reviewed by Sam Tunnicliffe for CASSANDRA-11593


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89464ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89464ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89464ead

Branch: refs/heads/trunk
Commit: 89464ead48278f8e3ecfeaeaf9571714978b4f72
Parents: de228c7
Author: Benjamin Lerer 
Authored: Mon May 2 14:15:19 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon May 2 14:15:19 2016 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/AbstractConditions.java  |  5 ++--
 .../apache/cassandra/cql3/AbstractMarker.java   |  4 +--
 .../org/apache/cassandra/cql3/Attributes.java   | 15 --
 .../apache/cassandra/cql3/ColumnCondition.java  | 11 +++
 .../apache/cassandra/cql3/ColumnConditions.java |  9 ++
 .../org/apache/cassandra/cql3/Conditions.java   |  8 --
 src/java/org/apache/cassandra/cql3/Json.java|  4 +--
 src/java/org/apache/cassandra/cql3/Lists.java   |  4 +--
 src/java/org/apache/cassandra/cql3/Maps.java|  8 ++
 .../org/apache/cassandra/cql3/Operation.java|  7 +++--
 .../org/apache/cassandra/cql3/Operations.java   |  9 ++
 src/java/org/apache/cassandra/cql3/Sets.java|  6 ++--
 src/java/org/apache/cassandra/cql3/Term.java|  5 ++--
 src/java/org/apache/cassandra/cql3/Terms.java   | 22 +++---
 src/java/org/apache/cassandra/cql3/Tuples.java  |  4 +--
 .../org/apache/cassandra/cql3/UserTypes.java|  4 +--
 .../cql3/functions/AbstractFunction.java|  5 ++--
 .../cassandra/cql3/functions/Function.java  |  2 +-
 .../cassandra/cql3/functions/FunctionCall.java  |  7 ++---
 .../cassandra/cql3/functions/UDAggregate.java   | 18 ++--
 .../ForwardingPrimaryKeyRestrictions.java   |  4 +--
 .../restrictions/MultiColumnRestriction.java| 18 ++--
 .../restrictions/PrimaryKeyRestrictionSet.java  |  4 +--
 .../cql3/restrictions/Restriction.java  |  8 +++---
 .../cql3/restrictions/RestrictionSet.java   | 18 ++--
 .../cql3/restrictions/Restrictions.java |  9 +++---
 .../restrictions/SingleColumnRestriction.java   | 30 +---
 .../restrictions/StatementRestrictions.java |  9 +++---
 .../cassandra/cql3/restrictions/TermSlice.java  | 19 +
 .../cql3/restrictions/TokenRestriction.java |  8 +++---
 .../selection/AbstractFunctionSelector.java |  7 ++---
 .../cassandra/cql3/selection/Selection.java |  7 ++---
 .../cassandra/cql3/selection/Selector.java  |  5 ++--
 .../cql3/selection/SelectorFactories.java   |  9 ++
 .../cql3/statements/BatchStatement.java |  4 +--
 .../statements/CreateAggregateStatement.java|  6 ++--
 .../cql3/statements/ModificationStatement.java  | 15 +++---
 .../cql3/statements/SelectStatement.java| 15 --
 39 files changed, 158 insertions(+), 195 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/89464ead/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1f715f4..95db7bc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Reduce the amount of object allocations caused by the getFunctions methods 
(CASSANDRA-11593)
  * Potential error replaying commitlog with smallint/tinyint/date/time types 
(CASSANDRA-11618)
  * Fix queries with filtering on counter columns (CASSANDRA-11629)
  * Improve tombstone printing in sstabledump (CASSANDRA-11655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89464ead/src/java/org/apache/cassandra/cql3/AbstractConditions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/AbstractConditions.java 
b/src/java/org/apache/cassandra/cql3/AbstractConditions.java
index 71e3595..530d2b1 100644
--- a/src/java/org/apache/cassandra/cql3/AbstractConditions.java
+++ b/src/java/org/apache/cassandra/cql3/AbstractConditions.java
@@ -17,7 +17,7 @@
  */
 package org.apache.cassandra.cql3;
 
-import java.util.Collections;
+import java.util.List;
 
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
@@ -28,9 +28,8 @@ import org.apache.cassandra.cql3.functions.Function;
  */
 abstract class AbstractConditions implements Conditions
 {
-    public Iterable<Function> getFunctions()
+    public void addFunctionsTo(List<Function> functions)
     {
-        return Collections.emptyList();
     }
 
     public Iterable<ColumnDefinition> getColumns()


cassandra git commit: Reduce the amount of object allocations caused by the getFunctions methods

2016-05-02 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 de228c77b -> 89464ead4


Reduce the amount of object allocations caused by the getFunctions methods

patch by Benjamin Lerer; reviewed by Sam Tunnicliffe for CASSANDRA-11593


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89464ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89464ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89464ead

Branch: refs/heads/cassandra-3.0
Commit: 89464ead48278f8e3ecfeaeaf9571714978b4f72
Parents: de228c7
Author: Benjamin Lerer 
Authored: Mon May 2 14:15:19 2016 +0200
Committer: Benjamin Lerer 
Committed: Mon May 2 14:15:19 2016 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/AbstractConditions.java  |  5 ++--
 .../apache/cassandra/cql3/AbstractMarker.java   |  4 +--
 .../org/apache/cassandra/cql3/Attributes.java   | 15 --
 .../apache/cassandra/cql3/ColumnCondition.java  | 11 +++
 .../apache/cassandra/cql3/ColumnConditions.java |  9 ++
 .../org/apache/cassandra/cql3/Conditions.java   |  8 --
 src/java/org/apache/cassandra/cql3/Json.java|  4 +--
 src/java/org/apache/cassandra/cql3/Lists.java   |  4 +--
 src/java/org/apache/cassandra/cql3/Maps.java|  8 ++
 .../org/apache/cassandra/cql3/Operation.java|  7 +++--
 .../org/apache/cassandra/cql3/Operations.java   |  9 ++
 src/java/org/apache/cassandra/cql3/Sets.java|  6 ++--
 src/java/org/apache/cassandra/cql3/Term.java|  5 ++--
 src/java/org/apache/cassandra/cql3/Terms.java   | 22 +++---
 src/java/org/apache/cassandra/cql3/Tuples.java  |  4 +--
 .../org/apache/cassandra/cql3/UserTypes.java|  4 +--
 .../cql3/functions/AbstractFunction.java|  5 ++--
 .../cassandra/cql3/functions/Function.java  |  2 +-
 .../cassandra/cql3/functions/FunctionCall.java  |  7 ++---
 .../cassandra/cql3/functions/UDAggregate.java   | 18 ++--
 .../ForwardingPrimaryKeyRestrictions.java   |  4 +--
 .../restrictions/MultiColumnRestriction.java| 18 ++--
 .../restrictions/PrimaryKeyRestrictionSet.java  |  4 +--
 .../cql3/restrictions/Restriction.java  |  8 +++---
 .../cql3/restrictions/RestrictionSet.java   | 18 ++--
 .../cql3/restrictions/Restrictions.java |  9 +++---
 .../restrictions/SingleColumnRestriction.java   | 30 +---
 .../restrictions/StatementRestrictions.java |  9 +++---
 .../cassandra/cql3/restrictions/TermSlice.java  | 19 +
 .../cql3/restrictions/TokenRestriction.java |  8 +++---
 .../selection/AbstractFunctionSelector.java |  7 ++---
 .../cassandra/cql3/selection/Selection.java |  7 ++---
 .../cassandra/cql3/selection/Selector.java  |  5 ++--
 .../cql3/selection/SelectorFactories.java   |  9 ++
 .../cql3/statements/BatchStatement.java |  4 +--
 .../statements/CreateAggregateStatement.java|  6 ++--
 .../cql3/statements/ModificationStatement.java  | 15 +++---
 .../cql3/statements/SelectStatement.java| 15 --
 39 files changed, 158 insertions(+), 195 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/89464ead/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1f715f4..95db7bc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Reduce the amount of object allocations caused by the getFunctions methods 
(CASSANDRA-11593)
  * Potential error replaying commitlog with smallint/tinyint/date/time types 
(CASSANDRA-11618)
  * Fix queries with filtering on counter columns (CASSANDRA-11629)
  * Improve tombstone printing in sstabledump (CASSANDRA-11655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/89464ead/src/java/org/apache/cassandra/cql3/AbstractConditions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/AbstractConditions.java 
b/src/java/org/apache/cassandra/cql3/AbstractConditions.java
index 71e3595..530d2b1 100644
--- a/src/java/org/apache/cassandra/cql3/AbstractConditions.java
+++ b/src/java/org/apache/cassandra/cql3/AbstractConditions.java
@@ -17,7 +17,7 @@
  */
 package org.apache.cassandra.cql3;
 
-import java.util.Collections;
+import java.util.List;
 
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
@@ -28,9 +28,8 @@ import org.apache.cassandra.cql3.functions.Function;
  */
 abstract class AbstractConditions implements Conditions
 {
-    public Iterable<Function> getFunctions()
+    public void addFunctionsTo(List<Function> functions)
     {
-        return Collections.emptyList();
     }
 
     public Iterable<ColumnDefinition> getColumns()
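The hunk above is representative of the whole patch: methods that returned a freshly allocated {{Iterable<Function>}} are replaced by a visitor-style method that appends into a single caller-supplied list, so nodes contributing no functions allocate nothing. A minimal sketch of the pattern, with hypothetical {{Leaf}}/{{Node}}/{{FunctionCall}} classes standing in for Cassandra's statement tree:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the allocation-reducing refactoring in CASSANDRA-11593.
// "Function" stands in for org.apache.cassandra.cql3.functions.Function.
public class AddFunctionsToSketch
{
    interface Function {}

    // Before: every node of the statement tree returned an Iterable<Function>,
    // allocating wrapper objects even when it contributed no functions.
    // After: nodes append into one caller-owned list; empty nodes do nothing.
    interface NewStyle
    {
        void addFunctionsTo(List<Function> functions);
    }

    static class Leaf implements NewStyle
    {
        public void addFunctionsTo(List<Function> functions) {} // no allocation
    }

    static class FunctionCall implements NewStyle
    {
        final Function fn = new Function() {};
        public void addFunctionsTo(List<Function> functions) { functions.add(fn); }
    }

    static class Node implements NewStyle
    {
        final List<NewStyle> children;
        Node(NewStyle... children) { this.children = Arrays.asList(children); }
        public void addFunctionsTo(List<Function> functions)
        {
            for (NewStyle child : children)
                child.addFunctionsTo(functions);
        }
    }

    public static void main(String[] args)
    {
        List<Function> functions = new ArrayList<>(); // the only collection allocated
        new Node(new Leaf(), new Node(new FunctionCall())).addFunctionsTo(functions);
        System.out.println(functions.size()); // prints 1
    }
}
```

The caller allocates one list per statement and threads it through the tree, instead of each subtree allocating and concatenating intermediate iterables.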

[jira] [Commented] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval

2016-05-02 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266474#comment-15266474
 ] 

Stefan Podkowinski commented on CASSANDRA-11349:


I'm not sure introducing a new tracker interface is the best way to handle 
this. It took me a while to figure out the differences between the 
{{update}} implementations in both trackers, since they largely share the 
same copied code. It would probably be better to have ValidationTracker 
subclass RegularCompactionTracker, with {{remove/addUnwrittenTombstone}} 
implemented as no-ops for validation.

The {{addRangeTombstone}} semantics also look like a case of leaky abstractions 
to me: it adds nothing at all for regular compaction, but serves as an early 
exit path for validation.

The good news is that the dtests and unit tests seem to pass with the patch. :)
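The subclassing alternative suggested above can be sketched as follows; all class and method names are hypothetical, taken from the review comment rather than from the actual patch:

```java
// Sketch of the review suggestion for CASSANDRA-11349: rather than a parallel
// tracker interface duplicating update(), let the validation tracker subclass
// the regular one and make the tombstone-bookkeeping hooks no-ops.
public class TrackerSketch
{
    static class RegularCompactionTracker
    {
        int bookkeepingCalls = 0;

        // The shared update logic lives in one place instead of being copied.
        void update(String rangeTombstone)
        {
            addUnwrittenTombstone(rangeTombstone);
            removeUnwrittenTombstone(rangeTombstone);
        }

        void addUnwrittenTombstone(String rt) { bookkeepingCalls++; }
        void removeUnwrittenTombstone(String rt) { bookkeepingCalls++; }
    }

    static class ValidationTracker extends RegularCompactionTracker
    {
        @Override void addUnwrittenTombstone(String rt) {}    // no-op for validation
        @Override void removeUnwrittenTombstone(String rt) {} // no-op for validation
    }

    public static void main(String[] args)
    {
        RegularCompactionTracker regular = new RegularCompactionTracker();
        ValidationTracker validation = new ValidationTracker();
        regular.update("rt");
        validation.update("rt");
        System.out.println(regular.bookkeepingCalls);    // prints 2
        System.out.println(validation.bookkeepingCalls); // prints 0
    }
}
```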

> MerkleTree mismatch when multiple range tombstones exists for the same 
> partition and interval
> -
>
> Key: CASSANDRA-11349
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11349
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Stefan Podkowinski
>  Labels: repair
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11349-2.1-v2.patch, 11349-2.1.patch
>
>
> We observed that repair, for some of our clusters, streamed a lot of data and 
> many partitions were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, 
> which is really high.
> After investigation, it appears that, if two range tombstones exists for a 
> partition for the same range/interval, they're both included in the merkle 
> tree computation.
> But, if for some reason, on another node, the two range tombstones were 
> already compacted into a single range tombstone, this will result in a merkle 
> tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent 
> on compactions (and if a partition is deleted and created multiple times, the 
> only way to ensure that repair "works correctly"/"don't overstream data" is 
> to major compact before each repair... which is not really feasible).
> Below is a list of steps that make it easy to reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 float,
> c4 float,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected 
> between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, accumulating many small SSTables 
> (up to thousands for a rather short period of time when using VNodes, the 
> time for compaction to absorb those small files), but also an increased size 
> on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266463#comment-15266463
 ] 

Robert Stupp commented on CASSANDRA-11555:
--

Applied comments (did not squash), rebased and triggered CI.

> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).





[jira] [Updated] (CASSANDRA-10091) Integrated JMX authn & authz

2016-05-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10091:

 Labels: doc-impacting  (was: )
Component/s: Observability
 Lifecycle
 Configuration

I've opened CASSANDRA-11695 as a follow-up to move the JMX configuration settings 
to yaml, as there's really no need to manage them through system properties now 
that we're always constructing the connector server programmatically.

> Integrated JMX authn & authz
> 
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration, Lifecycle, Observability
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 3.6
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a jmx 
> server. This would give us the feature without changing the Cassandra code 
> itself. However I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently build as a premain, 
> however it would be great if we could put this in Cassandra instead.





[jira] [Created] (CASSANDRA-11695) Move JMX connection config to cassandra.yaml

2016-05-02 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-11695:
---

 Summary: Move JMX connection config to cassandra.yaml
 Key: CASSANDRA-11695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11695
 Project: Cassandra
  Issue Type: Improvement
  Components: Configuration
Reporter: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.x


Since CASSANDRA-10091, we always construct the JMX connector server 
programmatically, so we could move its configuration from cassandra-env to yaml.





[jira] [Updated] (CASSANDRA-10091) Integrated JMX authn & authz

2016-05-02 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10091:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Patch Available)

Committed to trunk in {{7b0c7164aa22c156811a5d1a001c43d099aad8e4}}. In the final 
CI runs, the only failures are dtests with known issues.

||branch||testall||dtest||
|[10091-trunk|https://github.com/beobal/cassandra/tree/10091-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10091-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10091-trunk-dtest]|


> Integrated JMX authn & authz
> 
>
> Key: CASSANDRA-10091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10091
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.6
>
>
> It would be useful to authenticate with JMX through Cassandra's internal 
> authentication. This would reduce the overhead of keeping passwords in files 
> on the machine and would consolidate passwords to one location. It would also 
> allow the possibility to handle JMX permissions in Cassandra.
> It could be done by creating our own JMX server and setting custom classes 
> for the authenticator and authorizer. We could then add some parameters where 
> the user could specify what authenticator and authorizer to use in case they 
> want to make their own.
> This could also be done by creating a premain method which creates a jmx 
> server. This would give us the feature without changing the Cassandra code 
> itself. However I believe this would be a good feature to have in Cassandra.
> I am currently working on a solution which creates a JMX server and uses a 
> custom authenticator and authorizer. It is currently build as a premain, 
> however it would be great if we could put this in Cassandra instead.





[1/2] cassandra git commit: Integrated JMX Authentication and Authorization

2016-05-02 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/trunk ad7e36b8a -> 7b0c7164a


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b0c7164/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java 
b/src/java/org/apache/cassandra/tools/NodeTool.java
index d7cda95..8640b58 100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@ -309,7 +309,7 @@ public class NodeTool
 nodeClient = new NodeProbe(host, parseInt(port));
 else
 nodeClient = new NodeProbe(host, parseInt(port), username, 
password);
-} catch (IOException e)
+} catch (IOException | SecurityException e)
 {
 Throwable rootCause = Throwables.getRootCause(e);
 System.err.println(format("nodetool: Failed to connect to 
'%s:%s' - %s: '%s'.", host, port, rootCause.getClass().getSimpleName(), 
rootCause.getMessage()));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b0c7164/src/java/org/apache/cassandra/utils/JMXServerUtils.java
--
diff --git a/src/java/org/apache/cassandra/utils/JMXServerUtils.java 
b/src/java/org/apache/cassandra/utils/JMXServerUtils.java
new file mode 100644
index 000..b0e44a2
--- /dev/null
+++ b/src/java/org/apache/cassandra/utils/JMXServerUtils.java
@@ -0,0 +1,299 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.utils;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Proxy;
+import java.net.InetAddress;
+import java.rmi.NoSuchObjectException;
+import java.rmi.Remote;
+import java.rmi.RemoteException;
+import java.rmi.registry.LocateRegistry;
+import java.rmi.server.RMIClientSocketFactory;
+import java.rmi.server.RMIServerSocketFactory;
+import java.rmi.server.UnicastRemoteObject;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.management.remote.*;
+import javax.management.remote.rmi.RMIConnectorServer;
+import javax.management.remote.rmi.RMIJRMPServerImpl;
+import javax.rmi.ssl.SslRMIClientSocketFactory;
+import javax.rmi.ssl.SslRMIServerSocketFactory;
+import javax.security.auth.Subject;
+
+import com.google.common.collect.ImmutableMap;
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.sun.jmx.remote.internal.RMIExporter;
+import com.sun.jmx.remote.security.JMXPluggableAuthenticator;
+import org.apache.cassandra.auth.jmx.AuthenticationProxy;
+import sun.rmi.server.UnicastServerRef2;
+
+public class JMXServerUtils
+{
+private static final Logger logger = 
LoggerFactory.getLogger(JMXServerUtils.class);
+
+
+/**
+ * Creates a server programmatically. This allows us to set parameters which
+ * normally are inaccessible.
+ */
+public static JMXConnectorServer createJMXServer(int port, boolean local)
+throws IOException
+{
+        Map<String, Object> env = new HashMap<>();
+
+String urlTemplate = 
"service:jmx:rmi://%1$s/jndi/rmi://%1$s:%2$d/jmxrmi";
+String url;
+String host;
+InetAddress serverAddress;
+if (local)
+{
+serverAddress = InetAddress.getLoopbackAddress();
+host = serverAddress.getHostAddress();
+System.setProperty("java.rmi.server.hostname", host);
+}
+else
+{
+// if the java.rmi.server.hostname property is set, we'll take its 
value
+// and use that when creating the RMIServerSocket to which we bind 
the RMI
+// registry. This allows us to effectively restrict to a single 
interface
+// if required. See 
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4880793
+// for more detail. If the hostname property is not set, the 
registry will
+// be bound to the wildcard address
+ 

[2/2] cassandra git commit: Integrated JMX Authentication and Authorization

2016-05-02 Thread samt
Integrated JMX Authentication and Authorization

Patch by Jan Karlsson and Sam Tunnicliffe; reviewed by Jake Luciani for
CASSANDRA-10091


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b0c7164
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b0c7164
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b0c7164

Branch: refs/heads/trunk
Commit: 7b0c7164aa22c156811a5d1a001c43d099aad8e4
Parents: ad7e36b
Author: Sam Tunnicliffe 
Authored: Wed Feb 24 09:31:44 2016 +
Committer: Sam Tunnicliffe 
Committed: Mon May 2 11:44:48 2016 +0100

--
 CHANGES.txt |   1 +
 NEWS.txt|  10 +-
 conf/cassandra-env.ps1  |  32 +-
 conf/cassandra-env.sh   |  43 +-
 conf/cassandra-jaas.config  |   4 +
 doc/cql3/CQL.textile|  30 +-
 pylib/cqlshlib/cql3handling.py  |   6 +
 src/antlr/Lexer.g   |   2 +
 src/antlr/Parser.g  |  38 ++
 .../cassandra/auth/AllowAllAuthorizer.java  |   5 +
 .../cassandra/auth/CassandraAuthorizer.java |  12 +-
 .../cassandra/auth/CassandraLoginModule.java| 257 +
 .../cassandra/auth/CassandraPrincipal.java  | 134 +
 .../org/apache/cassandra/auth/JMXResource.java  | 183 ++
 .../org/apache/cassandra/auth/Permission.java   |   4 +-
 .../org/apache/cassandra/auth/Resources.java|   2 +
 .../cassandra/auth/jmx/AuthenticationProxy.java | 157 +
 .../cassandra/auth/jmx/AuthorizationProxy.java  | 512 +
 .../cassandra/service/CassandraDaemon.java  |  63 +-
 .../apache/cassandra/service/StartupChecks.java |   2 +-
 .../cassandra/service/StorageService.java   |   7 +
 .../org/apache/cassandra/tools/NodeTool.java|   2 +-
 .../apache/cassandra/utils/JMXServerUtils.java  | 299 ++
 .../utils/RMIServerSocketFactoryImpl.java   |  12 +-
 test/resources/auth/cassandra-test-jaas.conf|   4 +
 .../apache/cassandra/auth/StubAuthorizer.java   | 120 
 .../auth/jmx/AuthorizationProxyTest.java| 574 +++
 .../apache/cassandra/auth/jmx/JMXAuthTest.java  | 279 +
 .../cql3/validation/entities/UFAuthTest.java| 100 +---
 .../service/RMIServerSocketFactoryImplTest.java |   2 +-
 30 files changed, 2745 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b0c7164/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d557846..984ad55 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Integrated JMX authentication and authorization (CASSANDRA-10091)
  * Add units to stress ouput (CASSANDRA-11352)
  * Fix PER PARTITION LIMIT for single and multi partitions queries 
(CASSANDRA-11603)
  * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b0c7164/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 77d3dfd..7a29924 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -18,8 +18,14 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
-   - JSON timestamps are now in UTC and contain the timezone information, see
- CASSANDRA-11137 for more details.
+   - JMX connections can now use the same auth mechanisms as CQL clients. New 
options
+ in cassandra-env.(sh|ps1) enable JMX authentication and authorization to 
be delegated
+ to the IAuthenticator and IAuthorizer configured in cassandra.yaml. The 
default settings
+ still only expose JMX locally, and use the JVM's own security mechanisms 
when remote
+ connections are permitted. For more details on how to enable the new 
options, see the
+ comments in cassandra-env.sh. A new class of IResource, JMXResource, is 
provided for
+ the purposes of GRANT/REVOKE via CQL. See CASSANDRA-10091 for more 
details.
+   - JSON timestamps are now in UTC and contain the timezone information, see 
CASSANDRA-11137 for more details.
- Collision checks are performed when joining the token ring, regardless of 
whether
  the node should bootstrap. Additionally, replace_address can legitimately 
be used
  without bootstrapping to help with recovery of nodes with partially 
failed disks.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b0c7164/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index d0a9a24..9373ba6 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1

[jira] [Commented] (CASSANDRA-8273) Allow filtering queries can return stale data

2016-05-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266427#comment-15266427
 ] 

Robert Stupp commented on CASSANDRA-8273:
-

{{ALLOW FILTERING}} cannot provide _any_ consistency guarantee, since even 
{{ONE}} could end up on a replica with stale data - but that's the expected 
behavior for CL {{ONE}}.

I think the point that may confuse people is that {{ALLOW FILTERING}} + 
{{QUORUM}} doesn't really respect the CL.

Maybe it's worth adding a warning if CL > {{ONE}} is used with {{ALLOW 
FILTERING}}. WDYT?

With CL {{ONE}} + {{ALLOW FILTERING}} _and_ partition (or token) restriction, 
it shouldn't make a (big) difference where the filtering takes place. But for 
filtering queries w/ 2i (or no partition/token restriction) it could make a 
huge difference.

Maybe we can use replica filtering for CL {{ONE}} and coordinator filtering 
otherwise. OTOH, as you said, it makes the code path more complex.

(PS: Strike my last comment.)

> Allow filtering queries can return stale data
> -
>
> Key: CASSANDRA-8273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8273
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>
> Data filtering is done replica side. That means that a single replica with 
> stale data may make the whole query return that stale data.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v1 text, v2 int);
> CREATE INDEX ON test(v1);
> INSERT INTO test(k, v1, v2) VALUES (0, 'foo', 1);
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v2 = 2 WHERE k = 0;
> SELECT * FROM test WHERE v1 = 'foo' AND v2 = 1;
> {noformat}
> then, if A and B acknowledge the insert but C respond to the read before 
> having applied the insert, then the now stale result will be returned. Let's 
> note that this is a problem related to filtering, not 2ndary indexes.
> This issue share similarity with CASSANDRA-8272 but contrarily to that former 
> issue, I'm not sure how to fix it. Obviously, moving the filtering to the 
> coordinator would remove that problem, but doing so would, on top of not 
> being trivial to implmenent, have serious performance impact since we can't 
> know in advance how much data will be filtered and we may have to redo query 
> to replica multiple times.
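The scenario above can be simulated in a few lines: the up-to-date replica filters its own row out, so the only row the coordinator assembles comes from the stale replica, even though a quorum of replicas responded. A simplified sketch with hypothetical replica maps, not Cassandra code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the CASSANDRA-8273 scenario: replica-side filtering lets a single
// stale replica determine the result of a QUORUM read. The row for k=0 is
// modelled as just its v2 value; replica names A/B/C follow the description.
public class StaleFilteringSketch
{
    // The rows a coordinator assembles when each replica filters "v2 = 1"
    // locally before responding.
    static List<Integer> filteredRead(Map<String, Integer> replicas, String... readFrom)
    {
        List<Integer> results = new ArrayList<>();
        for (String name : readFrom)
        {
            Integer v2 = replicas.get(name);
            if (v2 == 1)              // replica-side filter
                results.add(v2);
        }
        return results;
    }

    public static void main(String[] args)
    {
        Map<String, Integer> replicas = new HashMap<>();
        replicas.put("A", 1); replicas.put("B", 1); replicas.put("C", 1);

        // QUORUM write "SET v2 = 2" acknowledged by A and B; C is stale.
        replicas.put("A", 2);
        replicas.put("B", 2);

        // QUORUM read answered by B and C: B's up-to-date row fails the
        // filter, so only stale C contributes -- the query returns stale data.
        List<Integer> rows = filteredRead(replicas, "B", "C");
        System.out.println(rows); // prints [1]: the stale row
    }
}
```

Filtering on the coordinator would instead merge B's and C's rows first, letting the newer cell win before the filter is applied.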





[jira] [Commented] (CASSANDRA-11555) Make prepared statement cache size configurable

2016-05-02 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266417#comment-15266417
 ] 

Benjamin Lerer commented on CASSANDRA-11555:


I think that it is reasonable to have the cache size be: {{1/256th of the 
heap or 10MB, whichever is greater}} but in this case the comments in 
{{Config}} and in {{DatabaseDescriptor}} are misleading. It also seems that the 
code in {{DatabaseDescriptor}} is not accurate, unless I misunderstood 
something.  {code}Math.min(Math.max(1, (int) (Runtime.getRuntime().maxMemory() 
/ 1024 / 1024 / 256)), 10){code} should be {code}Math.max((int) 
(Runtime.getRuntime().maxMemory() / FileUtils.MB / 256), 10){code}
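The difference between the two expressions is easy to demonstrate: with {{Math.min}} the 10 acts as a cap on the cache size, while the stated intent ("1/256th of the heap or 10MB, whichever is greater") needs {{Math.max}}, where 10 is a floor. A quick sketch, assuming an 8GB heap:

```java
// Sketch of the formula discrepancy discussed above (CASSANDRA-11555).
// For an 8GB heap, 1/256th is 32MB; the min-based expression caps the
// result at 10MB, while the intended semantics should yield 32MB.
public class CacheSizeFormulaSketch
{
    static long committedFormula(long maxMemoryBytes)
    {
        // Math.min(...) makes 10 an upper bound.
        return Math.min(Math.max(1, maxMemoryBytes / 1024 / 1024 / 256), 10);
    }

    static long intendedFormula(long maxMemoryBytes)
    {
        // Math.max(...) makes 10 a lower bound, matching the stated intent.
        return Math.max(maxMemoryBytes / 1024 / 1024 / 256, 10);
    }

    public static void main(String[] args)
    {
        long eightGb = 8L * 1024 * 1024 * 1024;
        System.out.println(committedFormula(eightGb)); // prints 10 (capped)
        System.out.println(intendedFormula(eightGb));  // prints 32 (floored)
    }
}
```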


> Make prepared statement cache size configurable
> ---
>
> Key: CASSANDRA-11555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11555
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 3.x
>
>
> The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} 
> are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. 
> Sometimes applications may need more than that. Proposal is to make that 
> value configurable - probably also distinguish thrift and native CQL3 queries 
> (new applications don't need the thrift stuff).





cassandra git commit: Fix bad merge

2016-05-02 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk b6cc43e87 -> ad7e36b8a


Fix bad merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad7e36b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad7e36b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad7e36b8

Branch: refs/heads/trunk
Commit: ad7e36b8ad52c56ea33376faf57afecbb2e58630
Parents: b6cc43e
Author: Sylvain Lebresne 
Authored: Mon May 2 12:28:17 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 12:28:17 2016 +0200

--
 test/unit/org/apache/cassandra/db/CellTest.java | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad7e36b8/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CellTest.java 
b/test/unit/org/apache/cassandra/db/CellTest.java
index e7e90a1..1249989 100644
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@ -53,8 +53,6 @@ public class CellTest
  
.addRegularColumn("m", MapType.getInstance(IntegerType.instance, 
IntegerType.instance, true))
  .build();
 
-private static final CFMetaData fakeMetadata = 
CFMetaData.createFake("fakeKS", "fakeTable");
-
 @BeforeClass
 public static void defineSchema() throws ConfigurationException
 {
@@ -64,8 +62,8 @@ public class CellTest
 
    private static ColumnDefinition fakeColumn(String name, AbstractType<?> type)
 {
-return new ColumnDefinition(fakeMetadata.ksName,
-fakeMetadata.cfName,
+return new ColumnDefinition("fakeKs",
+"fakeTable",
 ColumnIdentifier.getInterned(name, false),
 type,
 ColumnDefinition.NO_POSITION,
@@ -127,8 +125,8 @@ public class CellTest
 
 // Valid cells
 c = fakeColumn("c", Int32Type.instance);
-assertValid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.EMPTY_BYTE_BUFFER));
-assertValid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.bytes(4)));
+assertValid(BufferCell.live(c, 0, ByteBufferUtil.EMPTY_BYTE_BUFFER));
+assertValid(BufferCell.live(c, 0, ByteBufferUtil.bytes(4)));
 
 assertValid(BufferCell.expiring(c, 0, 4, 4, 
ByteBufferUtil.EMPTY_BYTE_BUFFER));
 assertValid(BufferCell.expiring(c, 0, 4, 4, ByteBufferUtil.bytes(4)));
@@ -137,11 +135,11 @@ public class CellTest
 
        // Invalid value (we don't allow empty values for smallint)
 c = fakeColumn("c", ShortType.instance);
-assertInvalid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.EMPTY_BYTE_BUFFER));
+assertInvalid(BufferCell.live(c, 0, ByteBufferUtil.EMPTY_BYTE_BUFFER));
 // But this should be valid even though the underlying value is an 
empty BB (catches bug #11618)
 assertValid(BufferCell.tombstone(c, 0, 4));
 // And of course, this should be valid with a proper value
-assertValid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.bytes((short)4)));
+assertValid(BufferCell.live(c, 0, ByteBufferUtil.bytes((short)4)));
 
 // Invalid ttl
 assertInvalid(BufferCell.expiring(c, 0, -4, 4, 
ByteBufferUtil.bytes(4)));
@@ -151,9 +149,9 @@ public class CellTest
 
 c = fakeColumn("c", MapType.getInstance(Int32Type.instance, 
Int32Type.instance, true));
 // Valid cell path
-assertValid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.bytes(4), CellPath.create(ByteBufferUtil.bytes(4;
+assertValid(BufferCell.live(c, 0, ByteBufferUtil.bytes(4), 
CellPath.create(ByteBufferUtil.bytes(4;
 // Invalid cell path (int values should be 0 or 4 bytes)
-assertInvalid(BufferCell.live(fakeMetadata, c, 0, 
ByteBufferUtil.bytes(4), CellPath.create(ByteBufferUtil.bytes((long)4;
+assertInvalid(BufferCell.live(c, 0, ByteBufferUtil.bytes(4), 
CellPath.create(ByteBufferUtil.bytes((long)4;
 }
 
 @Test



[jira] [Commented] (CASSANDRA-8273) Allow filtering queries can return stale data

2016-05-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266352#comment-15266352
 ] 

Sylvain Lebresne commented on CASSANDRA-8273:
-

I don't really understand the reasoning there. If you can live with stale 
data, why would you use {{QUORUM}} in the first place? Imo, either we're able 
to fix this somehow, or we should consider always moving filtering to the 
coordinator side. I haven't yet had time to put much thought into how to do 
the former, though.

> Allow filtering queries can return stale data
> -
>
> Key: CASSANDRA-8273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8273
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>
> Data filtering is done replica side. That means that a single replica with 
> stale data may make the whole query return that stale data.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v1 text, v2 int);
> CREATE INDEX ON test(v1);
> INSERT INTO test(k, v1, v2) VALUES (0, 'foo', 1);
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v2 = 2 WHERE k = 0;
> SELECT * FROM test WHERE v1 = 'foo' AND v2 = 1;
> {noformat}
> then, if A and B acknowledge the insert but C responds to the read before 
> having applied the insert, the now-stale result will be returned. Note that 
> this is a problem related to filtering, not secondary indexes.
> This issue shares similarities with CASSANDRA-8272, but contrary to that 
> former issue, I'm not sure how to fix it. Obviously, moving the filtering to 
> the coordinator would remove the problem, but doing so would, on top of not 
> being trivial to implement, have a serious performance impact, since we 
> can't know in advance how much data will be filtered and we may have to 
> query the replicas multiple times.
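The stale-read scenario described in the ticket can be sketched as a toy model. This is an illustration only, not Cassandra's actual read path: `StaleFilterDemo`, `Row`, and `quorumRead` are all hypothetical names, and the "replicas" are plain in-memory maps.

```java
import java.util.*;

// Toy model of replica-side filtering (illustration only, not Cassandra code).
public class StaleFilterDemo {
    record Row(int k, String v1, int v2) {}

    // QUORUM read touching replicas A and C; C is stale (it missed v2 = 2).
    static List<Row> quorumRead() {
        Map<String, Row> replicas = Map.of(
            "A", new Row(0, "foo", 2),  // applied the update
            "C", new Row(0, "foo", 1)); // stale
        List<Row> results = new ArrayList<>();
        // Each replica filters locally: v1 = 'foo' AND v2 = 1.
        for (Row r : List.of(replicas.get("A"), replicas.get("C")))
            if (r.v1().equals("foo") && r.v2() == 1)
                results.add(r);
        return results;
    }

    public static void main(String[] args) {
        // A filtered its up-to-date row out before the coordinator could
        // reconcile, so the stale row from C is the entire result set.
        System.out.println(quorumRead()); // prints [Row[k=0, v1=foo, v2=1]]
    }
}
```

Because the up-to-date replica drops its row during local filtering, the coordinator never sees the two conflicting versions and cannot repair the result; that is why coordinator-side filtering would avoid the staleness at the cost of shipping unfiltered data.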



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11618) Removing an element from map corrupts commitlog

2016-05-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11618:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.0.6
   3.6
Reproduced In: 3.5, 3.0.0, 3.0 beta 2, 3.0 beta 1, 3.0 alpha 1, 3.6, 3.0.6  
(was: 3.0 alpha 1, 3.0 beta 1, 3.0 beta 2, 3.0.0, 3.5, 3.6, 3.0.6)
   Status: Resolved  (was: Ready to Commit)

Committed, thanks.

> Removing an element from map corrupts commitlog
> ---
>
> Key: CASSANDRA-11618
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11618
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Fully updated CentOS 7.2 64-bit (ami-1f5dfe6c: "CentOS 
> 7.2 x86_64 with cloud-init (HVM)") on Amazon's eu-west-1c t2.micro + OpenJDK 
> 1.8.0_77
> Arch Linux 32-bit + OpenJDK 8.u77-1
>Reporter: Artem Chudinov
>Assignee: Sylvain Lebresne
>Priority: Critical
>  Labels: commitlog, regression, serializers
> Fix For: 3.6, 3.0.6
>
> Attachments: cassandra-trunk-0541597e7-failedlaunch.log, 
> cassandra-trunk-0541597e7-mutation279088679553718337dat
>
>
> 2.2.6 does not have this bug.
> I've tried 3.0 alpha 1, 3.0 beta 1, 3.0 beta 2, 3.0.0, 3.0.6, 3.5, 
> datastax-ddc 3.5.0 (from repo), and trunk (3.6) - all of them have this bug. 
> I've found that the error has been thrown since 
> d12d2d496540c698f30e9b528b66e8f6636842d3, which is included in 3.0 beta 1 
> (but *not* in alpha 1).
> Cassandra 3.0 alpha 1 does not throw the error, but forgets the changes 
> after shutting down.
> Cassandra only starts fine after rm ./data/commitlog/*.
> By the way, the map itself works fine.
> Steps to reproduce:
> {code}
> $ ant build
> $ ./bin/cassandra
> $ ./bin/cqlsh
> {code}
> {code:sql}
> CREATE KEYSPACE bugs
> WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}
> AND durable_writes = true;
> CREATE TABLE bugs.bug1 (
> id int,
> m  map<int, int>, -- key can be any type
> PRIMARY KEY (id)
> );
> INSERT INTO bugs.bug1 (id, m) VALUES (1, {0: 4, 4: 3});
> UPDATE bugs.bug1 SET m[0]=NULL WHERE id=1;
> -- and/or UPDATE bugs.bug1 SET m[1]=NULL WHERE id=1;
> SELECT * FROM bugs.bug1;
> {code}
> {code}
>  id | m
> ----+--------
>   1 | {4: 3}
> (1 rows)
> {code}
> {code}
> $ ./bin/nodetool stopdaemon
> $ ./bin/cassandra
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Don't try to validate values for cell tombstones

2016-05-02 Thread slebresne
Don't try to validate values for cell tombstones

patch by slebresne; reviewed by blerer for CASSANDRA-11618


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de228c77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de228c77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de228c77

Branch: refs/heads/trunk
Commit: de228c77b0dc17cadb77a676883c4045cfc43332
Parents: c08eeaf
Author: Sylvain Lebresne 
Authored: Wed Apr 20 15:57:48 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 12:17:44 2016 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/AbstractCell.java  | 15 ++--
 test/unit/org/apache/cassandra/db/CellTest.java | 77 +++-
 3 files changed, 86 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de228c77/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 95f450b..1f715f4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Potential error replaying commitlog with smallint/tinyint/date/time types (CASSANDRA-11618)
  * Fix queries with filtering on counter columns (CASSANDRA-11629)
  * Improve tombstone printing in sstabledump (CASSANDRA-11655)
  * Fix paging for range queries where all clustering columns are specified (CASSANDRA-11669)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de228c77/src/java/org/apache/cassandra/db/rows/AbstractCell.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/AbstractCell.java b/src/java/org/apache/cassandra/db/rows/AbstractCell.java
index 882c0e0..00fc286 100644
--- a/src/java/org/apache/cassandra/db/rows/AbstractCell.java
+++ b/src/java/org/apache/cassandra/db/rows/AbstractCell.java
@@ -52,8 +52,6 @@ public abstract class AbstractCell extends Cell
 
 public void validate()
 {
-column().validateCellValue(value());
-
 if (ttl() < 0)
 throw new MarshalException("A TTL should not be negative");
 if (localDeletionTime() < 0)
@@ -61,9 +59,16 @@ public abstract class AbstractCell extends Cell
 if (isExpiring() && localDeletionTime() == NO_DELETION_TIME)
 throw new MarshalException("Should not have a TTL without an associated local deletion time");
 
-// If cell is a tombstone, it shouldn't have a value.
-if (isTombstone() && value().hasRemaining())
-throw new MarshalException("A tombstone should not have a value");
+if (isTombstone())
+{
+// If cell is a tombstone, it shouldn't have a value.
+if (value().hasRemaining())
+throw new MarshalException("A tombstone should not have a value");
+}
+else
+{
+column().validateCellValue(value());
+}
 
 if (path() != null)
 column().validateCellPath(path());

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de228c77/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CellTest.java b/test/unit/org/apache/cassandra/db/CellTest.java
index 5953255..9072f98 100644
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@ -30,12 +30,12 @@ import org.junit.Test;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.ColumnIdentifier;
-import org.apache.cassandra.db.marshal.IntegerType;
-import org.apache.cassandra.db.marshal.MapType;
+import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.schema.KeyspaceParams;
+import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 
@@ -53,6 +53,8 @@ public class CellTest
  
.addRegularColumn("m", MapType.getInstance(IntegerType.instance, IntegerType.instance, true))
  .build();
 
+private static final CFMetaData fakeMetadata = CFMetaData.createFake("fakeKS", "fakeTable");
+
 @BeforeClass
 public static void defineSchema() throws ConfigurationException
 {
@@ -60,6 +62,16 @@ public class CellTest
 SchemaLoader.createKeyspace(KEYSPACE1, KeyspaceParams.simple(1), cfm, cfm2);
 }
 
+

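In isolation, the validation reordering in the patch above can be sketched like this. The types here are deliberately simplified and hypothetical (`CellValidationDemo`, `validateValue`), not Cassandra's actual `AbstractCell`/`AbstractType` API: the point is only that a tombstone's empty value must bypass value validation, otherwise a fixed-length type such as smallint rejects the empty buffer during commitlog replay.

```java
import java.nio.ByteBuffer;

// Simplified sketch of the CASSANDRA-11618 fix (hypothetical classes,
// not Cassandra's real API).
public class CellValidationDemo {
    static final int FIXED_LEN = 2; // a smallint value is 2 bytes

    static void validateValue(ByteBuffer value) {
        if (value.remaining() != FIXED_LEN)
            throw new IllegalArgumentException("expected " + FIXED_LEN + " bytes");
    }

    // Mirrors the patched ordering: tombstones skip value validation.
    static void validate(ByteBuffer value, boolean isTombstone) {
        if (isTombstone) {
            if (value.hasRemaining())
                throw new IllegalArgumentException("a tombstone should not have a value");
        } else {
            validateValue(value); // only live cells validate their value
        }
    }

    public static void main(String[] args) {
        ByteBuffer empty = ByteBuffer.allocate(0);
        validate(empty, true);                   // tombstone with empty value: OK
        validate(ByteBuffer.allocate(2), false); // live smallint of correct size: OK
        try {
            validate(empty, false);              // the pre-patch failure mode
        } catch (IllegalArgumentException e) {
            System.out.println("live cell with empty smallint rejected");
        }
    }
}
```

Validating the value first, as the old code did, would throw on every smallint/tinyint/date/time tombstone, which matches the "removing a map element corrupts commitlog" symptom in the report.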
[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-05-02 Thread slebresne
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b6cc43e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b6cc43e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b6cc43e8

Branch: refs/heads/trunk
Commit: b6cc43e879d73d593957416e04dfc4af0f14cf19
Parents: a62f70d de228c7
Author: Sylvain Lebresne 
Authored: Mon May 2 12:18:14 2016 +0200
Committer: Sylvain Lebresne 
Committed: Mon May 2 12:18:14 2016 +0200

--

--



