[jira] [Commented] (CASSANDRA-10791) RangeTombstones can be written after END_OF_ROW markers when streaming

2015-12-01 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034551#comment-15034551
 ] 

Branimir Lambov commented on CASSANDRA-10791:
-

Uploaded new version of the two branches which also patches scrubber to 
identify files written with this bug and rectify the problem.

Unit tests (linked above) passed, dtests are still running.

> RangeTombstones can be written after END_OF_ROW markers when streaming
> --
>
> Key: CASSANDRA-10791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10791
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Critical
> Fix For: 2.1.x, 2.2.x
>
>
> {{SSTableWriter.appendFromStream}} calls {{ColumnIndex.build}} only after 
> finishing the row and writing an {{END_OF_ROW}} marker. After CASSANDRA-7953, 
> the latter may still need to write tombstones.
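The invariant at stake can be sketched as follows (a hypothetical Python model, not the actual {{SSTableWriter}}/{{ColumnIndex}} code): any range tombstones still buffered when the row is finished must be flushed before the {{END_OF_ROW}} marker is written, otherwise they end up after the marker on disk.

```python
# Hypothetical sketch of the ordering invariant, not Cassandra's writer code.
END_OF_ROW = "END_OF_ROW"

class RowWriter:
    def __init__(self):
        self.out = []               # simulated on-disk layout for one row
        self.pending_tombstones = []

    def append_cell(self, cell):
        self.out.append(cell)

    def defer_tombstone(self, rt):
        # Range tombstones may be buffered (as after CASSANDRA-7953).
        self.pending_tombstones.append(rt)

    def close_row(self):
        # Correct order: flush buffered tombstones *before* the marker.
        self.out.extend(self.pending_tombstones)
        self.pending_tombstones.clear()
        self.out.append(END_OF_ROW)

w = RowWriter()
w.append_cell("cell:a")
w.defer_tombstone("rt:[b..c)")
w.close_row()
```

The bug corresponds to appending {{END_OF_ROW}} first and flushing the buffered tombstones afterwards, which is what scrubbing must detect and repair.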



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10730) periodic timeout errors in dtest

2015-12-01 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034505#comment-15034505
 ] 

Jim Witschey commented on CASSANDRA-10730:
--

It doesn't look like the default heap size is ever set below 1024:

https://github.com/apache/cassandra/blob/trunk/conf/cassandra-env.sh#L57

I don't see anything in the cassandra-3.0 Jenkins job configuration, or in 
Jenkins' automaton.conf, that would set it lower.
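For reference, the heap-size default in {{conf/cassandra-env.sh}} is computed roughly as below (a Python paraphrase of the shell logic; the exact constants should be checked against the linked script). For any machine with at least 2 GB of RAM this never goes below 1024 MB, consistent with the observation above.

```python
def default_max_heap_mb(system_memory_mb):
    # Paraphrase (assumption) of calculate_heap_sizes in conf/cassandra-env.sh:
    # max(min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB))
    return max(min(system_memory_mb // 2, 1024),
               min(system_memory_mb // 4, 8192))
```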

> periodic timeout errors in dtest
> 
>
> Key: CASSANDRA-10730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>
> Dtests often fail with connection timeout errors. For example:
> http://cassci.datastax.com/job/cassandra-3.1_dtest/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/deletion_test/
> {code}
> ('Unable to connect to any servers', {'127.0.0.1': 
> OperationTimedOut('errors=Timed out creating connection (10 seconds), 
> last_host=None',)})
> {code}
> We've merged a PR to increase timeouts:
> https://github.com/riptano/cassandra-dtest/pull/663
> It doesn't look like this has improved things:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/363/testReport/
> Next steps here are
> * to scrape Jenkins history to see if and how the number of tests failing 
> this way has increased (it feels like it has). From there we can bisect over 
> the dtests, ccm, or C*, depending on what looks like the source of the 
> problem.
> * to better instrument the dtest/ccm/C* startup process to see why the nodes 
> start but don't successfully make the CQL port available.
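The second next step above (instrumenting why nodes start but never make the CQL port available) could begin with a simple polling probe like this (a hypothetical helper, not part of dtest or ccm):

```python
import socket
import time

def wait_for_port(host, port, timeout_s=10.0, interval_s=0.5):
    """Poll until a TCP connect to (host, port) succeeds or the deadline
    passes.  Returns the seconds waited, or raises TimeoutError.  Logging
    the waited time per node would show whether the CQL port comes up
    slowly or never."""
    deadline = time.monotonic() + timeout_s
    start = time.monotonic()
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                return time.monotonic() - start
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not accepting connections "
                                   f"after {timeout_s}s")
            time.sleep(interval_s)
```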





[jira] [Comment Edited] (CASSANDRA-10791) RangeTombstones can be written after END_OF_ROW markers when streaming

2015-12-01 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034551#comment-15034551
 ] 

Branimir Lambov edited comment on CASSANDRA-10791 at 12/1/15 8:56 PM:
--

Uploaded new version of the two branches which also patches scrubber to 
identify rows written with this bug and rectify the problem.

Unit tests (linked above) passed, dtests are still running.


was (Author: blambov):
Uploaded new version od the two branches which also patches scrubber to 
identify files written with this bug and rectify the problem.

Unit tests (linked above) passed, dtests are still running.

> RangeTombstones can be written after END_OF_ROW markers when streaming
> --
>
> Key: CASSANDRA-10791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10791
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>Priority: Critical
> Fix For: 2.1.x, 2.2.x
>
>
> {{SSTableWriter.appendFromStream}} calls {{ColumnIndex.build}} only after 
> finishing the row and writing an {{END_OF_ROW}} marker. After CASSANDRA-7953, 
> the latter may still need to write tombstones.





[jira] [Commented] (CASSANDRA-10781) Connection Timed Out When PasswordAuthenticator Enabled on Cubieboard

2015-12-01 Thread Xiangyu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034665#comment-15034665
 ] 

Xiangyu Zhang commented on CASSANDRA-10781:
---

We install OpenJDK using the command 'sudo apt-get install openjdk-7-jre'.

> Connection Timed Out When PasswordAuthenticator Enabled on Cubieboard
> -
>
> Key: CASSANDRA-10781
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10781
> Project: Cassandra
>  Issue Type: Bug
> Environment: Working Environment: Cubie-board A80, running Ubuntu 
> Linaro 14.04 (which is download from 
> http://dl.cubieboard.org/model/cc-a80/Image/ubuntu-linaro)
> Cassandra version: Version 2.1.7 (from 
> http://downloads.datastax.com/community/dsc-cassandra-2.1.7-bin.tar.gz)
> Also tested with 2.2.3 version
>Reporter: Xiangyu Zhang
>Priority: Minor
>
> Connecting with the default username and password (./cqlsh 192.168.10.26 -u 
> cassandra -p cassandra) times out: Connection error: ('Unable to 
> connect to any servers', {'192.168.10.26': OperationTimedOut('errors=Timed 
> out creating connection, last_host=None',)}). This happens when 
> PasswordAuthenticator is enabled; when it is disabled, the database can be 
> connected to. There might be a dependency issue causing the authentication to 
> time out; we checked that openssl is properly installed. Is there a way to 
> track down why the authentication times out?





[jira] [Commented] (CASSANDRA-10730) periodic timeout errors in dtest

2015-12-01 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034661#comment-15034661
 ] 

Jim Witschey commented on CASSANDRA-10730:
--

Builds >= #16 on my debugging job will include jmap in the output:

http://cassci.datastax.com/job/mambocab-cassandra-3.0-dtest/16/

> periodic timeout errors in dtest
> 
>
> Key: CASSANDRA-10730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>
> Dtests often fail with connection timeout errors. For example:
> http://cassci.datastax.com/job/cassandra-3.1_dtest/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/deletion_test/
> {code}
> ('Unable to connect to any servers', {'127.0.0.1': 
> OperationTimedOut('errors=Timed out creating connection (10 seconds), 
> last_host=None',)})
> {code}
> We've merged a PR to increase timeouts:
> https://github.com/riptano/cassandra-dtest/pull/663
> It doesn't look like this has improved things:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/363/testReport/
> Next steps here are
> * to scrape Jenkins history to see if and how the number of tests failing 
> this way has increased (it feels like it has). From there we can bisect over 
> the dtests, ccm, or C*, depending on what looks like the source of the 
> problem.
> * to better instrument the dtest/ccm/C* startup process to see why the nodes 
> start but don't successfully make the CQL port available.





[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2015-12-01 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034759#comment-15034759
 ] 

DOAN DuyHai commented on CASSANDRA-10783:
-

bq. I'd really like to have something that does not require any cast for the 
majority of use cases and also supports bind variables

Well, forbidding overloaded methods makes it much simpler, even with prepared 
statements, since there is no ambiguity with bound variables. Worth trying.
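A toy model of the ambiguity (hypothetical Python, not Cassandra's actual type-inference code): a bare literal can be assigned several CQL types, so with overloaded signatures more than one candidate can match, while a single signature resolves uniquely.

```python
# Hypothetical model of literal-argument resolution; names are illustrative.
functions = {
    # name -> list of signatures (tuples of CQL types); one entry = no overload
    "maxof": [("int", "int")],
}

def candidate_types(literal):
    # A bare numeric literal could be assigned several CQL types.
    if isinstance(literal, int):
        return {"tinyint", "smallint", "int", "bigint", "varint"}
    return {"text"}

def resolve(name, arg_position, literal):
    matches = [sig for sig in functions[name]
               if sig[arg_position] in candidate_types(literal)]
    if len(matches) != 1:
        raise TypeError(f"ambiguous or no match for {name}: {matches}")
    return matches[0]
```

With no overloads, `resolve("maxof", 1, 101)` picks the single `(int, int)` signature; adding a `(bigint, bigint)` overload would make the same call ambiguous.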

> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION  maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS  'return Math.max(current,testValue);'
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException: <ErrorMessage ... message="line 1:19 no viable alternative 
> at input '101' (SELECT maxOf(val1, [101]...)">
> {code}
>  It would be nice to allow literal values as parameters of UDFs and UDAs too.
>  I was thinking about a use case for a UDA groupBy() function where the end 
> user can *inject* at runtime a literal value to select which aggregation they 
> want to display, something similar to GROUP BY ... HAVING 





[jira] [Commented] (CASSANDRA-10395) Monitor UDFs using a single thread

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034779#comment-15034779
 ] 

Ariel Weisberg commented on CASSANDRA-10395:


The current approach is to put every UDF call into a concurrent queue. We 
implemented the "optimal" approach to monitoring timeouts as part of 
CASSANDRA-7392. The key difference is you don't have to hit a global concurrent 
linked queue for every single UDF call. You can register the threads once on 
creation and they can publish timeout information to be checked by the timeout 
thread to the same object repeatedly.

[This {{ConcurrentLinkedQueue}} ends up being used as a 
set.|https://github.com/apache/cassandra/compare/trunk...snazy:10395-udf-monitor-3.0?expand=1#diff-074b58383a8982056f42ad1887c3a9e3R172].
 If you need a set, it would be better to use an actual set rather than always 
walking the queue to add/remove entries.

[{{CopyOnWriteMap}} from Apache mina is an odd choice. Maybe use 
{{org.cliffc.high_scale_lib.NonBlockingHashMap}}?|https://github.com/apache/cassandra/compare/trunk...snazy:10395-udf-monitor-3.0?expand=1#diff-30a3dbf7d783cf329b5fb28a8b14332eR140]

Regarding the thread-local business in general: is that because the list 
of allowable packages changes depending on whether the thread is currently in 
the UDF or not? Does this allow access to leak if the thread accesses a 
forbidden class/package while not inside a UDF? Once it's loaded it will stay 
loaded, right?

It seems like the logging of UDF warnings should be rate-limited, with a 
counter of how many messages were not logged. That's one of the things we 
agonized over getting right in CASSANDRA-7392.

For the timeout it checks thread cpu time, but it doesn't also check wall clock 
time. The two aren't quite interchangeable if someone figures out a way to 
block the UDF thread such as with the use of concurrent data structures or 
locking.
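The register-once alternative described in the first paragraph can be sketched like this (hypothetical Python; the real implementation would be per-thread Java state): each worker publishes a deadline into its own pre-registered slot, and a single watchdog scans all slots, so no global concurrent queue is touched per UDF call.

```python
import threading
import time

# Hypothetical sketch of the register-once monitoring pattern, not the patch.

class MonitoredSlot:
    def __init__(self):
        self.deadline = None     # monotonic-clock deadline, or None when idle

registry = []                    # populated once per thread, at creation
registry_lock = threading.Lock()

def register_thread():
    slot = MonitoredSlot()
    with registry_lock:
        registry.append(slot)
    return slot

def run_udf(slot, fn, timeout_s):
    slot.deadline = time.monotonic() + timeout_s   # publish before the call
    try:
        return fn()
    finally:
        slot.deadline = None                       # clear after the call

def find_overdue(now=None):
    # The single watchdog thread calls this periodically.
    now = time.monotonic() if now is None else now
    with registry_lock:
        return [s for s in registry
                if s.deadline is not None and now > s.deadline]
```

A real watchdog would also want the per-thread CPU time alongside the wall-clock deadline, for the reason given above.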


> Monitor UDFs using a single thread
> --
>
> Key: CASSANDRA-10395
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10395
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> Currently, each UDF execution is handed over to a separate thread pool to be 
> able to detect UDF timeouts. We could instead leave UDF execution in the 
> "original" thread and have another thread/scheduled job regularly look for 
> UDF timeouts, which would save some time executing the UDF.





[jira] [Commented] (CASSANDRA-9474) Validate dc information on startup

2015-12-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034815#comment-15034815
 ] 

Paulo Motta commented on CASSANDRA-9474:


Tests look good (the remaining failures match those on the main branch). Marking as ready to commit.

> Validate dc information on startup
> --
>
> Key: CASSANDRA-9474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9474
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Cassandra 2.1.5
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
> Fix For: 3.1, 3.2, 2.1.x, 2.2.x, 3.0.x
>
> Attachments: CASSANDRA-9474-2.2-1.patch, CASSANDRA-9474-2.2.patch, 
> CASSANDRA-9474-3.0-1.patch, CASSANDRA-9474-dtest.patch, 
> CASSANDRA-9474-trunk.patch, cassandra-2.1-9474.patch, 
> cassandra-2.1-dc_rack_healthcheck.patch
>
>
> When using GossipingPropertyFileSnitch it is possible to change the data 
> center and rack of a live node by changing the cassandra-rackdc.properties 
> file. Should this really be possible? In the documentation at 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/initialize/initializeMultipleDS.html
>  it's stated that you should ??Choose the name carefully; renaming a data 
> center is not possible??, but with this functionality it doesn't seem 
> impossible (maybe a bit hard with changing replication etc.).
> This functionality was introduced by CASSANDRA-5897 so I'm guessing there is 
> some use case for this?
> Personally I would want the DC/rack settings to be as restricted as the 
> cluster name; otherwise, if a node could just join another data center without 
> removing its local information, couldn't it mess up the token ranges? And 
> suddenly the old data center/rack would lose one replica of all the data that 
> the node contains.
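The validation being asked for can be sketched as follows (hypothetical Python; the actual change is in the attached patches): on startup, compare the locally recorded DC/rack with what the snitch now reports, and refuse to start on a silent change unless explicitly overridden.

```python
# Hypothetical sketch of the startup DC/rack check; names are illustrative.

def validate_dc_rack(stored, snitch, ignore_dc=False, ignore_rack=False):
    """stored/snitch are dicts like {"dc": "DC1", "rack": "RAC1"};
    stored may be empty on a brand-new node, which always passes."""
    errors = []
    if stored.get("dc") and stored["dc"] != snitch["dc"] and not ignore_dc:
        errors.append(f"DC changed from {stored['dc']} to {snitch['dc']}")
    if stored.get("rack") and stored["rack"] != snitch["rack"] and not ignore_rack:
        errors.append(f"rack changed from {stored['rack']} to {snitch['rack']}")
    if errors:
        raise RuntimeError("; ".join(errors) + " - refusing to start")
```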





[jira] [Created] (CASSANDRA-10796) Views do not handle single-column deletions of view PK columns correctly

2015-12-01 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-10796:
---

 Summary: Views do not handle single-column deletions of view PK 
columns correctly
 Key: CASSANDRA-10796
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10796
 Project: Cassandra
  Issue Type: Bug
  Components: Coordination
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 3.0.1, 3.1


When a materialized view has a regular base column in its primary key, and that 
regular base column is deleted through a single-column deletion, the view does 
not handle it correctly, and may produce an error.

For example, with a table like:
{noformat}
CREATE TABLE foo (
> a int, b int, c int, d int,
PRIMARY KEY (a, b)
)
{noformat}

and a view like:
{noformat}
CREATE MATERIALIZED VIEW BAR
AS SELECT * FROM foo
WHERE ...
PRIMARY KEY (a, d, b)
{noformat}

a deletion like this will not be handled correctly:
{noformat}
DELETE d FROM foo WHERE a = 0 AND b = 0
{noformat}

The source of the problem is that we aren't checking whether individual cells 
in the TemporalRow are live or not when building the clustering and partition 
key for the row.  Instead, we're just using the cell value, which is an empty 
ByteBuffer.

I should have a patch with a fix and tests posted tomorrow.
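The fix described can be sketched as follows (hypothetical Python, not the actual {{TemporalRow}} code): when a view primary-key component comes from a regular base column, check the cell's liveness before using its value, and treat a dead or missing cell as "no view row" rather than building a key from an empty value.

```python
# Hypothetical sketch of the liveness check; names are illustrative.

class Cell:
    def __init__(self, value, live=True):
        self.value = value
        self.live = live   # False once the cell is deleted (tombstoned)

def view_primary_key(row, pk_columns):
    """row maps column name -> Cell (or the name is absent).  Returns the
    view PK values, or None if any component is missing or deleted, meaning
    the corresponding view row must be removed rather than built."""
    pk = []
    for col in pk_columns:
        cell = row.get(col)
        if cell is None or not cell.live:
            return None
        pk.append(cell.value)
    return pk
```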





[jira] [Created] (CASSANDRA-10797) Bootstrap new node fails with OOM when streaming nodes contains thousands of sstables

2015-12-01 Thread Jose Martinez Poblete (JIRA)
Jose Martinez Poblete created CASSANDRA-10797:
-

 Summary: Bootstrap new node fails with OOM when streaming nodes 
contains thousands of sstables
 Key: CASSANDRA-10797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10797
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: Cassandra 2.1.8.621 w/G1GC
Reporter: Jose Martinez Poblete
 Attachments: 112415_system.log, Heapdump_OOM.zip

When adding a new node to an existing DC, it runs out of memory after 25-45 
minutes. Heap dump analysis shows that the sending nodes are streaming 
thousands of sstables, which in turn blows up the bootstrapping node's heap.

{noformat}
ERROR [RMI Scheduler(0)] 2015-11-24 10:10:44,585 JVMStabilityInspector.java:94 
- JVM state determined to be unstable.  Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR [STREAM-IN-/173.36.28.148] 2015-11-24 10:10:44,585 StreamSession.java:502 
- [Stream #0bb13f50-92cb-11e5-bc8d-f53b7528ffb4] Streaming error occurred
java.lang.IllegalStateException: Shutdown in progress
at 
java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82) 
~[na:1.8.0_65]
at java.lang.Runtime.removeShutdownHook(Runtime.java:239) ~[na:1.8.0_65]
at 
org.apache.cassandra.service.StorageService.removeShutdownHook(StorageService.java:747)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.utils.JVMStabilityInspector$Killer.killCurrentJVM(JVMStabilityInspector.java:95)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:64)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:66)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
ERROR [RMI TCP Connection(idle)] 2015-11-24 10:10:44,585 
JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR [OptionalTasks:1] 2015-11-24 10:10:44,585 CassandraDaemon.java:223 - 
Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.IllegalStateException: Shutdown in progress
{noformat}

Attached is the Eclipse MAT report as a zipped web page
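One generic mitigation for this failure mode is to bound how many incoming stream messages the receiver buffers, so the reading side blocks instead of accumulating everything on the heap. This is a generic backpressure sketch in Python, not the Cassandra streaming code, and the bound is an arbitrary illustrative value:

```python
import queue

# Generic backpressure sketch: a bounded queue makes the receiving loop
# block once too many messages are unprocessed, instead of buffering every
# incoming sstable message in memory at once.

incoming = queue.Queue(maxsize=8)   # bound chosen arbitrarily for the sketch

def receive(messages):
    for msg in messages:
        incoming.put(msg)           # blocks when 8 messages are unprocessed

def drain():
    drained = []
    while not incoming.empty():
        drained.append(incoming.get())
    return drained
```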







[jira] [Commented] (CASSANDRA-9748) Can't see other nodes when using multiple network interfaces

2015-12-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034813#comment-15034813
 ] 

Paulo Motta commented on CASSANDRA-9748:


Updated dtests to check whether the ports are bound, in addition to checking 
the logs. If the log output changes we can just update the tests. Tested on 
Windows and it works fine.

Let's wait for [~RomanB]'s results with the new patch in his environment before 
marking this as ready to commit.


> Can't see other nodes when using multiple network interfaces
> 
>
> Key: CASSANDRA-9748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9748
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.0.16; multi-DC configuration
>Reporter: Roman Bielik
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
> Attachments: system_node1.log, system_node2.log
>
>
> The idea is to setup a multi-DC environment across 2 different networks based 
> on the following configuration recommendations:
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html
> Each node has 2 network interfaces. One used as a private network (DC1: 
> 10.0.1.x and DC2: 10.0.2.x). The second one a "public" network where all 
> nodes can see each other (this one has a higher latency). 
> Using the following settings in cassandra.yaml:
> *seeds:* public IP (same as used in broadcast_address)
> *listen_address:* private IP
> *broadcast_address:* public IP
> *rpc_address:* 0.0.0.0
> *endpoint_snitch:* GossipingPropertyFileSnitch
> _(tried different combinations with no luck)_
> No firewall and no SSL/encryption used.
> The problem is that nodes do not see each other (a gossip problem I guess). 
> The nodetool ring/status shows only the local node but not the other ones 
> (even from the same DC).
> When I set listen_address to public IP, then everything works fine, but that 
> is not the required configuration.
> _Note: Not using EC2 cloud!_
> netstat -anp | grep -E "(7199|9160|9042|7000)"
> tcp0  0 0.0.0.0:71990.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9160   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9042   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:7000   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 127.0.0.1:7199  127.0.0.1:52874 
> ESTABLISHED 3587/java   
> tcp0  0 10.0.1.1:7199   10.0.1.1:39650  
> ESTABLISHED 3587/java 





[jira] [Commented] (CASSANDRA-10730) periodic timeout errors in dtest

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034825#comment-15034825
 ] 

Ariel Weisberg commented on CASSANDRA-10730:


Great. I am still thinking on this. A 1 gig heap is pretty big and the RSS in 
top wasn't at 1 gig. It makes me think that it might not be GC after all. I am 
wondering if it even gets to the point that it would accept client connections. 
We can't use the stacks to tell because accept for clients is done non-blocking 
with Netty.

Maybe it is accepting connections on the socket, but then throws an exception 
in {{Server.Initializer}} that Netty is swallowing.

Are the log files from the servers that were part of the test collected? [From 
this build I don't see a place to get the generated 
artifacts.|http://cassci.datastax.com/job/mambocab-cassandra-3.0-dtest/10/#showFailuresLink]

> periodic timeout errors in dtest
> 
>
> Key: CASSANDRA-10730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10730
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>
> Dtests often fail with connection timeout errors. For example:
> http://cassci.datastax.com/job/cassandra-3.1_dtest/lastCompletedBuild/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/deletion_test/
> {code}
> ('Unable to connect to any servers', {'127.0.0.1': 
> OperationTimedOut('errors=Timed out creating connection (10 seconds), 
> last_host=None',)})
> {code}
> We've merged a PR to increase timeouts:
> https://github.com/riptano/cassandra-dtest/pull/663
> It doesn't look like this has improved things:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/363/testReport/
> Next steps here are
> * to scrape Jenkins history to see if and how the number of tests failing 
> this way has increased (it feels like it has). From there we can bisect over 
> the dtests, ccm, or C*, depending on what looks like the source of the 
> problem.
> * to better instrument the dtest/ccm/C* startup process to see why the nodes 
> start but don't successfully make the CQL port available.





[jira] [Updated] (CASSANDRA-10797) Bootstrap new node fails with OOM when streaming nodes contains thousands of sstables

2015-12-01 Thread Jose Martinez Poblete (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Martinez Poblete updated CASSANDRA-10797:
--
Attachment: Screen Shot 2015-12-01 at 7.34.40 PM.png

> Bootstrap new node fails with OOM when streaming nodes contains thousands of 
> sstables
> -
>
> Key: CASSANDRA-10797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10797
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.8.621 w/G1GC
>Reporter: Jose Martinez Poblete
> Attachments: 112415_system.log, Heapdump_OOM.zip, Screen Shot 
> 2015-12-01 at 7.34.40 PM.png
>
>
> When adding a new node to an existing DC, it runs out of memory after 25-45 
> minutes. Heap dump analysis shows that the sending nodes are streaming 
> thousands of sstables, which in turn blows up the bootstrapping node's heap.
> {noformat}
> ERROR [RMI Scheduler(0)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [STREAM-IN-/173.36.28.148] 2015-11-24 10:10:44,585 
> StreamSession.java:502 - [Stream #0bb13f50-92cb-11e5-bc8d-f53b7528ffb4] 
> Streaming error occurred
> java.lang.IllegalStateException: Shutdown in progress
> at 
> java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82) 
> ~[na:1.8.0_65]
> at java.lang.Runtime.removeShutdownHook(Runtime.java:239) 
> ~[na:1.8.0_65]
> at 
> org.apache.cassandra.service.StorageService.removeShutdownHook(StorageService.java:747)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector$Killer.killCurrentJVM(JVMStabilityInspector.java:95)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:64)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:66)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> ERROR [RMI TCP Connection(idle)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [OptionalTasks:1] 2015-11-24 10:10:44,585 CassandraDaemon.java:223 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.IllegalStateException: Shutdown in progress
> {noformat}
> Attached is the Eclipse MAT report as a zipped web page





[jira] [Commented] (CASSANDRA-10788) Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035108#comment-15035108
 ] 

Stefania commented on CASSANDRA-10788:
--

This would happen if for some reason there is a file instead of a folder inside 
the keyspace folder. I reproduced it with a new unit test; we had a bug in our 
directory filter. I've also added a couple more trace messages.
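The filter fix amounts to skipping plain files when scanning a keyspace directory, roughly as below (hypothetical Python, not the actual {{SystemKeyspace.migrateDataDirs}} code):

```python
from pathlib import Path

# Hypothetical sketch: only subdirectories of a keyspace directory are table
# directories; stray files must be skipped, not passed to the migration code.

def table_dirs(keyspace_dir):
    return [p for p in Path(keyspace_dir).iterdir() if p.is_dir()]
```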

Patches and CI are here:

||3.0||3.1||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/10788-3.0]|[patch|https://github.com/stef1927/cassandra/commits/10788-3.1]|[patch|https://github.com/stef1927/cassandra/commits/10788]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-3.1-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-3.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10788-dtest/]|


> Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException
> ---
>
> Key: CASSANDRA-10788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10788
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Tomas Ramanauskas
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> I tried to upgrade Cassandra from 2.2.1 to 3.0.0, however, I get this error 
> on startup after Cassandra 3.0 software was installed:
> {code}
> ERROR [main] 2015-11-30 15:44:50,164 CassandraDaemon.java:702 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
>   at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561)
>  [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> {code}





[jira] [Updated] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread MASSIMO CELLI (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MASSIMO CELLI updated CASSANDRA-10798:
--
Description: 
On C* 2.1.12, if you create a new table and a directory with the same name 
already exists under the keyspace directory, then C* will simply use that 
directory rather than creating a new one with the UUID in its name.
Even if you drop and recreate the same table, it will still use the previous 
directory and never switch to a new one with the UUID. This can happen on one 
node in the cluster while the other nodes use the UUID format for the same 
table.
For example, I dropped and recreated the same table three times in this test on 
a two-node cluster:

node1
drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable

node2
drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
mytable-678a7e31988511e58ce7cfa0aa9730a2
drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
mytable-cade4ee1988411e58ce7cfa0aa9730a2
drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
mytable-db1b9b41988511e58ce7cfa0aa9730a2

This seems to break the changes introduced by CASSANDRA-5202


  was:
on C* 2.1.12 if you create a new table and a directory with the same name 
already exist under the keyspace then C* will simply use that directory rather 
than creating a new one that has uuid in the name.
Even if you drop and recreate the same table it will still use the previous dir 
and never switch to a new one with uuid. This can happen on one of the nodes in 
the cluster while the other nodes will use the uuid format for the same table.
This seems to break the changes introduced by CASSANDRA-5202



> C* 2.1 doesn't create dir name with uuid if dir is already present
> --
>
> Key: CASSANDRA-10798
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: C* 2.1.11
>Reporter: MASSIMO CELLI
> Fix For: 2.1.x
>
>
> On C* 2.1.12, if you create a new table and a directory with the same name 
> already exists under the keyspace directory, then C* will simply use that 
> directory rather than creating a new one with the UUID in its name.
> Even if you drop and recreate the same table, it will still use the previous 
> directory and never switch to a new one with the UUID. This can happen on one 
> node in the cluster while the other nodes use the UUID format for the same 
> table.
> For example, I dropped and recreated the same table three times in this test 
> on a two-node cluster:
> node1
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable
> node2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
> mytable-678a7e31988511e58ce7cfa0aa9730a2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
> mytable-cade4ee1988411e58ce7cfa0aa9730a2
> drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
> mytable-db1b9b41988511e58ce7cfa0aa9730a2
> This seems to break the changes introduced by CASSANDRA-5202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035073#comment-15035073
 ] 

Michael Shuler commented on CASSANDRA-10798:


The drop/create issue is exactly why 5202 was created. The behavior of using 
the data in a name-only data directory was for functional upgrades to 2.1 from 
2.0.

What behavior are you expecting that addresses 2.0->2.1 upgrades as well as 
solving 5202?

> C* 2.1 doesn't create dir name with uuid if dir is already present
> --
>
> Key: CASSANDRA-10798
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: C* 2.1.11
>Reporter: MASSIMO CELLI
> Fix For: 2.1.x
>
>
> On C* 2.1.12, if you create a new table and a directory with the same name 
> already exists under the keyspace, C* will simply use that directory 
> rather than creating a new one that has a uuid in the name.
> Even if you drop and recreate the same table, it will still use the previous 
> dir and never switch to a new one with a uuid. This can happen on one of the 
> nodes in the cluster while the other nodes use the uuid format for the same 
> table.
> For example, I dropped and recreated the same table three times in this test 
> on a two-node cluster:
> node1
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable
> node2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
> mytable-678a7e31988511e58ce7cfa0aa9730a2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
> mytable-cade4ee1988411e58ce7cfa0aa9730a2
> drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
> mytable-db1b9b41988511e58ce7cfa0aa9730a2
> This seems to break the changes introduced by CASSANDRA-5202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10788) Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035115#comment-15035115
 ] 

Stefania commented on CASSANDRA-10788:
--

[~tomas0413]: could you please test the 3.0 patch to confirm it fixes your 
issue? Alternatively, could you confirm whether you have files in your keyspace 
folders?

> Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException
> ---
>
> Key: CASSANDRA-10788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10788
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Tomas Ramanauskas
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> I tried to upgrade Cassandra from 2.2.1 to 3.0.0, however, I get this error 
> on startup after Cassandra 3.0 software was installed:
> {code}
> ERROR [main] 2015-11-30 15:44:50,164 CassandraDaemon.java:702 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
>   at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561)
>  [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10765) add RangeIterator interface and QueryPlan for SI

2015-12-01 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035184#comment-15035184
 ] 

Pavel Yaskevich commented on CASSANDRA-10765:
-

After working on this for a couple of days, it looks like this is going to 
require a bunch of changes to the existing SI API, so instead we are planning to 
take an alternative route: merge SASI into trunk as is and add composite 
support; after doing that, we can try to incrementally integrate it with the 
existing indexes.
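
The RangeIterator/QueryPlan idea from the issue description can be sketched roughly as follows. This is a hypothetical Python sketch only, based on the prose of this ticket; the names {{RangeIterator}}, {{estimated_count}}, {{union}} and {{intersection}} are assumptions, not the actual SASI/Cassandra API.

```python
import heapq

class RangeIterator:
    """Hypothetical iterator over sorted tokens, exposing the metadata the
    ticket mentions (min/max token, estimated token count) for query planning."""
    def __init__(self, tokens):
        self.tokens = sorted(tokens)

    @property
    def min_token(self):
        return self.tokens[0]

    @property
    def max_token(self):
        return self.tokens[-1]

    def estimated_count(self):
        return len(self.tokens)

    def __iter__(self):
        return iter(self.tokens)

def union(iterators):
    # k-way merge of sorted token streams with duplicate elimination
    out, last = [], object()
    for t in heapq.merge(*iterators):
        if t != last:
            out.append(t)
            last = t
    return RangeIterator(out)

def intersection(iterators):
    # A planner can start from the most selective iterator (fewest tokens)
    # and filter the rest against it, as the description suggests.
    iterators = sorted(iterators, key=lambda i: i.estimated_count())
    base = set(iterators[0])
    for it in iterators[1:]:
        base &= set(it)
    return RangeIterator(base)
```

A query plan would then pick between strategies like these per predicate, using the per-index metadata rather than always filtering on the highest-selectivity index.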

> add RangeIterator interface and QueryPlan for SI
> 
>
> Key: CASSANDRA-10765
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10765
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
> Fix For: 3.2
>
>
> Currently built-in indexes have only one way of handling 
> intersections/unions: pick the highest-selectivity predicate and filter on 
> the other index expressions. This is not always the most efficient approach; 
> dynamic query planning based on the different index characteristics would be 
> better. The QueryPlan should be able to choose how to perform intersections 
> and unions based on the metadata provided by the indexes (returned by 
> RangeIterator), and RangeIterator would become the basis for cross-index 
> interactions; it should carry information such as min/max token, an estimate 
> of the number of wrapped tokens, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-10243) Warn or fail when changing cluster topology live

2015-12-01 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reopened CASSANDRA-10243:
--

> Warn or fail when changing cluster topology live
> 
>
> Key: CASSANDRA-10243
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10243
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1, 3.2
>
>
> Moving a node from one rack to another in the snitch, while it is alive, is 
> almost always the wrong thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10788) Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException

2015-12-01 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-10788:


Assignee: Stefania

> Upgrade from 2.2.1 to 3.0.0 fails with NullPointerException
> ---
>
> Key: CASSANDRA-10788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10788
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Tomas Ramanauskas
>Assignee: Stefania
> Fix For: 3.0.x, 3.x
>
>
> I tried to upgrade Cassandra from 2.2.1 to 3.0.0, however, I get this error 
> on startup after Cassandra 3.0 software was installed:
> {code}
> ERROR [main] 2015-11-30 15:44:50,164 CassandraDaemon.java:702 - Exception 
> encountered during startup
> java.lang.NullPointerException: null
>   at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561)
>  [apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10243) Warn or fail when changing cluster topology live

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035009#comment-15035009
 ] 

Stefania commented on CASSANDRA-10243:
--

+1 with a couple of nits:

* introduced documentation so people are less tempted to remove this method in 
the future
* renamed {{liveEndpoints}} to {{liveMembers}}
* re-introduced {{getLiveTokenOwners()}} as well; this too was public and in 
the same class

Patches and CI here:

||2.1||2.2||3.0||3.1||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/10243-getLiveMembers-2.1]|[patch|https://github.com/stef1927/cassandra/commits/10243-getLiveMembers-2.2]|[patch|https://github.com/stef1927/cassandra/commits/10243-getLiveMembers-3.0]|[patch|https://github.com/stef1927/cassandra/commits/10243-getLiveMembers-3.1]|[patch|https://github.com/stef1927/cassandra/commits/10243-getLiveMembers]|
|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-2.1-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-2.2-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-3.1-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-3.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10243-getLiveMembers-dtest/]|


> Warn or fail when changing cluster topology live
> 
>
> Key: CASSANDRA-10243
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10243
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1, 3.2
>
>
> Moving a node from one rack to another in the snitch, while it is alive, is 
> almost always the wrong thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread MASSIMO CELLI (JIRA)
MASSIMO CELLI created CASSANDRA-10798:
-

 Summary: C* 2.1 doesn't create dir name with uuid if dir is 
already present
 Key: CASSANDRA-10798
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
 Environment: C* 2.1.11
Reporter: MASSIMO CELLI
 Fix For: 2.1.x


On C* 2.1.12, if you create a new table and a directory with the same name 
already exists under the keyspace, C* will simply use that directory rather 
than creating a new one that has a uuid in the name.
Even if you drop and recreate the same table, it will still use the previous dir 
and never switch to a new one with a uuid. This can happen on one of the nodes in 
the cluster while the other nodes use the uuid format for the same table.
This seems to break the changes introduced by CASSANDRA-5202.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-10798.

Resolution: Not A Problem

Yeah, I'm calling this not a problem. This is the expected behavior and is 
documented in NEWS.txt:

https://github.com/apache/cassandra/blob/cassandra-2.1.0/NEWS.txt#L31-L36

If you have upgraded nodes from 2.0.x and need new table directories created in 
the typical 2.1 style, then after you drop {{mytable}}, get your sysadmin to 
{{rm -r mytable}} before creating it again (or just {{mv}} it, so you have 
access to snapshots).

> C* 2.1 doesn't create dir name with uuid if dir is already present
> --
>
> Key: CASSANDRA-10798
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: C* 2.1.11
>Reporter: MASSIMO CELLI
> Fix For: 2.1.x
>
>
> On C* 2.1.12, if you create a new table and a directory with the same name 
> already exists under the keyspace, C* will simply use that directory 
> rather than creating a new one that has a uuid in the name.
> Even if you drop and recreate the same table, it will still use the previous 
> dir and never switch to a new one with a uuid. This can happen on one of the 
> nodes in the cluster while the other nodes use the uuid format for the same 
> table.
> For example, I dropped and recreated the same table three times in this test 
> on a two-node cluster:
> node1
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable
> node2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
> mytable-678a7e31988511e58ce7cfa0aa9730a2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
> mytable-cade4ee1988411e58ce7cfa0aa9730a2
> drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
> mytable-db1b9b41988511e58ce7cfa0aa9730a2
> This seems to break the changes introduced by CASSANDRA-5202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8942) Keep node up even when bootstrap is failed (and provide tool to resume bootstrap)

2015-12-01 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034063#comment-15034063
 ] 

Yuki Morishita commented on CASSANDRA-8942:
---

I think we can. Would you mind creating a ticket?

> Keep node up even when bootstrap is failed (and provide tool to resume 
> bootstrap)
> -
>
> Key: CASSANDRA-8942
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8942
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Streaming and Messaging
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
> Fix For: 2.2.0 beta 1
>
>
> With CASSANDRA-8838, we can keep a bootstrapping node up when some streaming 
> has failed, provided we offer a tool to resume the failed bootstrap streaming.
> A failed bootstrap node enters a mode similar to 'write survey mode': other 
> nodes in the cluster still view it as bootstrapping, and they send writes to 
> the bootstrapping node as well.
> By providing a new nodetool command to resume bootstrap from the saved 
> bootstrap state, we can continue bootstrapping after resolving the issue that 
> caused the previous bootstrap failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9474) Validate dc information on startup

2015-12-01 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034062#comment-15034062
 ] 

Paulo Motta commented on CASSANDRA-9474:


Thanks! Resubmitted tests, should be +1 if CI doesn't complain.

||2.1||2.2||3.0||3.1||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.1...pauloricardomg:2.1-9474]|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-9474-v2]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-9474-v2]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.1...pauloricardomg:3.1-9474-v2]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-9474-v2]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-9474-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-9474-v2-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-9474-v2-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.1-9474-v2-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-9474-v2-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-9474-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-9474-v2-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-9474-v2-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.1-9474-v2-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-9474-v2-dtest/lastCompletedBuild/testReport/]|

> Validate dc information on startup
> --
>
> Key: CASSANDRA-9474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9474
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Cassandra 2.1.5
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
> Fix For: 3.1, 3.2, 2.1.x, 2.2.x, 3.0.x
>
> Attachments: CASSANDRA-9474-2.2-1.patch, CASSANDRA-9474-2.2.patch, 
> CASSANDRA-9474-3.0-1.patch, CASSANDRA-9474-dtest.patch, 
> CASSANDRA-9474-trunk.patch, cassandra-2.1-9474.patch, 
> cassandra-2.1-dc_rack_healthcheck.patch
>
>
> When using GossipingPropertyFileSnitch it is possible to change the data 
> center and rack of a live node by changing the cassandra-rackdc.properties 
> file. Should this really be possible? In the documentation at 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/initialize/initializeMultipleDS.html
>  it's stated that you should ??Choose the name carefully; renaming a data 
> center is not possible??, but with this functionality it doesn't seem 
> impossible (maybe just a bit hard, with changing replication etc.).
> This functionality was introduced by CASSANDRA-5897, so I'm guessing there is 
> some use case for it?
> Personally, I would want the DC/rack settings to be as restricted as the 
> cluster name; otherwise, if a node could just join another data center without 
> removing its local information, couldn't it mess up the token ranges? And 
> suddenly the old data center/rack would lose one replica of all the data that 
> the node contains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Update of "FrontPage" by JoshuaMcKenzie

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "FrontPage" page has been changed by JoshuaMcKenzie:
https://wiki.apache.org/cassandra/FrontPage?action=diff&rev1=110&rev2=111

   * [[ArchitectureInternals|Architecture Internals]]
   * [[TopLevelPackages|Top Level Packages]]
   * [[CLI%20Design|CLI Design]]
-  * [[HowToContribute|How To Contribute]]?
+  * [[HowToContribute|How To Contribute]]
   * [[HowToReview|How To Review]]
-  * [[How To Commit?|HowToCommit]]
+  * [[HowToCommit|How To Commit]]
   * [[HowToPublishReleases|How To Release]] (Note: currently a work in 
progress) (Note: only relevant to Cassandra Committers)
   * [[Windows Development|WindowsDevelopment]]
   * [[LoggingGuidelines|Logging Guidelines|]]


[jira] [Commented] (CASSANDRA-10739) Timeout for CQL Deletes on an Entire Partition Against Specified Columns

2015-12-01 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034100#comment-15034100
 ] 

Benjamin Lerer commented on CASSANDRA-10739:


Thanks for the review.

> Timeout for CQL Deletes on an Entire Partition Against Specified Columns
> 
>
> Key: CASSANDRA-10739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10739
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Caleb William Rackliffe
>Assignee: Benjamin Lerer
> Fix For: 3.1
>
> Attachments: 10739-3.0.txt
>
>
> {noformat}
> cqlsh:graphs> delete color from composite_pk where k1 = '1' and k2 = '1';
> cqlsh:graphs> create table composite_with_clustering(k1 text, k2 text, c1 
> text, color text, value float, primary key ((k1, k2), c1));
> cqlsh:graphs> insert into composite_with_clustering(k1, k2, c1, value) values 
> ('1','1', '1', 6);
> cqlsh:graphs> insert into composite_with_clustering(k1, k2, c1, color) values 
> ('1','1', '2', 'green');
> cqlsh:graphs> delete color from composite_with_clustering where k1 = '1' and 
> k2 = '1';
> WriteTimeout: code=1100 [Coordinator node timed out waiting for replica 
> nodes' responses] message="Operation timed out - received only 0 responses." 
> info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> {{Clustering$Serializer}} clearly doesn't like this:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-19 20:55:15,935  
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.AssertionError: Invalid clustering for the table: 
> org.apache.cassandra.db.Clustering$2@3157dded
>   at 
> org.apache.cassandra.db.Clustering$Serializer.serialize(Clustering.java:136) 
> ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:159)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:599)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:291)
>  ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[cassandra-all-3.0.0.710.jar:3.0.0.710]
> {noformat}
> If this isn't supported, there should probably be a more obvious error 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10674) Materialized View SSTable streaming/leaving status race on decommission

2015-12-01 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034104#comment-15034104
 ] 

Joel Knighton commented on CASSANDRA-10674:
---

I believe you are correct - in fact, I think if either 1) there are pending 
nodes or 2) there are no view natural endpoints, the current MV design will not 
want the batchlog cleaned up on a successful write.

It's worth asking Jake/Carl about intent in the original MV design.

> Materialized View SSTable streaming/leaving status race on decommission
> ---
>
> Key: CASSANDRA-10674
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10674
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Distributed Metadata
>Reporter: Joel Knighton
>Assignee: Paulo Motta
> Fix For: 3.0.1, 3.1
>
> Attachments: leaving-node-debug.log, receiving-node-debug.log
>
>
> On decommission of a node in a cluster with materialized views, it is 
> possible for the decommissioning node to begin streaming sstables for an MV 
> base table before the receiving node is aware of the leaving status.
> The materialized view base/view replica pairing checks pending endpoints to 
> handle the case when an sstable is received from a leaving node; without the 
> leaving message, this check breaks and an exception is thrown. The streamed 
> sstable is never applied.
> Logs from a decommissioning node and a node receiving such a stream are 
> attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2015-12-01 Thread Marcin Szymaniuk (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034102#comment-15034102
 ] 

Marcin Szymaniuk commented on CASSANDRA-7281:
-

I'm still interested. I will have a look at your patch.

> SELECT on tuple relations are broken for mixed ASC/DESC clustering order
> 
>
> Key: CASSANDRA-7281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Marcin Szymaniuk
> Fix For: 2.1.x
>
> Attachments: 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v2.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v3.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v4.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v5.patch, 
> 7281_unit_tests.txt
>
>
> As noted on 
> [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
>  the tuple notation is broken when the clustering order mixes ASC and DESC 
> directives because the range of data they describe don't correspond to a 
> single continuous slice internally. To copy the example from CASSANDRA-6875:
> {noformat}
> cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
> CLUSTERING ORDER BY (b DESC, c ASC);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
> cqlsh:ks> SELECT * FROM foo WHERE a=0;
>  a | b | c
> ---+---+---
>  0 | 2 | 0
>  0 | 1 | 0
>  0 | 1 | 1
>  0 | 0 | 0
> (4 rows)
> cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
>  a | b | c
> ---+---+---
>  0 | 2 | 0
> (1 rows)
> {noformat}
> The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
> For that specific example we should generate 2 internal slices, but I believe 
> that with more clustering columns we may have more slices.
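
The semantics the description expects can be checked outside Cassandra. This is a small illustrative Python sketch (not Cassandra code): CQL tuple relations compare the logical values lexicographically, regardless of the on-disk clustering order, and sorting the rows in storage order shows why a single internal slice cannot express the query.

```python
# Rows from the example table: PRIMARY KEY (a, b, c),
# CLUSTERING ORDER BY (b DESC, c ASC).
rows = [(0, 2, 0), (0, 1, 0), (0, 1, 1), (0, 0, 0)]

def matches_tuple_relation(row):
    # CQL tuple relations compare *logical* values lexicographically,
    # independent of the clustering order used for storage.
    a, b, c = row
    return a == 0 and (b, c) > (1, 0)

expected = [r for r in rows if matches_tuple_relation(r)]
# expected rows: (0, 2, 0) and (0, 1, 1), as stated in the ticket

# Storage order sorts by (-b, c). The matching rows are *not* contiguous
# in this order, so the query needs (at least) two internal slices.
stored = sorted(rows, key=lambda r: (-r[1], r[2]))
match_mask = [matches_tuple_relation(r) for r in stored]
```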



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9666) Provide an alternative to DTCS

2015-12-01 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034070#comment-15034070
 ] 

Jeff Jirsa commented on CASSANDRA-9666:
---

If/when it's wont-fixed, I'll continue updating it at 
https://github.com/jeffjirsa/twcs/

We're almost certainly going to continue using TWCS, and given that others are 
using it in production, I'll continue maintaining it until that's no longer 
true.


> Provide an alternative to DTCS
> --
>
> Key: CASSANDRA-9666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9666
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 2.1.x, 2.2.x
>
>
> DTCS is great for time series data, but it comes with caveats that make it 
> difficult to use in production (typical operator behaviors such as bootstrap, 
> removenode, and repair have MAJOR caveats as they relate to 
> max_sstable_age_days, and hints/read repair break the selection algorithm).
> I'm proposing an alternative, TimeWindowCompactionStrategy, that sacrifices 
> the tiered nature of DTCS in order to address some of DTCS' operational 
> shortcomings. I believe it is necessary to propose an alternative rather than 
> simply adjusting DTCS, because it fundamentally removes the tiered nature in 
> order to remove the parameter max_sstable_age_days - the result is very very 
> different, even if it is heavily inspired by DTCS. 
> Specifically, rather than creating a number of windows of ever increasing 
> sizes, this strategy allows an operator to choose the window size, compact 
> with STCS within the first window of that size, and aggressive compact down 
> to a single sstable once that window is no longer current. The window size is 
> a combination of unit (minutes, hours, days) and size (1, etc), such that an 
> operator can expect all data using a block of that size to be compacted 
> together (that is, if your unit is hours, and size is 6, you will create 
> roughly 4 sstables per day, each one containing roughly 6 hours of data). 
> The result addresses a number of the problems with 
> DateTieredCompactionStrategy:
> - At the present time, DTCS’s first window is compacted using an unusual 
> selection criteria, which prefers files with earlier timestamps, but ignores 
> sizes. In TimeWindowCompactionStrategy, the first window data will be 
> compacted with the well tested, fast, reliable STCS. All STCS options can be 
> passed to TimeWindowCompactionStrategy to configure the first window’s 
> compaction behavior.
> - HintedHandoff may put old data in new sstables, but it will have little 
> impact other than slightly reduced efficiency (sstables will cover a wider 
> range, but the old timestamps will not impact sstable selection criteria 
> during compaction)
> - ReadRepair may put old data in new sstables, but it will have little impact 
> other than slightly reduced efficiency (sstables will cover a wider range, 
> but the old timestamps will not impact sstable selection criteria during 
> compaction)
> - Small, old sstables resulting from streams of any kind will be swiftly and 
> aggressively compacted with the other sstables matching their similar 
> maxTimestamp, without causing sstables in neighboring windows to grow in size.
> - The configuration options are explicit and straightforward - the tuning 
> parameters leave little room for error. The window is set in common, easily 
> understandable terms such as “12 hours”, “1 Day”, “30 days”. The 
> minute/hour/day options are granular enough for users keeping data for hours, 
> and users keeping data for years. 
> - There is no explicitly configurable max sstable age, though sstables will 
> naturally stop compacting once new data is written in that window. 
> - Streaming operations can create sstables with old timestamps, and they'll 
> naturally be joined together with sstables in the same time bucket. This is 
> true for bootstrap/repair/sstableloader/removenode. 
> - It remains true that if old data and new data is written into the memtable 
> at the same time, the resulting sstables will be treated as if they were new 
> sstables, however, that no longer negatively impacts the compaction 
> strategy’s selection criteria for older windows. 
> Patch provided for : 
> - 2.1: https://github.com/jeffjirsa/cassandra/commits/twcs-2.1 
> - 2.2: https://github.com/jeffjirsa/cassandra/commits/twcs-2.2
> - trunk (post-8099):  https://github.com/jeffjirsa/cassandra/commits/twcs 
> Rebased, force-pushed July 18, with bug fixes for estimated pending 
> compactions and potential starvation if more than min_threshold tables 
> existed in current window but STCS did not consider them viable candidates
> Rebased, force-pushed Aug 20 to bring in relevant logic from CASSANDRA-9882



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

cassandra git commit: Rejects partition range deletions when columns are specified

2015-12-01 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 2af4fba7e -> 3864b2114


Rejects partition range deletions when columns are specified

patch by Benjamin Lerer; reviewed by Carl Yeksigian for CASSANDRA-10739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3864b211
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3864b211
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3864b211

Branch: refs/heads/cassandra-3.0
Commit: 3864b2114ab11a02cf55e91c1e5553c9c4f854bc
Parents: 2af4fba
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:10:11 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:12:46 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bd14e67..7fffbbf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.1
+ * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
  * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
  * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
  * Fix SELECT statement with IN restrictions on partition key,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 0efe35c..daeecfe 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -78,6 +78,12 @@ public class DeleteStatement extends ModificationStatement
 {
 if (!regularDeletions.isEmpty())
 {
+// if the clustering size is zero but there are some 
clustering columns, it means that it's a
+// range deletion (the full partition) in which case we need 
to throw an error as range deletion
+// do not support specific columns
+checkFalse(clustering.size() == 0 && 
cfm.clusteringColumns().size() != 0,
+   "Range deletions are not supported for specific 
columns");
+
 params.newRow(clustering);
 
 for (Operation op : regularDeletions)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 5d9ef8f..4f35afa 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -718,6 +718,8 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering >= ?", 2, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
 
@@ -911,6 +913,12 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND (clustering_1, clustering_2) >= (?, ?)", 2, 3, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 >= ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 = ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
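
The condition behind the new checkFalse guard above can be pulled out and exercised standalone (a hedged sketch mirroring that guard; the class and method names here are invented for illustration, not Cassandra API):

```java
public class RangeDeleteCheck {
    // Mirrors the added check: a DELETE that names specific columns is
    // rejected when the table has clustering columns but the statement's
    // clustering prefix is empty, because the statement then covers a range
    // of rows (the full partition) rather than a single row.
    static boolean isRejected(int clusteringSize, int clusteringColumnsInTable,
                              boolean deletesSpecificColumns) {
        return deletesSpecificColumns
            && clusteringSize == 0
            && clusteringColumnsInTable != 0;
    }

    public static void main(String[] args) {
        // DELETE value FROM t WHERE pk = ? on a table with clustering columns:
        System.out.println(isRejected(0, 2, true));   // rejected
        // Same DELETE on a table with no clustering columns (one row per
        // partition), which is still allowed:
        System.out.println(isRejected(0, 0, true));
    }
}
```

Note this covers only the case added by the commit (no clustering restriction at all); range deletions expressed via clustering inequalities were already rejected elsewhere, as the pre-existing test at line 718 of DeleteTest shows.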
 



[2/3] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread blerer
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60aeef3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60aeef3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60aeef3d

Branch: refs/heads/trunk
Commit: 60aeef3d663d18243b19e56b9f1f9d95a1d28908
Parents: 924fb4d 3864b21
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:14:58 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:14:58 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60aeef3d/CHANGES.txt
--
diff --cc CHANGES.txt
index 99777ec,7fffbbf..ed66b69
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.1
 +3.1
 +Merged from 3.0:
+  * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
   * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
   * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
   * Fix SELECT statement with IN restrictions on partition key,



[1/3] cassandra git commit: Rejects partition range deletions when columns are specified

2015-12-01 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk c92dab522 -> 7c3e0b191


Rejects partition range deletions when columns are specified

patch by Benjamin Lerer; reviewed by Carl Yeksigian for CASSANDRA-10739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3864b211
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3864b211
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3864b211

Branch: refs/heads/trunk
Commit: 3864b2114ab11a02cf55e91c1e5553c9c4f854bc
Parents: 2af4fba
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:10:11 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:12:46 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bd14e67..7fffbbf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.1
+ * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
  * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
  * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
  * Fix SELECT statement with IN restrictions on partition key,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 0efe35c..daeecfe 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -78,6 +78,12 @@ public class DeleteStatement extends ModificationStatement
 {
 if (!regularDeletions.isEmpty())
 {
+// if the clustering size is zero but there are some 
clustering columns, it means that it's a
+// range deletion (the full partition) in which case we need 
to throw an error as range deletion
+// do not support specific columns
+checkFalse(clustering.size() == 0 && 
cfm.clusteringColumns().size() != 0,
+   "Range deletions are not supported for specific 
columns");
+
 params.newRow(clustering);
 
 for (Operation op : regularDeletions)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 5d9ef8f..4f35afa 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -718,6 +718,8 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering >= ?", 2, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
 
@@ -911,6 +913,12 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND (clustering_1, clustering_2) >= (?, ?)", 2, 3, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 >= ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 = ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
 



[3/3] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-12-01 Thread blerer
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c3e0b19
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c3e0b19
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c3e0b19

Branch: refs/heads/trunk
Commit: 7c3e0b191f5b535a212866925d57a23e81b063de
Parents: c92dab5 60aeef3
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:15:59 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:16:08 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c3e0b19/CHANGES.txt
--
diff --cc CHANGES.txt
index c91066a,ed66b69..60aaa49
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,19 @@@
 +3.2
 + * Make compression ratio much more accurate (CASSANDRA-10225)
 + * Optimize building of Clustering object when only one is created 
(CASSANDRA-10409)
 + * Make index building pluggable (CASSANDRA-10681)
 + * Add sstable flush observer (CASSANDRA-10678)
 + * Improve NTS endpoints calculation (CASSANDRA-10200)
 + * Improve performance of the folderSize function (CASSANDRA-10677)
 + * Add support for type casting in selection clause (CASSANDRA-10310)
 + * Added graphing option to cassandra-stress (CASSANDRA-7918)
 + * Abort in-progress queries that time out (CASSANDRA-7392)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 +
 +
  3.1
  Merged from 3.0:
+  * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
   * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
   * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
   * Fix SELECT statement with IN restrictions on partition key,



[Cassandra Wiki] Trivial Update of "HowToContribute" by JoshuaMcKenzie

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "HowToContribute" page has been changed by JoshuaMcKenzie:
https://wiki.apache.org/cassandra/HowToContribute?action=diff&rev1=64&rev2=65

* Make sure all tests pass by running "ant test" in the project directory.
* You can run specific tests like so: `ant test -Dtest.name=SSTableReaderTest`
* For testing multi-node behavior, https://github.com/pcmanus/ccm is useful
+   * Consider going through the [[HowToReview|Review Checklist]] for your code
   1. When you're happy with the result create a patch:
* git add <file>
* git commit -m '<message>'


[jira] [Created] (CASSANDRA-10795) Improve Failure Detector Unknown EP message

2015-12-01 Thread Anthony Cozzie (JIRA)
Anthony Cozzie created CASSANDRA-10795:
--

 Summary: Improve Failure Detector Unknown EP message
 Key: CASSANDRA-10795
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10795
 Project: Cassandra
  Issue Type: Bug
Reporter: Anthony Cozzie
Assignee: Anthony Cozzie
Priority: Minor


When the failure detector is asked whether an unknown endpoint is alive, it 
prints an uninformative error message.  This patch adds a stack trace to the 
print statement.
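
One way such a change can be sketched (hypothetical names; the actual patch is the attached trunk-10795.txt): constructing a Throwable at the log site captures the caller's stack trace without anything being thrown, so the message shows who asked about the unknown endpoint:

```java
public class UnknownEndpointLogger {
    // Builds a diagnostic message for an unknown endpoint. The Throwable is
    // never thrown; it exists only so getStackTrace() reveals the caller.
    static String describe(String endpoint) {
        StackTraceElement[] frames = new Throwable().getStackTrace();
        // frames[0] is this method; frames[1] is the immediate caller.
        StackTraceElement caller = frames.length > 1 ? frames[1] : frames[0];
        return "unknown endpoint " + endpoint + " (asked from "
            + caller.getClassName() + "." + caller.getMethodName() + ")";
    }

    public static void main(String[] args) {
        System.out.println(describe("10.0.0.1"));
    }
}
```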



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10795) Improve Failure Detector Unknown EP message

2015-12-01 Thread Anthony Cozzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Cozzie updated CASSANDRA-10795:
---
Attachment: trunk-10795.txt

> Improve Failure Detector Unknown EP message
> ---
>
> Key: CASSANDRA-10795
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10795
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
>Priority: Minor
> Attachments: trunk-10795.txt
>
>
> When the failure detector is asked whether an unknown endpoint is alive, it 
> prints an uninformative error message.  This patch adds a stack trace to the 
> print statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-12-01 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034081#comment-15034081
 ] 

T Jake Luciani commented on CASSANDRA-10592:


[~aweisberg] if you could squash and rebase I'll commit

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths, Streaming and 
> Messaging
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 3.0.1, 3.1, 2.2.x
>
>
> CORRECTION-
> It turns out the exception occurs when running a read using a thrift jdbc 
> driver. Once you have loaded the data with stress below, run 
> SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool - 
> http://www.aquafold.com/aquadatastudio_downloads.html
>  
> The exception:
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml UPDATED!
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
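
The IllegalArgumentException from ByteBuffer.allocate in the trace above generally means a negative capacity was requested; one plausible way a buffer reallocation produces that is int overflow when doubling the capacity (an illustrative sketch under that assumption, not the actual DataOutputBuffer code):

```java
import java.nio.ByteBuffer;

public class ReallocateOverflow {
    // Doubling a capacity above Integer.MAX_VALUE / 2 overflows to a negative
    // int; ByteBuffer.allocate(negative) then throws IllegalArgumentException.
    static int doubledCapacity(int current) {
        return current * 2;
    }

    public static void main(String[] args) {
        int big = Integer.MAX_VALUE / 2 + 1;
        int doubled = doubledCapacity(big);
        System.out.println(doubled < 0); // overflowed to a negative value
        try {
            ByteBuffer.allocate(doubled);
        } catch (IllegalArgumentException e) {
            System.out.println("allocate rejected negative capacity");
        }
    }
}
```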

[jira] [Comment Edited] (CASSANDRA-9666) Provide an alternative to DTCS

2015-12-01 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034070#comment-15034070
 ] 

Jeff Jirsa edited comment on CASSANDRA-9666 at 12/1/15 5:07 PM:


 If/when it's wont-fixed, I'll continue updating at 
https://github.com/jeffjirsa/twcs/

We're almost certainly going to continue using TWCS, and given that others are 
using it in production, I'll continue maintaining it until that's no longer true

For what it's worth, we'll continue using TWCS because the explicit confit 
options are easier to reason about, and it's significantly less likely to be 
confused by old data via foreground read repair



was (Author: jjirsa):
 If/when it's wont-fixed, I'll continue updating at 
https://github.com/jeffjirsa/twcs/

We're almost certainly going to continue using TWCS, and given that others are 
using it in production, I'll continue maintaining it until that's no longer true


> Provide an alternative to DTCS
> --
>
> Key: CASSANDRA-9666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9666
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 2.1.x, 2.2.x
>
>
> DTCS is great for time series data, but it comes with caveats that make it 
> difficult to use in production (typical operator behaviors such as bootstrap, 
> removenode, and repair have MAJOR caveats as they relate to 
> max_sstable_age_days, and hints/read repair break the selection algorithm).
> I'm proposing an alternative, TimeWindowCompactionStrategy, that sacrifices 
> the tiered nature of DTCS in order to address some of DTCS' operational 
> shortcomings. I believe it is necessary to propose an alternative rather than 
> simply adjusting DTCS, because it fundamentally removes the tiered nature in 
> order to remove the parameter max_sstable_age_days - the result is very very 
> different, even if it is heavily inspired by DTCS. 
> Specifically, rather than creating a number of windows of ever increasing 
> sizes, this strategy allows an operator to choose the window size, compact 
> with STCS within the first window of that size, and aggressively compact down 
> to a single sstable once that window is no longer current. The window size is 
> a combination of unit (minutes, hours, days) and size (1, etc), such that an 
> operator can expect all data using a block of that size to be compacted 
> together (that is, if your unit is hours, and size is 6, you will create 
> roughly 4 sstables per day, each one containing roughly 6 hours of data). 
> The result addresses a number of the problems with 
> DateTieredCompactionStrategy:
> - At the present time, DTCS’s first window is compacted using an unusual 
> selection criteria, which prefers files with earlier timestamps, but ignores 
> sizes. In TimeWindowCompactionStrategy, the first window data will be 
> compacted with the well tested, fast, reliable STCS. All STCS options can be 
> passed to TimeWindowCompactionStrategy to configure the first window’s 
> compaction behavior.
> - HintedHandoff may put old data in new sstables, but it will have little 
> impact other than slightly reduced efficiency (sstables will cover a wider 
> range, but the old timestamps will not impact sstable selection criteria 
> during compaction)
> - ReadRepair may put old data in new sstables, but it will have little impact 
> other than slightly reduced efficiency (sstables will cover a wider range, 
> but the old timestamps will not impact sstable selection criteria during 
> compaction)
> - Small, old sstables resulting from streams of any kind will be swiftly and 
> aggressively compacted with the other sstables matching their similar 
> maxTimestamp, without causing sstables in neighboring windows to grow in size.
> - The configuration options are explicit and straightforward - the tuning 
> parameters leave little room for error. The window is set in common, easily 
> understandable terms such as “12 hours”, “1 Day”, “30 days”. The 
> minute/hour/day options are granular enough for users keeping data for hours, 
> and users keeping data for years. 
> - There is no explicitly configurable max sstable age, though sstables will 
> naturally stop compacting once new data is written in that window. 
> - Streaming operations can create sstables with old timestamps, and they'll 
> naturally be joined together with sstables in the same time bucket. This is 
> true for bootstrap/repair/sstableloader/removenode. 
> - It remains true that if old data and new data are written into the memtable 
> at the same time, the resulting sstables will be treated as if they were new 
> sstables; however, that no longer negatively impacts the compaction 
> strategy’s selection criteria for older windows. 
> Patches provided for: 
> - 2.1: 

[1/2] cassandra git commit: Rejects partition range deletions when columns are specified

2015-12-01 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.1 924fb4d50 -> 60aeef3d6


Rejects partition range deletions when columns are specified

patch by Benjamin Lerer; reviewed by Carl Yeksigian for CASSANDRA-10739


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3864b211
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3864b211
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3864b211

Branch: refs/heads/cassandra-3.1
Commit: 3864b2114ab11a02cf55e91c1e5553c9c4f854bc
Parents: 2af4fba
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:10:11 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:12:46 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bd14e67..7fffbbf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.1
+ * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
  * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
  * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
  * Fix SELECT statement with IN restrictions on partition key,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
index 0efe35c..daeecfe 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DeleteStatement.java
@@ -78,6 +78,12 @@ public class DeleteStatement extends ModificationStatement
 {
 if (!regularDeletions.isEmpty())
 {
+// if the clustering size is zero but there are some 
clustering columns, it means that it's a
+// range deletion (the full partition) in which case we need 
to throw an error as range deletion
+// do not support specific columns
+checkFalse(clustering.size() == 0 && 
cfm.clusteringColumns().size() != 0,
+   "Range deletions are not supported for specific 
columns");
+
 params.newRow(clustering);
 
 for (Operation op : regularDeletions)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3864b211/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 5d9ef8f..4f35afa 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -718,6 +718,8 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering >= ?", 2, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
 
@@ -911,6 +913,12 @@ public class DeleteTest extends CQLTester
 // Test invalid queries
 assertInvalidMessage("Range deletions are not supported for 
specific columns",
  "DELETE value FROM %s WHERE partitionKey = ? 
AND (clustering_1, clustering_2) >= (?, ?)", 2, 3, 1);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 >= ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = ? 
AND clustering_1 = ?", 2, 3);
+assertInvalidMessage("Range deletions are not supported for 
specific columns",
+ "DELETE value FROM %s WHERE partitionKey = 
?", 2);
 }
 }
 



[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread blerer
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60aeef3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60aeef3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60aeef3d

Branch: refs/heads/cassandra-3.1
Commit: 60aeef3d663d18243b19e56b9f1f9d95a1d28908
Parents: 924fb4d 3864b21
Author: Benjamin Lerer 
Authored: Tue Dec 1 18:14:58 2015 +0100
Committer: Benjamin Lerer 
Committed: Tue Dec 1 18:14:58 2015 +0100

--
 CHANGES.txt  | 1 +
 .../apache/cassandra/cql3/statements/DeleteStatement.java| 6 ++
 .../cassandra/cql3/validation/operations/DeleteTest.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60aeef3d/CHANGES.txt
--
diff --cc CHANGES.txt
index 99777ec,7fffbbf..ed66b69
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.1
 +3.1
 +Merged from 3.0:
+  * Rejects partition range deletions when columns are specified 
(CASSANDRA-10739)
   * Fix error when saving cached key for old format sstable (CASSANDRA-10778)
   * Invalidate prepared statements on DROP INDEX (CASSANDRA-10758)
   * Fix SELECT statement with IN restrictions on partition key,



[Cassandra Wiki] Trivial Update of "HowToContribute" by JoshuaMcKenzie

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "HowToContribute" page has been changed by JoshuaMcKenzie:
https://wiki.apache.org/cassandra/HowToContribute?action=diff&rev1=63&rev2=64

* git format-patch
* mv <patch file> <descriptive name> (e.g. trunk-123.txt, cassandra-0.6-123.txt)
   1. Attach the newly generated patch to the issue and click "Submit patch" in 
the left side of the JIRA page
-  1. Wait for other developers or committers to review it and hopefully +1 the 
ticket
+  1. Wait for other developers or committers to review it and hopefully +1 the 
ticket (see [[HowToReview|How To Review]])
   1. Wait for a committer to commit it.
  
  == Testing and Coverage ==


[jira] [Comment Edited] (CASSANDRA-9666) Provide an alternative to DTCS

2015-12-01 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034070#comment-15034070
 ] 

Jeff Jirsa edited comment on CASSANDRA-9666 at 12/1/15 5:19 PM:


 If/when it's wont-fixed, I'll continue updating at 
https://github.com/jeffjirsa/twcs/

We're almost certainly going to continue using TWCS, and given that others are 
using it in production, I'll continue maintaining it until that's no longer true

For what it's worth, we'll continue using TWCS because the explicit config 
options are easier to reason about, and it's significantly less likely to be 
confused by old data via foreground read repair



was (Author: jjirsa):
 If/when it's wont-fixed, I'll continue updating at 
https://github.com/jeffjirsa/twcs/

We're almost certainly going to continue using TWCS, and given that others are 
using it in production, I'll continue maintaining it until that's no longer true

For what it's worth, we'll continue using TWCS because the explicit confit 
options are easier to reason about, and it's significantly less likely to be 
confused by old data via foreground read repair


> Provide an alternative to DTCS
> --
>
> Key: CASSANDRA-9666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9666
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 2.1.x, 2.2.x
>
>
> DTCS is great for time series data, but it comes with caveats that make it 
> difficult to use in production (typical operator behaviors such as bootstrap, 
> removenode, and repair have MAJOR caveats as they relate to 
> max_sstable_age_days, and hints/read repair break the selection algorithm).
> I'm proposing an alternative, TimeWindowCompactionStrategy, that sacrifices 
> the tiered nature of DTCS in order to address some of DTCS' operational 
> shortcomings. I believe it is necessary to propose an alternative rather than 
> simply adjusting DTCS, because it fundamentally removes the tiered nature in 
> order to remove the parameter max_sstable_age_days - the result is very very 
> different, even if it is heavily inspired by DTCS. 
> Specifically, rather than creating a number of windows of ever increasing 
> sizes, this strategy allows an operator to choose the window size, compact 
> with STCS within the first window of that size, and aggressive compact down 
> to a single sstable once that window is no longer current. The window size is 
> a combination of unit (minutes, hours, days) and size (1, etc), such that an 
> operator can expect all data using a block of that size to be compacted 
> together (that is, if your unit is hours, and size is 6, you will create 
> roughly 4 sstables per day, each one containing roughly 6 hours of data). 
> The result addresses a number of the problems with 
> DateTieredCompactionStrategy:
> - At the present time, DTCS’s first window is compacted using an unusual 
> selection criteria, which prefers files with earlier timestamps, but ignores 
> sizes. In TimeWindowCompactionStrategy, the first window data will be 
> compacted with the well tested, fast, reliable STCS. All STCS options can be 
> passed to TimeWindowCompactionStrategy to configure the first window’s 
> compaction behavior.
> - HintedHandoff may put old data in new sstables, but it will have little 
> impact other than slightly reduced efficiency (sstables will cover a wider 
> range, but the old timestamps will not impact sstable selection criteria 
> during compaction)
> - ReadRepair may put old data in new sstables, but it will have little impact 
> other than slightly reduced efficiency (sstables will cover a wider range, 
> but the old timestamps will not impact sstable selection criteria during 
> compaction)
> - Small, old sstables resulting from streams of any kind will be swiftly and 
> aggressively compacted with the other sstables matching their similar 
> maxTimestamp, without causing sstables in neighboring windows to grow in size.
> - The configuration options are explicit and straightforward - the tuning 
> parameters leave little room for error. The window is set in common, easily 
> understandable terms such as “12 hours”, “1 Day”, “30 days”. The 
> minute/hour/day options are granular enough for users keeping data for hours, 
> and users keeping data for years. 
> - There is no explicitly configurable max sstable age, though sstables will 
> naturally stop compacting once new data is written in that window. 
> - Streaming operations can create sstables with old timestamps, and they'll 
> naturally be joined together with sstables in the same time bucket. This is 
> true for bootstrap/repair/sstableloader/removenode. 
> - It remains true that if old data and new data is written into the memtable 
> at the same time, the 
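To make the window-size description above concrete, here is a small illustrative
Java sketch of TWCS-style bucketing (class and method names are invented; this is
not the actual TWCS implementation):

```java
import java.util.TreeSet;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of time-window bucketing; not the real TWCS code.
public class WindowBucketing
{
    // Inclusive lower bound of the window (defined by unit + size) containing the timestamp.
    public static long windowLowerBound(TimeUnit unit, int size, long timestampMillis)
    {
        long windowMillis = unit.toMillis(size);
        return (timestampMillis / windowMillis) * windowMillis;
    }

    public static void main(String[] args)
    {
        // With unit = HOURS and size = 6, a day's worth of flushes lands in 4
        // distinct windows, i.e. roughly 4 sstables per day once each window
        // stops receiving new data and is compacted down to a single sstable.
        TreeSet<Long> windows = new TreeSet<>();
        for (long t = 0; t < TimeUnit.DAYS.toMillis(1); t += TimeUnit.MINUTES.toMillis(30))
            windows.add(windowLowerBound(TimeUnit.HOURS, 6, t));
        System.out.println(windows.size()); // 4
    }
}
```

All sstables whose maxTimestamp falls in the same window share a bucket, which is
why stragglers from streams or hints simply join the bucket matching their data.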

[Cassandra Wiki] Update of "ContributorsGroup" by BrandonWilliams

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ContributorsGroup" page has been changed by BrandonWilliams:
https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=50&rev2=51

   * SamTunnicliffe
   * PauloMotta
   * AdamHolmberg
+  * JoelKnighton
  


[jira] [Updated] (CASSANDRA-10794) System table name resource_role_permissons_index is spelt wrong!

2015-12-01 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10794:

Fix Version/s: 3.x
   3.0.x

> System table name resource_role_permissons_index is spelt wrong!
> 
>
> Key: CASSANDRA-10794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10794
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Bowsher
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> System table name resource_role_permissons_index is spelt wrong!
> "permissons" is missing an "i"
> Fixing that isn't going to be fun, though :-(



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9474) Validate dc information on startup

2015-12-01 Thread Marcus Olsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034282#comment-15034282
 ] 

Marcus Olsson commented on CASSANDRA-9474:
--

Great! :)

> Validate dc information on startup
> --
>
> Key: CASSANDRA-9474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9474
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Cassandra 2.1.5
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
> Fix For: 3.1, 3.2, 2.1.x, 2.2.x, 3.0.x
>
> Attachments: CASSANDRA-9474-2.2-1.patch, CASSANDRA-9474-2.2.patch, 
> CASSANDRA-9474-3.0-1.patch, CASSANDRA-9474-dtest.patch, 
> CASSANDRA-9474-trunk.patch, cassandra-2.1-9474.patch, 
> cassandra-2.1-dc_rack_healthcheck.patch
>
>
> When using GossipingPropertyFileSnitch it is possible to change the data 
> center and rack of a live node by changing the cassandra-rackdc.properties 
> file. Should this really be possible? In the documentation at 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/initialize/initializeMultipleDS.html
>  it's stated that you should ??Choose the name carefully; renaming a data 
> center is not possible??, but with this functionality it doesn't seem 
> impossible(maybe a bit hard with changing replication etc.).
> This functionality was introduced by CASSANDRA-5897 so I'm guessing there is 
> some use case for this?
> Personally I would want the DC/rack settings to be as restricted as the 
> cluster name, otherwise if a node could just join another data center without 
> removing its local information couldn't it mess up the token ranges? And 
> suddenly the old data center/rack would lose 1 replica of all the data that 
> the node contains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8639) Can OOM on CL replay with dense mutations

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034214#comment-15034214
 ] 

Ariel Weisberg commented on CASSANDRA-8639:
---

bq. This isn't related to the ticket but maybe we should fix it as well; I 
don't see anyplace we wait for the replay futures to complete before we finish 
recover().
Both 2.1 code and your patch will exit early before the futures have all 
finished. It looks like the old version only waited when there were more than 
max outstanding mutations. Which is also wrong and racy. We should always wait 
for the queue to drain completely before the method exits.
Are you talking about 
{{[CommitLogReplayer.blockForWrites()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L98]}}
 which is invoked from 
{{[CommitLog.recover()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L145]}}?
 I think that is already covered. I can refactor it if you want and have 
{{CommitLogReplayer.recover()}} not return until everything is done anyways. Seems 
like a safe change unless it turns out to deadlock some weird test code 
somewhere.

bq. I'm not sure why futures was changed to a deque. looks like you only use 
queue methods, but maybe I missed it?
It's just a habit for this kind of asynchronous result queue. Muscle memory 
writes a loop that prioritizes consuming result futures over submitting new 
work in order to minimize latency, working set size, and temporal cache 
locality. To do that you need to be able to {{poll()}} a queue and {{List}} 
doesn't let you do that. 

The other issue with doing 1k and then draining 1k is that there is a period 
where the number of tasks in flight is small because the producer is waiting 
for the stragglers from the last 1k to arrive before issuing new work. This can 
starve consumers. This version keeps a targeted number of pending 
mutations/mutation bytes in flight at any given time.
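The loop described above (prefer draining completed futures via {{poll()}}, bound
the mutation bytes in flight, and drain fully before returning) can be sketched
roughly as follows. All names here are hypothetical; this is not the actual patch:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of memory-bounded commit log replay; not Cassandra's code.
public class BoundedReplay
{
    public static final AtomicLong replayedBytes = new AtomicLong();

    public static void replay(ExecutorService pool, List<byte[]> mutations,
                              long maxBytesInFlight) throws Exception
    {
        ArrayDeque<Future<Integer>> pending = new ArrayDeque<>(); // each future yields its mutation's size
        long bytesInFlight = 0;
        for (byte[] m : mutations)
        {
            // Prefer consuming finished work over submitting more (keeps the working set small).
            while (!pending.isEmpty() && pending.peek().isDone())
                bytesInFlight -= pending.poll().get();
            // Over budget: block on the oldest future until back under the byte cap.
            while (bytesInFlight >= maxBytesInFlight)
                bytesInFlight -= pending.poll().get();
            final byte[] mut = m;
            pending.add(pool.submit(() -> { apply(mut); return mut.length; }));
            bytesInFlight += m.length;
        }
        // Always drain completely before returning, so recovery never exits early.
        while (!pending.isEmpty())
            pending.poll().get();
    }

    static void apply(byte[] mutation) // stand-in for applying a mutation to the memtable
    {
        replayedBytes.addAndGet(mutation.length);
    }

    public static void main(String[] args) throws Exception
    {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<byte[]> mutations = java.util.Collections.nCopies(1000, new byte[128]);
        replay(pool, mutations, 4096); // cap roughly 4 KB of mutations in flight
        pool.shutdown();
        System.out.println(replayedBytes.get()); // 128000
    }
}
```

In the real patch the per-mutation cost would come from something like
{{cell.unsharedHeapSize()}} rather than a raw byte-array length.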

bq. The only other thing I noticed was in the test you should validate the data 
test data is not found after you clear the CF in-case the replay isn't working.
[Does clearing the data before 
replay|https://github.com/apache/cassandra/compare/cassandra-2.1...aweisberg:CASSANDRA-8639-2.1?expand=1#diff-92ffe896212dc94b91ad86349f0647abR141]
 and [then checking for it afterwards accomplish 
that?|https://github.com/apache/cassandra/compare/cassandra-2.1...aweisberg:CASSANDRA-8639-2.1?expand=1#diff-92ffe896212dc94b91ad86349f0647abR183]
 Unless you think that {{clearUnsafe()}} might not work it seems sufficient.

bq. You also have a 2.1 utest failure related to CL not sure if that's related. 
org.apache.cassandra.cql3.DropKeyspaceCommitLogRecycleTest.testRecycle
[That is a pretty flakey 
test.|http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_testall/271/testReport/org.apache.cassandra.cql3/DropKeyspaceCommitLogRecycleTest/testRecycle/history/]

bq. And one dtest failure in 2.1 commitlog_test.TestCommitLog.test_bad_crc
[Looks like another failing/unreliable 
test.|http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/361/#showFailuresLink]



> Can OOM on CL replay with dense mutations
> -
>
> Key: CASSANDRA-8639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8639
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.1.x
>
>
> If you write dense mutations with many clustering keys, the replay of the CL 
> can quickly overwhelm a node on startup.  This looks to be caused by the fact 
> we only ensure there are 1000 mutations in flight at a time. but those 
> mutations could have thousands of cells in them.
> A better approach would be to limit the CL replay to the amount of memory in 
> flight using cell.unsharedHeapSize()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8639) Can OOM on CL replay with dense mutations

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034214#comment-15034214
 ] 

Ariel Weisberg edited comment on CASSANDRA-8639 at 12/1/15 6:18 PM:


{quote}
This isn't related to the ticket but maybe we should fix it as well; I don't 
see anyplace we wait for the replay futures to complete before we finish 
recover().
Both 2.1 code and your patch will exit early before the futures have all 
finished. It looks like the old version only waited when there were more than 
max outstanding mutations. Which is also wrong and racy. We should always wait 
for the queue to drain completely before the method exits.
{quote}
Are you talking about 
{{[CommitLogReplayer.blockForWrites()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L98]}}
 which is invoked from 
{{[CommitLog.recover()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L145]}}?
 I think that is already covered. I can refactor it if you want and have 
{{CommitLogReplayer.recover()}} not return until everything is done anyways. Seems 
like a safe change unless it turns out to deadlock some weird test code 
somewhere.

bq. I'm not sure why futures was changed to a deque. looks like you only use 
queue methods, but maybe I missed it?
It's just a habit for this kind of asynchronous result queue. Muscle memory 
writes a loop that prioritizes consuming result futures over submitting new 
work in order to minimize latency, working set size, and temporal cache 
locality. To do that you need to be able to {{poll()}} a queue and {{List}} 
doesn't let you do that. 

The other issue with doing 1k and then draining 1k is that there is a period 
where the number of tasks in flight is small because the producer is waiting 
for the stragglers from the last 1k to arrive before issuing new work. This can 
starve consumers. This version keeps a targeted number of pending 
mutations/mutation bytes in flight at any given time.

bq. The only other thing I noticed was in the test you should validate the data 
test data is not found after you clear the CF in-case the replay isn't working.
[Does clearing the data before 
replay|https://github.com/apache/cassandra/compare/cassandra-2.1...aweisberg:CASSANDRA-8639-2.1?expand=1#diff-92ffe896212dc94b91ad86349f0647abR141]
 and [then checking for it afterwards accomplish 
that?|https://github.com/apache/cassandra/compare/cassandra-2.1...aweisberg:CASSANDRA-8639-2.1?expand=1#diff-92ffe896212dc94b91ad86349f0647abR183]
 Unless you think that {{clearUnsafe()}} might not work it seems sufficient.

bq. You also have a 2.1 utest failure related to CL not sure if that's related. 
org.apache.cassandra.cql3.DropKeyspaceCommitLogRecycleTest.testRecycle
[That is a pretty flakey 
test.|http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_testall/271/testReport/org.apache.cassandra.cql3/DropKeyspaceCommitLogRecycleTest/testRecycle/history/]

bq. And one dtest failure in 2.1 commitlog_test.TestCommitLog.test_bad_crc
[Looks like another failing/unreliable 
test.|http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/361/#showFailuresLink]




was (Author: aweisberg):
bq. This isn't related to the ticket but maybe we should fix it as well; I 
don't see anyplace we wait for the replay futures to complete before we finish 
recover().
Both 2.1 code and your patch will exit early before the futures have all 
finished. It looks like the old version only waited when there were more than 
max outstanding mutations. Which is also wrong and racy. We should always wait 
for the queue to drain completely before the method exits.
Are you talking about 
{{[CommitLogReplayer.blockForWrites()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L98]}}
 which is invoked from 
{{[CommitLog.recover()|https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L145]}}?
 I think that is already covered. I can refactor it if you want and have 
{{CommitLogReplayer.recover()}} not return until everything is done anyways. Seems 
like a safe change unless it turns out to deadlock some weird test code 
somewhere.

bq. I'm not sure why futures was changed to a deque. looks like you only use 
queue methods, but maybe I missed it?
It's just a habit for this kind of asynchronous result queue. Muscle memory 
writes a loop that prioritizes consuming result futures over submitting new 
work in order to minimize latency, working set size, and temporal cache 
locality. To do that you need to be able to {{poll()}} a queue and {{List}} 
doesn't let you do that. 

The other issue with doing 1k and then draining 1k is that there is a period 
where the number of 

[jira] [Updated] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-12-01 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-10122:
---
Fix Version/s: (was: 3.0 beta 2)
   3.1
   3.0.1

> AssertionError after upgrade to 3.0
> ---
>
> Key: CASSANDRA-10122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10122
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0.1, 3.1
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrade tests are encountering this exception after upgrade from 2.2 HEAD to 
> 3.0 HEAD:
> {noformat}
> ERROR [SharedPool-Worker-4] 2015-08-18 12:33:57,858 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xa5ba2c7a, 
> /127.0.0.1:55048 => /127.0.0.1:9042]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at 

[jira] [Updated] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-12-01 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-10122:
---
Reproduced In: 3.0.x  (was: 3.0 alpha 1)

> AssertionError after upgrade to 3.0
> ---
>
> Key: CASSANDRA-10122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10122
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 2
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrade tests are encountering this exception after upgrade from 2.2 HEAD to 
> 3.0 HEAD:
> {noformat}
> ERROR [SharedPool-Worker-4] 2015-08-18 12:33:57,858 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xa5ba2c7a, 
> /127.0.0.1:55048 => /127.0.0.1:9042]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> 

[jira] [Commented] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034273#comment-15034273
 ] 

Ariel Weisberg commented on CASSANDRA-10592:


Squashed and rebased. Tests are running.

|[2.2 
Code|https://github.com/apache/cassandra/compare/cassandra-2.2...aweisberg:CASSANDRA-10592-2.2-squashed-v3?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-2.2-squashed-v3-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-2.2-squashed-v3-dtest/]|
|[3.0 
Code|https://github.com/apache/cassandra/compare/cassandra-3.0...aweisberg:CASSANDRA-10592-3.0-squashed-v3?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-3.0-squashed-v3-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10592-3.0-squashed-v3-dtest/]|


> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths, Streaming and 
> Messaging
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 3.0.1, 3.1, 2.2.x
>
>
> CORRECTION-
> It turns out the exception occurs when running a read using a thrift jdbc 
> driver. Once you have loaded the data with stress below, run 
> SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool - 
> http://www.aquafold.com/aquadatastudio_downloads.html
>  
> The exception:
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> 
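One way to get an {{IllegalArgumentException}} out of {{ByteBuffer.allocate}}
during a grow-by-doubling reallocation, as in the trace above, is int overflow
producing a negative capacity. A hypothetical sketch (not {{DataOutputBuffer}}'s
actual code):

```java
import java.nio.ByteBuffer;

// Illustration of why ByteBuffer.allocate can throw IllegalArgumentException in a
// grow-by-doubling path: doubling an int capacity can wrap to a negative value.
// Hypothetical sketch only; not DataOutputBuffer's implementation.
public class GrowSketch
{
    // Naive doubling: wraps negative once capacity exceeds 2^30.
    public static int naiveGrow(int capacity)
    {
        return capacity * 2;
    }

    // Overflow-safe growth: saturate at Integer.MAX_VALUE instead of wrapping.
    public static int safeGrow(int capacity)
    {
        long doubled = 2L * capacity;
        return (int) Math.min(doubled, Integer.MAX_VALUE);
    }

    public static void main(String[] args)
    {
        int big = 1 << 30; // a 1 GiB capacity
        System.out.println(naiveGrow(big)); // -2147483648
        try
        {
            ByteBuffer.allocate(naiveGrow(big)); // negative capacity is rejected
        }
        catch (IllegalArgumentException expected)
        {
            System.out.println("IllegalArgumentException, as in the stack trace above");
        }
    }
}
```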

[jira] [Commented] (CASSANDRA-9510) assassinating an unknown endpoint could npe

2015-12-01 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034289#comment-15034289
 ] 

Joshua McKenzie commented on CASSANDRA-9510:


[~dbrosius]: You want me to commit this one for you or do you have it?

> assassinating an unknown endpoint could npe
> ---
>
> Key: CASSANDRA-9510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.0.1, 3.1
>
> Attachments: assissinate_unknown.txt
>
>
> If the code assassinates an unknown endpoint, it doesn't generate a 'tokens' 
> collection, which then does
> epState.addApplicationState(ApplicationState.STATUS, 
> StorageService.instance.valueFactory.left(tokens, computeExpireTime()));
> and left(null, time); will npe



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Update of "WritePathForUsers" by MichaelEdge

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "WritePathForUsers" page has been changed by MichaelEdge:
https://wiki.apache.org/cassandra/WritePathForUsers?action=diff&rev1=33&rev2=34

  === If write request modifies materialized view ===
  Keeping a materialized view in sync with its base table adds more complexity 
to the write path and also incurs performance overheads on the replica node in 
the form of read-before-write, locks and batch logs.
   1. The replica node acquires a lock on the partition, to ensure that write 
requests are serialised and applied to base table and materialized views in 
order.
-  1. The replica node reads the partition data and constructs the set of 
deltas to be applied to the materialized view. One insert/update/delete to the 
base table may result in many inserts/updates/deletes to the associated 
materialized view.
+  1. The replica node reads the partition data and constructs the set of 
deltas to be applied to the materialized view. One insert/update/delete to the 
base table may result in one or more inserts/updates/deletes in the associated 
materialized view.
   1. Write data to the Commit Log. 
   1. Create batch log containing updates to the materialized view. The batch 
log ensures the set of updates to the materialized view is atomic, and is part 
of the mechanism that ensures base table and materialized view are kept 
consistent. 
   1. Store the batch log containing the materialized view updates on the local 
replica node.
-  1. Send materialized view updates asynchronously to the materialized view 
replica (note, the materialized view could be stored on the same or a different 
replica node to the base table).
+  1. Send materialized view updates asynchronously to the materialized view 
replica (note, the materialized view partition could be stored on the same or a 
different replica node to the base table).
   1. Write data to the MemTable.
   1. The materialized view replica node will apply the update and return an 
acknowledgement to the base table replica node.
   1. The same process takes place on each replica node that stores the data 
for the partition key.
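
The replica-side ordering above can be summarized in a small sketch. The step names are a hypothetical condensation of the wiki text, not Cassandra's actual method names:

```java
import java.util.List;

public class MvWritePathSketch {
    // Ordered replica-side steps when a write touches a materialized view,
    // as described in the updated wiki section (illustrative labels only).
    static List<String> replicaSteps() {
        return List.of(
            "acquire partition lock",               // serialise base + view updates
            "read partition, compute view deltas",  // the read-before-write cost
            "write commit log",
            "create and store local batch log",     // keeps view updates atomic
            "send view updates to view replica (async)",
            "write memtable");
    }
}
```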


[Cassandra Wiki] Update of "WritePathForUsers" by MichaelEdge

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "WritePathForUsers" page has been changed by MichaelEdge:
https://wiki.apache.org/cassandra/WritePathForUsers?action=diff=32=33

  == The Local Coordinator ==
  The local coordinator receives the write request from the client and performs 
the following:
   1. Firstly, the local coordinator determines which nodes are responsible for 
storing the data:
-  * The first replica is chosen based on hashing the primary key using the 
Partitioner; Murmur3Partitioner is the default.
+   * The first replica is chosen based on hashing the primary key using the 
Partitioner; Murmur3Partitioner is the default.
-  * Other replicas are chosen based on the replication strategy defined for 
the keyspace. In a production cluster this is most likely the 
NetworkTopologyStrategy.
+   * Other replicas are chosen based on the replication strategy defined for 
the keyspace. In a production cluster this is most likely the 
NetworkTopologyStrategy.
   1. The local coordinator determines whether the write request would modify 
an associated materialized view. 
  === If write request modifies materialized view ===
  When using materialized views it’s important to ensure that the base table 
and materialized view are consistent, i.e. all changes applied to the base 
table MUST be applied to the materialized view. Cassandra uses a two-stage 
batch log process for this: 


[jira] [Commented] (CASSANDRA-10243) Warn or fail when changing cluster topology live

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035369#comment-15035369
 ] 

Stefania commented on CASSANDRA-10243:
--

CI looks OK to me.

> Warn or fail when changing cluster topology live
> 
>
> Key: CASSANDRA-10243
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10243
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1, 3.2
>
>
> Moving a node from one rack to another in the snitch, while it is alive, is 
> almost always the wrong thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 ccb20ad46 -> f7aaea013


Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592
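
The overflow being fixed can be shown in isolation: once a buffer's capacity exceeds 2^30, doubling it wraps negative, and a negative size is then rejected with IllegalArgumentException at allocation time. The saturating variant below is an illustrative assumption about one way to guard the doubling, not the committed patch:

```java
public class DoublingSketch {
    // Naive capacity doubling: wraps to a negative int once capacity > 2^30.
    static int naiveDouble(int capacity) {
        return capacity * 2;
    }

    // Guarded doubling: widen to long before multiplying, then clamp at
    // Integer.MAX_VALUE so the result is always a valid allocation size.
    static int saturatingDouble(int capacity) {
        long doubled = 2L * capacity;
        return (int) Math.min(doubled, Integer.MAX_VALUE);
    }
}
```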


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a320737b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a320737b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a320737b

Branch: refs/heads/cassandra-3.0
Commit: a320737b18c19e3ec59035e5e487f2af1dcd0172
Parents: 2491ede
Author: Ariel Weisberg 
Authored: Tue Oct 27 12:19:14 2015 -0400
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:34:28 2015 -0500

--
 CHANGES.txt |   1 +
 .../io/util/BufferedDataOutputStreamPlus.java   |  20 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 6 files changed, 296 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7541212..cf73f57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * Fix RangeNamesQueryPager (CASSANDRA-10509)
  * Deprecate Pig support (CASSANDRA-10542)
  * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 5669a8d..d55db47 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -118,7 +118,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -142,11 +142,12 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 else
 {
 assert toWrite.isDirect();
-if (toWrite.remaining() > buffer.remaining())
+int toWriteRemaining = toWrite.remaining();
+if (toWriteRemaining > buffer.remaining())
 {
-doFlush();
+doFlush(toWriteRemaining);
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 while (hollowBuffer.hasRemaining())
 channel.write(hollowBuffer);
@@ -254,7 +255,10 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 write(buffer);
 }
 
-protected void doFlush() throws IOException
+/*
+ * Count is the number of bytes remaining to write ignoring already 
remaining capacity
+ */
+protected void doFlush(int count) throws IOException
 {
 buffer.flip();
 
@@ -267,13 +271,13 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 @Override
 public void flush() throws IOException
 {
-doFlush();
+doFlush(0);
 }
 
 @Override
 public void close() throws IOException
 {
-doFlush();
+doFlush(0);
 channel.close();
 FileUtils.clean(buffer);
 buffer = null;
@@ -282,7 +286,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 protected void ensureRemaining(int minimum) throws IOException
 {
 if (buffer.remaining() < minimum)
-doFlush();
+doFlush(minimum);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 

cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 2491ede35 -> a320737b1


Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a320737b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a320737b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a320737b

Branch: refs/heads/cassandra-2.2
Commit: a320737b18c19e3ec59035e5e487f2af1dcd0172
Parents: 2491ede
Author: Ariel Weisberg 
Authored: Tue Oct 27 12:19:14 2015 -0400
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:34:28 2015 -0500

--
 CHANGES.txt |   1 +
 .../io/util/BufferedDataOutputStreamPlus.java   |  20 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 6 files changed, 296 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7541212..cf73f57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * Fix RangeNamesQueryPager (CASSANDRA-10509)
  * Deprecate Pig support (CASSANDRA-10542)
  * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 5669a8d..d55db47 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -118,7 +118,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -142,11 +142,12 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 else
 {
 assert toWrite.isDirect();
-if (toWrite.remaining() > buffer.remaining())
+int toWriteRemaining = toWrite.remaining();
+if (toWriteRemaining > buffer.remaining())
 {
-doFlush();
+doFlush(toWriteRemaining);
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 while (hollowBuffer.hasRemaining())
 channel.write(hollowBuffer);
@@ -254,7 +255,10 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 write(buffer);
 }
 
-protected void doFlush() throws IOException
+/*
+ * Count is the number of bytes remaining to write ignoring already 
remaining capacity
+ */
+protected void doFlush(int count) throws IOException
 {
 buffer.flip();
 
@@ -267,13 +271,13 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 @Override
 public void flush() throws IOException
 {
-doFlush();
+doFlush(0);
 }
 
 @Override
 public void close() throws IOException
 {
-doFlush();
+doFlush(0);
 channel.close();
 FileUtils.clean(buffer);
 buffer = null;
@@ -282,7 +286,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 protected void ensureRemaining(int minimum) throws IOException
 {
 if (buffer.remaining() < minimum)
-doFlush();
+doFlush(minimum);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 

[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jake
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f785f8b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f785f8b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f785f8b5

Branch: refs/heads/cassandra-3.0
Commit: f785f8b5c1702a41e27b6217b2cf2dea8c316c19
Parents: ccb20ad a320737
Author: T Jake Luciani 
Authored: Tue Dec 1 22:39:39 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:39:39 2015 -0500

--

--




[jira] [Commented] (CASSANDRA-10799) 2 cqlshlib tests still failing with cythonized driver installation

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035270#comment-15035270
 ] 

Stefania commented on CASSANDRA-10799:
--

Both failing tests are caused by problems decoding blobs:

{code}
test_cqlsh: DEBUG: read "\x1b[0;1;31mFailed to format value 
'\\x00\\x01\\x02\\x03\\x04\\x05\\xff\\xfe\\xfd' : 'ascii' codec can't decode 
byte 0xff in position 6: ordinal not in range(128)\x1b[0m\r\n\x1b[0;1;31mFailed 
to format value '\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff' : 'ascii' codec 
can't decode byte 0xff in position 0: ordinal not in 
range(128)\x1b[0m\r\n\x1b[0;1;31m1 more decoding errors suppressed.\x1b[0m\r\n" 
from subproc
{code}

From past experience, I suspect that the cqlsh driver patch for converting 
blobs into byte arrays is failing with cythonized driver installations, cc 
[~aholmber] to confirm this.

If I am correct, other than running these tests with the driver not cythonized, 
is there anything else that can be done?

In fact, these tests should probably use the embedded driver (which is not 
cythonized) rather than the installed driver.
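
The class of error quoted above is easy to reproduce independently of cqlsh: bytes >= 0x80 are simply not valid ASCII, so a strict decoder rejects blob bytes such as 0xff. A small Java illustration of the same failure (an analogy to the Python 'ascii' codec, not the cqlsh code path):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class AsciiDecodeSketch {
    // Returns true only if every byte is valid US-ASCII; a strict decoder
    // (the default for newDecoder()) reports malformed input such as 0xff.
    static boolean decodesAsAscii(byte[] bytes) {
        try {
            StandardCharsets.US_ASCII.newDecoder().decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }
}
```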


> 2 cqlshlib tests still failing with cythonized driver installation
> --
>
> Key: CASSANDRA-10799
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10799
> Project: Cassandra
>  Issue Type: Test
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.2.x, 3.0.x
>
>
> We still have 2 cqlshlib tests failing on Jenkins:
> http://cassci.datastax.com/job/cassandra-3.0_cqlshlib/lastCompletedBuild/testReport/
> Locally, these tests only fail with a cythonized driver installation. If the 
> driver is not cythonized (installed with {{--no_extensions}}) then the tests 
> are fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035280#comment-15035280
 ] 

Michael Shuler commented on CASSANDRA-10798:


New 2.1 nodes will create 2.1-style uuid data directories. Nodes are autonomous 
- they have no idea that another node in the cluster was upgraded at some point.

> C* 2.1 doesn't create dir name with uuid if dir is already present
> --
>
> Key: CASSANDRA-10798
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: C* 2.1.11
>Reporter: MASSIMO CELLI
> Fix For: 2.1.x
>
>
> on C* 2.1.12, if you create a new table and a directory with the same name 
> already exists under the keyspace, then C* will simply use that directory 
> rather than creating a new one that has a uuid in the name.
> Even if you drop and recreate the same table, it will still use the previous 
> dir and never switch to a new one with a uuid. This can happen on one of the 
> nodes in the cluster while the other nodes use the uuid format for the 
> same table.
> For example I dropped and recreated the same table three times in this test 
> on a two nodes cluster
> node1
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable
> node2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
> mytable-678a7e31988511e58ce7cfa0aa9730a2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
> mytable-cade4ee1988411e58ce7cfa0aa9730a2
> drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
> mytable-db1b9b41988511e58ce7cfa0aa9730a2
> This seems to break the changes introduced by CASSANDRA-5202



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9302) Optimize cqlsh COPY FROM, part 3

2015-12-01 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035309#comment-15035309
 ] 

Stefania commented on CASSANDRA-9302:
-

Rebased and relaunched CI on 2.1, 2.2 and 3.0.

> Optimize cqlsh COPY FROM, part 3
> 
>
> Key: CASSANDRA-9302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9302
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x
>
>
> We've had some discussion moving to Spark CSV import for bulk load in 3.x, 
> but people need a good bulk load tool now.  One option is to add a separate 
> Java bulk load tool (CASSANDRA-9048), but if we can match that performance 
> from cqlsh I would prefer to leave COPY FROM as the preferred option to which 
> we point people, rather than adding more tools that need to be supported 
> indefinitely.
> Previous work on COPY FROM optimization was done in CASSANDRA-7405 and 
> CASSANDRA-8225.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7aaea01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7aaea01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7aaea01

Branch: refs/heads/cassandra-3.0
Commit: f7aaea013e98178064103d9b4cd39f66bad083f3
Parents: f785f8b
Author: Ariel Weisberg 
Authored: Tue Dec 1 12:33:46 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:43:03 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a01011b..1af2745 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Keep the file open in trySkipCache (CASSANDRA-10669)
  * Updated trigger example (CASSANDRA-10257)
 Merged from 2.2:
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
  * Show CQL help in cqlsh in web browser (CASSANDRA-7225)
  * Serialize on disk the proper SSTable compression ratio (CASSANDRA-10775)
  * Reject index queries while the index is building (CASSANDRA-8505)
@@ -90,6 +91,7 @@ Merged from 2.2:
  * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
  * Expose phi values from failure detector via JMX and tweak debug
and trace logging (CASSANDRA-9526)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Shutdown compaction in drain to prevent leak (CASSANDRA-10079)
  * (cqlsh) fix COPY using wrong variable name for time_format (CASSANDRA-10633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index bbec6f5..14f1ba7 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -156,7 +156,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 public FileMark mark()
 {
 if (!buffer.hasRemaining())
-doFlush();
+doFlush(0);
 return new CompressedFileWriterMark(chunkOffset, current(), 
buffer.position(), chunkCount + 1);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 9434219..54122ee 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -129,7 +129,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -154,8 +154,9 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 {
 assert toWrite.isDirect();
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
+int toWriteRemaining = toWrite.remaining();
 
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 if (strictFlushing)
 {
@@ -163,7 +164,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(toWriteRemaining - 

[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread jake
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c6f3256
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c6f3256
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c6f3256

Branch: refs/heads/cassandra-3.1
Commit: 4c6f32569dcb8d9851ca0c3976c1ae055b99e069
Parents: 5b6a368 f7aaea0
Author: T Jake Luciani 
Authored: Tue Dec 1 22:46:10 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:46:10 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c6f3256/CHANGES.txt
--



[3/5] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7aaea01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7aaea01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7aaea01

Branch: refs/heads/trunk
Commit: f7aaea013e98178064103d9b4cd39f66bad083f3
Parents: f785f8b
Author: Ariel Weisberg 
Authored: Tue Dec 1 12:33:46 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:43:03 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a01011b..1af2745 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Keep the file open in trySkipCache (CASSANDRA-10669)
  * Updated trigger example (CASSANDRA-10257)
 Merged from 2.2:
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
  * Show CQL help in cqlsh in web browser (CASSANDRA-7225)
  * Serialize on disk the proper SSTable compression ratio (CASSANDRA-10775)
  * Reject index queries while the index is building (CASSANDRA-8505)
@@ -90,6 +91,7 @@ Merged from 2.2:
  * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
  * Expose phi values from failure detector via JMX and tweak debug
and trace logging (CASSANDRA-9526)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Shutdown compaction in drain to prevent leak (CASSANDRA-10079)
  * (cqlsh) fix COPY using wrong variable name for time_format (CASSANDRA-10633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index bbec6f5..14f1ba7 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -156,7 +156,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 public FileMark mark()
 {
 if (!buffer.hasRemaining())
-doFlush();
+doFlush(0);
 return new CompressedFileWriterMark(chunkOffset, current(), 
buffer.position(), chunkCount + 1);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 9434219..54122ee 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -129,7 +129,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -154,8 +154,9 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 {
 assert toWrite.isDirect();
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
+int toWriteRemaining = toWrite.remaining();
 
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 if (strictFlushing)
 {
@@ -163,7 +164,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(toWriteRemaining - 

[1/5] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 03863ed24 -> 6bf1f75f5


Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a320737b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a320737b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a320737b

Branch: refs/heads/trunk
Commit: a320737b18c19e3ec59035e5e487f2af1dcd0172
Parents: 2491ede
Author: Ariel Weisberg 
Authored: Tue Oct 27 12:19:14 2015 -0400
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:34:28 2015 -0500

--
 CHANGES.txt |   1 +
 .../io/util/BufferedDataOutputStreamPlus.java   |  20 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 6 files changed, 296 insertions(+), 17 deletions(-)
--
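The overflow this commit fixes is easy to reproduce in isolation: doubling an `int` capacity wraps negative once the buffer passes 1 GiB, and `ByteBuffer.allocate` then rejects the negative size with an `IllegalArgumentException` (the trace in CASSANDRA-10592). A minimal sketch, with illustrative names rather than the actual `DataOutputBuffer` code:

```java
// Sketch of the integer-overflow hazard in capacity doubling, plus a
// saturating variant similar in spirit to the fix. GrowDemo is an
// illustrative name, not a Cassandra class.
class GrowDemo
{
    // Naive doubling: wraps negative for capacities above Integer.MAX_VALUE / 2.
    static int naiveGrow(int capacity)
    {
        return capacity * 2;
    }

    // Saturating doubling: compute in long, clamp at Integer.MAX_VALUE.
    static int saturatingGrow(int capacity)
    {
        return (int) Math.min((long) capacity * 2, Integer.MAX_VALUE);
    }

    public static void main(String[] args)
    {
        int oneGiB = 1 << 30;
        System.out.println(naiveGrow(oneGiB));      // negative: overflowed
        System.out.println(saturatingGrow(oneGiB)); // clamped, still positive
        // java.nio.ByteBuffer.allocate(naiveGrow(oneGiB)) would throw
        // IllegalArgumentException because the requested size is negative.
    }
}
```

The saturating version is only a sketch of the general technique; the committed patch should be consulted for how the real reallocation path handles the upper bound.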


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7541212..cf73f57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * Fix RangeNamesQueryPager (CASSANDRA-10509)
  * Deprecate Pig support (CASSANDRA-10542)
  * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 5669a8d..d55db47 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -118,7 +118,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -142,11 +142,12 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 else
 {
 assert toWrite.isDirect();
-if (toWrite.remaining() > buffer.remaining())
+int toWriteRemaining = toWrite.remaining();
+if (toWriteRemaining > buffer.remaining())
 {
-doFlush();
+doFlush(toWriteRemaining);
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 while (hollowBuffer.hasRemaining())
 channel.write(hollowBuffer);
@@ -254,7 +255,10 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 write(buffer);
 }
 
-protected void doFlush() throws IOException
+/*
+ * Count is the number of bytes remaining to write ignoring already 
remaining capacity
+ */
+protected void doFlush(int count) throws IOException
 {
 buffer.flip();
 
@@ -267,13 +271,13 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 @Override
 public void flush() throws IOException
 {
-doFlush();
+doFlush(0);
 }
 
 @Override
 public void close() throws IOException
 {
-doFlush();
+doFlush(0);
 channel.close();
 FileUtils.clean(buffer);
 buffer = null;
@@ -282,7 +286,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 protected void ensureRemaining(int minimum) throws IOException
 {
 if (buffer.remaining() < minimum)
-doFlush();
+doFlush(minimum);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 

[5/5] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-12-01 Thread jake
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bf1f75f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bf1f75f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bf1f75f

Branch: refs/heads/trunk
Commit: 6bf1f75f531825d14729f5561dca0b56983ebdeb
Parents: 03863ed 4c6f325
Author: T Jake Luciani 
Authored: Tue Dec 1 22:46:45 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:46:45 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bf1f75f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bf1f75f/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--



[1/4] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.1 5b6a368c9 -> 4c6f32569


Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a320737b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a320737b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a320737b

Branch: refs/heads/cassandra-3.1
Commit: a320737b18c19e3ec59035e5e487f2af1dcd0172
Parents: 2491ede
Author: Ariel Weisberg 
Authored: Tue Oct 27 12:19:14 2015 -0400
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:34:28 2015 -0500

--
 CHANGES.txt |   1 +
 .../io/util/BufferedDataOutputStreamPlus.java   |  20 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 6 files changed, 296 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7541212..cf73f57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
  * Fix RangeNamesQueryPager (CASSANDRA-10509)
  * Deprecate Pig support (CASSANDRA-10542)
  * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 5669a8d..d55db47 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -118,7 +118,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -142,11 +142,12 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 else
 {
 assert toWrite.isDirect();
-if (toWrite.remaining() > buffer.remaining())
+int toWriteRemaining = toWrite.remaining();
+if (toWriteRemaining > buffer.remaining())
 {
-doFlush();
+doFlush(toWriteRemaining);
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 while (hollowBuffer.hasRemaining())
 channel.write(hollowBuffer);
@@ -254,7 +255,10 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 write(buffer);
 }
 
-protected void doFlush() throws IOException
+/*
+ * Count is the number of bytes remaining to write ignoring already 
remaining capacity
+ */
+protected void doFlush(int count) throws IOException
 {
 buffer.flip();
 
@@ -267,13 +271,13 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 @Override
 public void flush() throws IOException
 {
-doFlush();
+doFlush(0);
 }
 
 @Override
 public void close() throws IOException
 {
-doFlush();
+doFlush(0);
 channel.close();
 FileUtils.clean(buffer);
 buffer = null;
@@ -282,7 +286,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 protected void ensureRemaining(int minimum) throws IOException
 {
 if (buffer.remaining() < minimum)
-doFlush();
+doFlush(minimum);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a320737b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 

[2/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jake
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f785f8b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f785f8b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f785f8b5

Branch: refs/heads/cassandra-3.1
Commit: f785f8b5c1702a41e27b6217b2cf2dea8c316c19
Parents: ccb20ad a320737
Author: T Jake Luciani 
Authored: Tue Dec 1 22:39:39 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:39:39 2015 -0500

--

--




[3/4] cassandra git commit: Fix integer overflow in DataOutputBuffer doubling and test as best as possible given that allocating 2 gigs in a unit test is problematic.

2015-12-01 Thread jake
Fix integer overflow in DataOutputBuffer doubling and test as best as possible 
given that allocating 2 gigs in a unit test is problematic.

Patch by Ariel Weisberg; reviewed by tjake for CASSANDRA-10592


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7aaea01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7aaea01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7aaea01

Branch: refs/heads/cassandra-3.1
Commit: f7aaea013e98178064103d9b4cd39f66bad083f3
Parents: f785f8b
Author: Ariel Weisberg 
Authored: Tue Dec 1 12:33:46 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:43:03 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a01011b..1af2745 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Keep the file open in trySkipCache (CASSANDRA-10669)
  * Updated trigger example (CASSANDRA-10257)
 Merged from 2.2:
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
  * Show CQL help in cqlsh in web browser (CASSANDRA-7225)
  * Serialize on disk the proper SSTable compression ratio (CASSANDRA-10775)
  * Reject index queries while the index is building (CASSANDRA-8505)
@@ -90,6 +91,7 @@ Merged from 2.2:
  * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
  * Expose phi values from failure detector via JMX and tweak debug
and trace logging (CASSANDRA-9526)
+ * Fix IllegalArgumentException in DataOutputBuffer.reallocate for large 
buffers (CASSANDRA-10592)
 Merged from 2.1:
  * Shutdown compaction in drain to prevent leak (CASSANDRA-10079)
  * (cqlsh) fix COPY using wrong variable name for time_format (CASSANDRA-10633)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java 
b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
index bbec6f5..14f1ba7 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressedSequentialWriter.java
@@ -156,7 +156,7 @@ public class CompressedSequentialWriter extends 
SequentialWriter
 public FileMark mark()
 {
 if (!buffer.hasRemaining())
-doFlush();
+doFlush(0);
 return new CompressedFileWriterMark(chunkOffset, current(), 
buffer.position(), chunkCount + 1);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7aaea01/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
index 9434219..54122ee 100644
--- a/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.java
@@ -129,7 +129,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(len - copied);
 }
 }
 }
@@ -154,8 +154,9 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 {
 assert toWrite.isDirect();
 MemoryUtil.duplicateDirectByteBuffer(toWrite, hollowBuffer);
+int toWriteRemaining = toWrite.remaining();
 
-if (toWrite.remaining() > buffer.remaining())
+if (toWriteRemaining > buffer.remaining())
 {
 if (strictFlushing)
 {
@@ -163,7 +164,7 @@ public class BufferedDataOutputStreamPlus extends 
DataOutputStreamPlus
 }
 else
 {
-doFlush();
+doFlush(toWriteRemaining - 

[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread jake
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c6f3256
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c6f3256
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c6f3256

Branch: refs/heads/trunk
Commit: 4c6f32569dcb8d9851ca0c3976c1ae055b99e069
Parents: 5b6a368 f7aaea0
Author: T Jake Luciani 
Authored: Tue Dec 1 22:46:10 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:46:10 2015 -0500

--
 CHANGES.txt |   2 +
 .../io/compress/CompressedSequentialWriter.java |   2 +-
 .../io/util/BufferedDataOutputStreamPlus.java   |  22 +-
 .../cassandra/io/util/DataOutputBuffer.java |  78 ++-
 .../io/util/DataOutputBufferFixed.java  |   2 +-
 .../cassandra/io/util/SafeMemoryWriter.java |  10 +-
 .../cassandra/io/util/SequentialWriter.java |   4 +-
 .../cassandra/io/util/DataOutputTest.java   | 202 +++
 8 files changed, 301 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c6f3256/CHANGES.txt
--



[2/5] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jake
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f785f8b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f785f8b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f785f8b5

Branch: refs/heads/trunk
Commit: f785f8b5c1702a41e27b6217b2cf2dea8c316c19
Parents: ccb20ad a320737
Author: T Jake Luciani 
Authored: Tue Dec 1 22:39:39 2015 -0500
Committer: T Jake Luciani 
Committed: Tue Dec 1 22:39:39 2015 -0500

--

--




[jira] [Commented] (CASSANDRA-8639) Can OOM on CL replay with dense mutations

2015-12-01 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035228#comment-15035228
 ] 

T Jake Luciani commented on CASSANDRA-8639:
---

{code}
 public static long MAX_OUTSTANDING_REPLAY_BYTES = 
Long.getLong("cassandra.commitlog_max_outstanding_replay_bytes", 1024 * 64);
{code}

64k is a pretty small limit; I would make it 64MB

bq. Are you talking about CommitLogReplayer.blockForWrites() which is invoked 
from CommitLog.recover()? I think that is already covered.

You are right.

bq. Does clearing the data before replay and then checking for it afterwards 
accomplish that? Unless you think that clearUnsafe() might not work

Yes, that's what I'd like to verify: that the clear actually works. Just a nit.

If you address those and squash + rebase I'll commit.  
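The byte-based cap discussed here, whatever its final default, can be sketched as a simple throttle: replay submits block while the serialized size of in-flight mutations exceeds the cap. The class and method names below are invented for illustration and are not the actual patch:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical byte-limited throttle for commit log replay: callers
// block in acquire() while admitting the next mutation would push the
// total in-flight bytes over the cap. Not the actual Cassandra code.
class ReplayThrottle
{
    private final long maxOutstandingBytes;
    private long outstanding = 0;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition hasRoom = lock.newCondition();

    ReplayThrottle(long maxOutstandingBytes)
    {
        this.maxOutstandingBytes = maxOutstandingBytes;
    }

    long outstandingBytes()
    {
        lock.lock();
        try { return outstanding; }
        finally { lock.unlock(); }
    }

    // Block until this mutation fits under the cap; always admit a
    // single mutation larger than the cap when nothing is in flight.
    void acquire(long mutationBytes)
    {
        lock.lock();
        try
        {
            while (outstanding > 0 && outstanding + mutationBytes > maxOutstandingBytes)
                hasRoom.awaitUninterruptibly();
            outstanding += mutationBytes;
        }
        finally { lock.unlock(); }
    }

    // Called when a replayed mutation completes.
    void release(long mutationBytes)
    {
        lock.lock();
        try
        {
            outstanding -= mutationBytes;
            hasRoom.signalAll();
        }
        finally { lock.unlock(); }
    }
}
```

In the scheme the ticket describes, each mutation's cost would be measured by something like `cell.unsharedHeapSize()` rather than a fixed count of mutations in flight.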



> Can OOM on CL replay with dense mutations
> -
>
> Key: CASSANDRA-8639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8639
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.1.x
>
>
> If you write dense mutations with many clustering keys, the replay of the CL 
> can quickly overwhelm a node on startup.  This looks to be caused by the fact 
> we only ensure there are 1000 mutations in flight at a time. but those 
> mutations could have thousands of cells in them.
> A better approach would be to limit the CL replay to the amount of memory in 
> flight using cell.unsharedHeapSize()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10799) 2 cqlshlib tests still failing with cythonized driver installation

2015-12-01 Thread Stefania (JIRA)
Stefania created CASSANDRA-10799:


 Summary: 2 cqlshlib tests still failing with cythonized driver 
installation
 Key: CASSANDRA-10799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10799
 Project: Cassandra
  Issue Type: Test
Reporter: Stefania
Assignee: Stefania
 Fix For: 2.2.x, 3.0.x


We still have 2 cqlshlib tests failing on Jenkins:

http://cassci.datastax.com/job/cassandra-3.0_cqlshlib/lastCompletedBuild/testReport/

Locally, these tests only fail with a cythonized driver installation. If the 
driver is not cythonized (installed with {{--no_extensions}}) then the tests 
are fine.





[jira] [Commented] (CASSANDRA-9510) assassinating an unknown endpoint could npe

2015-12-01 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035266#comment-15035266
 ] 

Dave Brosius commented on CASSANDRA-9510:
-

[~JoshuaMcKenzie] go ahead. thanks!

> assassinating an unknown endpoint could npe
> ---
>
> Key: CASSANDRA-9510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.0.1, 3.1
>
> Attachments: assissinate_unknown.txt
>
>
> If the code assissinates an unknown endpoint, it doesn't generate a 'tokens' 
> collection, which then does
> epState.addApplicationState(ApplicationState.STATUS, 
> StorageService.instance.valueFactory.left(tokens, computeExpireTime()));
> and left(null, time); will npe





[jira] [Updated] (CASSANDRA-10592) IllegalArgumentException in DataOutputBuffer.reallocate

2015-12-01 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10592:
---
Fix Version/s: (was: 2.2.x)
   2.2.4

> IllegalArgumentException in DataOutputBuffer.reallocate
> ---
>
> Key: CASSANDRA-10592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10592
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths, Streaming and 
> Messaging
>Reporter: Sebastian Estevez
>Assignee: Ariel Weisberg
> Fix For: 2.2.4, 3.0.1, 3.1
>
>
> CORRECTION-
> It turns out the exception occurs when running a read using a thrift jdbc 
> driver. Once you have loaded the data with stress below, run 
> SELECT * FROM "autogeneratedtest"."transaction_by_retailer" using this tool - 
> http://www.aquafold.com/aquadatastudio_downloads.html
>  
> The exception:
> {code}
> WARN  [SharedPool-Worker-1] 2015-10-22 12:58:20,792 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.RuntimeException: java.lang.IllegalArgumentException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2366)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> Caused by: java.lang.IllegalArgumentException: null
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:63)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:57)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:263)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:183)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:381)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1697)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2362)
>  ~[main/:na]
>   ... 4 common frames omitted
> {code}
> I was running this command:
> {code}
> tools/bin/cassandra-stress user 
> profile=~/Desktop/startup/stress/stress.yaml n=10 ops\(insert=1\) -rate 
> threads=30
> {code}
> Here's the stress.yaml UPDATED!
> {code}
> ### DML ### THIS IS UNDER CONSTRUCTION!!!
> # Keyspace Name
> keyspace: autogeneratedtest
> # The CQL for creating a keyspace (optional if it already exists)
> keyspace_definition: |
>   

[jira] [Commented] (CASSANDRA-10798) C* 2.1 doesn't create dir name with uuid if dir is already present

2015-12-01 Thread MASSIMO CELLI (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035255#comment-15035255
 ] 

MASSIMO CELLI commented on CASSANDRA-10798:
---

[~mshuler] thanks for your quick comments on this. Can you also tell me what 
the expected behaviour is if a new node is bootstrapped into a cluster 
that has been upgraded from 2.0 to 2.1 (and the dir names are without uuid)? 
Should the new node use uuids in the dir names or follow the 2.0 naming style? 

> C* 2.1 doesn't create dir name with uuid if dir is already present
> --
>
> Key: CASSANDRA-10798
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10798
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: C* 2.1.11
>Reporter: MASSIMO CELLI
> Fix For: 2.1.x
>
>
> On C* 2.1.12, if you create a new table and a directory with the same name 
> already exists under the keyspace, then C* will simply use that directory 
> rather than creating a new one with a uuid in the name.
> Even if you drop and recreate the same table, it will still use the previous 
> dir and never switch to a new one with a uuid. This can happen on one of the 
> nodes in the cluster while the other nodes use the uuid format for the 
> same table.
> For example I dropped and recreated the same table three times in this test 
> on a two nodes cluster
> node1
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 mytable
> node2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:47 
> mytable-678a7e31988511e58ce7cfa0aa9730a2
> drwxr-xr-x 3 cassandra cassandra 4096 Dec  1 23:41 
> mytable-cade4ee1988411e58ce7cfa0aa9730a2
> drwxr-xr-x 2 cassandra cassandra 4096 Dec  1 23:47 
> mytable-db1b9b41988511e58ce7cfa0aa9730a2
> This seems to break the changes introduced by CASSANDRA-5202





[Cassandra Wiki] Update of "WritePathForUsers" by MichaelEdge

2015-12-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "WritePathForUsers" page has been changed by MichaelEdge:
https://wiki.apache.org/cassandra/WritePathForUsers?action=diff=29=30

  = Cassandra Write Path =
  This section provides an overview of the Cassandra Write Path for users of 
Cassandra. Cassandra developers, who work on the Cassandra source code, should 
refer to the [[ArchitectureInternals|Architecture Internals]] developer 
documentation for a more detailed overview.
  
- {{attachment:CassandraWritePath.png|text describing image|width=800}}
+ {{attachment:CassandraWritePath.png|Cassandra Write Path|width=800}}
  
  == The Local Coordinator ==
  The local coordinator receives the write request from the client and performs 
the following:
1. Firstly, the local coordinator determines which nodes are responsible 
for storing the data:
- * The first replica is chosen based on hashing the primary key using the 
Partitioner; the Murmur3Partitioner is the default.
+ * The first replica is chosen based on hashing the primary key using the 
Partitioner; Murmur3Partitioner is the default.
- * Other replicas are chosen based on the replication strategy defined for 
the keyspace. In a production cluster this is most likely the 
NetworkTopologyStrategy.
+ * Other replicas are chosen based on the replication strategy defined for the 
keyspace. In a production cluster this is most likely the 
NetworkTopologyStrategy.
+   1. The local coordinator determines whether the write request would modify 
an associated materialized view. 
+ === If write request modifies materialized view ===
+ When using materialized views it’s important to ensure that the base table 
and materialized view are consistent, i.e. all changes applied to the base 
table MUST be applied to the materialized view. Cassandra uses a two-stage 
batch log process for this: 
+  * one batch log on the local coordinator ensuring that an update is made on 
the base table to a Quorum of replica nodes
+  * one batch log on each replica node ensuring the update is made to the 
corresponding materialized view.
+ The process on the local coordinator looks as follows:
+   1. Create batch log. To ensure consistency, the batch log ensures that 
changes are applied to a Quorum of replica nodes, regardless of the 
consistency level of the write request. Acknowledgement to the client is still 
based on the write request consistency level.
1. The write request is then sent to all replica nodes simultaneously.
+ === If write request does not modify materialized view ===
+   1. The write request is then sent to all replica nodes simultaneously.
-   1. The total number of nodes receiving the write request is determined by 
the replication factor for the keyspace.
+ In both cases the total number of nodes receiving the write request is 
determined by the replication factor for the keyspace.
  
  == Replica Nodes ==
  Replica nodes receive the write request from the local coordinator and 
perform the following:
@@ -21, +30 @@

   1. If row caching is used, invalidate the cache for that row. Row cache is 
populated on read only, so it must be invalidated when data for that row is 
written.
   1. Acknowledge the write request back to the local coordinator.
  The local coordinator waits for the appropriate number of acknowledgements 
from the replica nodes (dependent on the consistency level for this write 
request) before acknowledging back to the client.
+ === If write request modifies materialized view ===
+ Keeping a materialized view in sync with its base table adds more complexity 
to the write path and also incurs performance overheads on the replica node in 
the form of read-before-write, locks and batch logs.
+  1. The replica node acquires a lock on the partition, to ensure that write 
requests are serialised and applied to base table and materialized views in 
order.
+  1. The replica node reads the partition data and constructs the set of 
deltas to be applied to the materialized view. One insert/update/delete to the 
base table may result in many inserts/updates/deletes to the associated 
materialized view.
+  1. Write data to the Commit Log. 
+  1. Create batch log containing updates to the materialized view. The batch 
log ensures the set of updates to the materialized view is atomic, and is part 
of the mechanism that ensures base table and materialized view are kept 
consistent. 
+  1. Store the batch log containing the materialized view updates on the local 
replica node.
+  1. Send materialized view updates asynchronously to the materialized view 
replica (note, the materialized view could be stored on the same or a different 
replica node to the base table).
+  1. Write data to the MemTable.
+  1. The materialized view replica node will apply the update and return an 
acknowledgement to the base table replica node.
+  1. The same process takes place on each replica 

[jira] [Reopened] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-12-01 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reopened CASSANDRA-10122:


Looks like this is still happening, at least on 2.1->3.0 upgrades. I suspect 
it's probably also happening on 2.2->3.0 upgrades and will confirm shortly.

to repro on 2.1->3.0 using cassandra-dtest:
{noformat}
export UPGRADE_PATH=2_1:3_0
nosetests -xvs 
upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_1_HEAD_to_cassandra_3_0_HEAD.rolling_upgrade_test
{noformat}

Here's how it looks in the logs when repro'd today:
{noformat}
ERROR [SharedPool-Worker-5] 2015-12-01 11:31:30,067 Message.java:611 - 
Unexpected exception during request; channel = [id: 0xa7623e8b, 
/127.0.0.1:48657 => /127.0.0.1:9042]
java.lang.AssertionError: null
at 
org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1188)
 ~[main/:na]
at 
org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1135)
 ~[main/:na]
at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
~[main/:na]
at 
org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
 ~[main/:na]
at 
org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:595)
 ~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:744) 
~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:687) 
~[main/:na]
at 
org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:670)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:112)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
 ~[main/:na]
at 
org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:332)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
 ~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
~[main/:na]
at 
org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
~[main/:na]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
 ~[main/:na]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
 ~[main/:na]
at 
org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:302)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:338)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:214)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
 ~[main/:na]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_66]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 

[jira] [Commented] (CASSANDRA-10122) AssertionError after upgrade to 3.0

2015-12-01 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034328#comment-15034328
 ] 

Russ Hatch commented on CASSANDRA-10122:


Confirmed happening on 2.2 upgrading to 3.0 as well.

> AssertionError after upgrade to 3.0
> ---
>
> Key: CASSANDRA-10122
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10122
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>Assignee: Sylvain Lebresne
> Fix For: 3.0.1, 3.1
>
> Attachments: node1.log, node2.log, node3.log
>
>
> Upgrade tests are encountering this exception after upgrade from 2.2 HEAD to 
> 3.0 HEAD:
> {noformat}
> ERROR [SharedPool-Worker-4] 2015-08-18 12:33:57,858 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xa5ba2c7a, 
> /127.0.0.1:55048 => /127.0.0.1:9042]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:520)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:583)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:733)
>  ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:676) 
> ~[main/:na]
> at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:659)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:323)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1599)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1554) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1501) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1420) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:457)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:202)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:470)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:447)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at 

[jira] [Commented] (CASSANDRA-10718) Group pending compactions based on table

2015-12-01 Thread Tushar Agrawal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034335#comment-15034335
 ] 

Tushar Agrawal commented on CASSANDRA-10718:


Need some help and guidance:

The PendingTasks metric comes from the "pendingTasks" gauge in 
CompactionMetrics.java. Its getValue() method currently loops through all 
keyspaces and tables to get the estimated remaining tasks.

Shall I change the getValue() method (and its call hierarchy) to return a 
Map> ? How would this be displayed under JMX metrics?

What could be the other approaches?
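Whatever shape the gauge ends up taking over JMX, the rendering side of the proposal is simple. A minimal sketch, assuming pending counts keyed by (keyspace, table) tuples (format_pending and its input shape are hypothetical, not nodetool's actual code):

```python
def format_pending(pending):
    """Render per-table pending compaction counts in the nodetool
    compactionstats style proposed in the ticket description.

    pending: dict mapping (keyspace, table) -> estimated pending tasks.
    """
    lines = ["pending tasks:"]
    # Sort for stable, keyspace-grouped output.
    for (keyspace, table), count in sorted(pending.items()):
        lines.append("- %s.%s: %d" % (keyspace, table, count))
    return "\n".join(lines)
```

For example, format_pending({("keyspace1", "standard1"): 10, ("other_ks", "table"): 2}) reproduces the example output from the ticket description.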



> Group pending compactions based on table
> 
>
> Key: CASSANDRA-10718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10718
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Tushar Agrawal
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Currently we only give a global number of pending compactions; we should 
> group this on a per-table basis, for example:
> {code}
> $ nodetool compactionstats
> pending tasks:
> - keyspace1.standard1: 10
> - other_ks.table: 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10768) Optimize the way we check if a token is repaired in anticompaction

2015-12-01 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034333#comment-15034333
 ] 

Ariel Weisberg commented on CASSANDRA-10768:


+1

> Optimize the way we check if a token is repaired in anticompaction
> --
>
> Key: CASSANDRA-10768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10768
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> When we anticompact, we check each token to see whether it is within a repaired 
> range. This is very inefficient with many tokens, as we do a linear search 
> instead of sorting the ranges and doing a binary search (or even just keeping 
> track of the next right-boundary and checking against that to avoid 2 
> comparisons)
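The sorted-ranges approach described above can be sketched as follows. This is a minimal illustration assuming non-overlapping, half-open [left, right) ranges; Cassandra's actual Range semantics and token types differ, and build_index/is_repaired are invented names:

```python
import bisect

def build_index(ranges):
    """Sort the non-overlapping [left, right) repaired ranges once, up front."""
    ordered = sorted(ranges)
    lefts = [left for left, _ in ordered]
    return lefts, ordered

def is_repaired(index, token):
    """Binary-search for the only candidate range instead of scanning all of
    them: find the rightmost range whose left boundary is <= token, then
    check the token against that range's right boundary."""
    lefts, ordered = index
    i = bisect.bisect_right(lefts, token) - 1
    return i >= 0 and token < ordered[i][1]
```

This turns the per-token cost from O(number of ranges) into O(log(number of ranges)); a streaming variant over tokens in sorted order could instead track the next right boundary and do a single comparison per token, as the description suggests.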





[01/13] cassandra git commit: Fix completion problems in cqlsh

2015-12-01 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 8738087ed -> 1b81ad19d
  refs/heads/cassandra-3.0 3864b2114 -> 803a3d901
  refs/heads/cassandra-3.1 60aeef3d6 -> 6bda8868c
  refs/heads/trunk 7c3e0b191 -> 5daf8d020


Fix completion problems in cqlsh

Patch by stefania; reviewed by pmotta for CASSANDRA-10753


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b81ad19
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b81ad19
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b81ad19

Branch: refs/heads/cassandra-2.2
Commit: 1b81ad19d33710bfa1724262f76cd3cd8114b162
Parents: 8738087
Author: Stefania 
Authored: Tue Dec 1 13:52:09 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:52:09 2015 -0500

--
 bin/cqlsh.py|   2 +-
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-c535450.zip | Bin 233938 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  11 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 6 files changed, 25 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 027a45e..a5a2bfa 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -2330,7 +2330,7 @@ class ImportProcess(mp.Process):
 table_meta = new_cluster.metadata.keyspaces[self.ks].tables[self.cf]
 
 pk_cols = [col.name for col in table_meta.primary_key]
-cqltypes = [table_meta.columns[name].typestring for name in 
self.columns]
+cqltypes = [table_meta.columns[name].cql_type for name in self.columns]
 pk_indexes = [self.columns.index(col.name) for col in 
table_meta.primary_key]
 is_counter_table = ("counter" in cqltypes)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip 
b/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
deleted file mode 100644
index 9c75cd6..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index aed7d01..4c21f7a 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -767,21 +767,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, 
layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -830,16 +829,16 @@ def insert_newval_completer(ctxt, cass):
 if 

[02/13] cassandra git commit: Fix completion problems in cqlsh

2015-12-01 Thread jmckenzie
Fix completion problems in cqlsh

Patch by stefania; reviewed by pmotta for CASSANDRA-10753


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b81ad19
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b81ad19
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b81ad19

Branch: refs/heads/trunk
Commit: 1b81ad19d33710bfa1724262f76cd3cd8114b162
Parents: 8738087
Author: Stefania 
Authored: Tue Dec 1 13:52:09 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:52:09 2015 -0500

--
 bin/cqlsh.py|   2 +-
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-c535450.zip | Bin 233938 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  11 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 6 files changed, 25 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 027a45e..a5a2bfa 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -2330,7 +2330,7 @@ class ImportProcess(mp.Process):
 table_meta = new_cluster.metadata.keyspaces[self.ks].tables[self.cf]
 
 pk_cols = [col.name for col in table_meta.primary_key]
-cqltypes = [table_meta.columns[name].typestring for name in 
self.columns]
+cqltypes = [table_meta.columns[name].cql_type for name in self.columns]
 pk_indexes = [self.columns.index(col.name) for col in 
table_meta.primary_key]
 is_counter_table = ("counter" in cqltypes)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip 
b/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
deleted file mode 100644
index 9c75cd6..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index aed7d01..4c21f7a 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -767,21 +767,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, 
layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -830,16 +829,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in ('map', 

[09/13] cassandra git commit: 10753-3.0 patch

2015-12-01 Thread jmckenzie
10753-3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/803a3d90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/803a3d90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/803a3d90

Branch: refs/heads/trunk
Commit: 803a3d901141dcef4bcfce78b568300b283713e4
Parents: 25a1e89
Author: Stefania 
Authored: Tue Dec 1 13:56:03 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:56:03 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip 
b/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
deleted file mode 100644
index b9afb58..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 9ba4122..25cf427 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -787,21 +787,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, 
layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -850,16 +849,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in ('map', 'set'):
 return ['{']
 if coltype == 'list':
 return ['[']
 if coltype == 'boolean':
 return ['true', 'false']
+
 return [Hint('' % (maybe_escape_name(curcol),
-  cqltype.cql_parameterized_type()))]
+  coltype))]
 
 
 @completer_for('insertStatement', 'valcomma')
@@ -919,29 +918,28 @@ def update_col_completer(ctxt, cass):
 def update_countername_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
 curcol = dequote_name(ctxt.get_binding('updatecol', ''))
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype == 'counter':
 return [maybe_escape_name(curcol)]
 if coltype in ('map', 'set'):
 return ["{"]
 if coltype == 'list':
 return ["["]
-

[08/13] cassandra git commit: 10753-3.0 patch

2015-12-01 Thread jmckenzie
10753-3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/803a3d90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/803a3d90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/803a3d90

Branch: refs/heads/cassandra-3.1
Commit: 803a3d901141dcef4bcfce78b568300b283713e4
Parents: 25a1e89
Author: Stefania 
Authored: Tue Dec 1 13:56:03 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:56:03 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip 
b/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
deleted file mode 100644
index b9afb58..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 9ba4122..25cf427 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -787,21 +787,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, 
layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -850,16 +849,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in ('map', 'set'):
 return ['{']
 if coltype == 'list':
 return ['[']
 if coltype == 'boolean':
 return ['true', 'false']
+
 return [Hint('' % (maybe_escape_name(curcol),
-  cqltype.cql_parameterized_type()))]
+  coltype))]
 
 
 @completer_for('insertStatement', 'valcomma')
@@ -919,29 +918,28 @@ def update_col_completer(ctxt, cass):
 def update_countername_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
 curcol = dequote_name(ctxt.get_binding('updatecol', ''))
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype == 'counter':
 return [maybe_escape_name(curcol)]
 if coltype in ('map', 'set'):
 return ["{"]
 if coltype == 'list':
 return 

[03/13] cassandra git commit: Fix completion problems in cqlsh

2015-12-01 Thread jmckenzie
Fix completion problems in cqlsh

Patch by stefania; reviewed by pmotta for CASSANDRA-10753


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b81ad19
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b81ad19
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b81ad19

Branch: refs/heads/cassandra-3.0
Commit: 1b81ad19d33710bfa1724262f76cd3cd8114b162
Parents: 8738087
Author: Stefania 
Authored: Tue Dec 1 13:52:09 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:52:09 2015 -0500

--
 bin/cqlsh.py|   2 +-
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-c535450.zip | Bin 233938 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  11 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 6 files changed, 25 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 027a45e..a5a2bfa 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -2330,7 +2330,7 @@ class ImportProcess(mp.Process):
 table_meta = new_cluster.metadata.keyspaces[self.ks].tables[self.cf]
 
 pk_cols = [col.name for col in table_meta.primary_key]
-cqltypes = [table_meta.columns[name].typestring for name in 
self.columns]
+cqltypes = [table_meta.columns[name].cql_type for name in self.columns]
 pk_indexes = [self.columns.index(col.name) for col in 
table_meta.primary_key]
 is_counter_table = ("counter" in cqltypes)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip 
b/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
deleted file mode 100644
index 9c75cd6..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index aed7d01..4c21f7a 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -767,21 +767,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, 
layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in 
already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -830,16 +829,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in 

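The completion change above replaces the hard-coded first-column seed with a uniform rule: offer a key column only when every earlier column of the same key is already filtered on. A standalone sketch of that rule (a hypothetical helper for illustration, not the cqlshlib code):

```python
def filterable_columns(partition_key, clustering_key, already_filtered_on):
    """Offer each key column only when all preceding columns of the
    same key already appear in the WHERE clause (mirrors the shape of
    the patched select_relation_lhs_completer loop)."""
    filterable = set()
    for key in (partition_key, clustering_key):
        for num, name in enumerate(key):
            if num == 0 or key[num - 1] in already_filtered_on:
                filterable.add(name)
            else:
                break  # a gap in the key: later columns are not filterable yet
    return filterable

# The first column of each key is always offered; the second partition-key
# column appears only once the first is used in a relation.
assert filterable_columns(["pk1", "pk2"], ["ck1"], set()) == {"pk1", "ck1"}
assert filterable_columns(["pk1", "pk2"], ["ck1"], {"pk1"}) == {"pk1", "pk2", "ck1"}
```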
[11/13] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bda8868
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bda8868
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bda8868

Branch: refs/heads/cassandra-3.1
Commit: 6bda8868caedc831767e7de398e9d3be53142a11
Parents: 60aeef3 803a3d9
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:56:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:56:25 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--




[10/13] cassandra git commit: 10753-3.0 patch

2015-12-01 Thread jmckenzie
10753-3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/803a3d90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/803a3d90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/803a3d90

Branch: refs/heads/cassandra-3.0
Commit: 803a3d901141dcef4bcfce78b568300b283713e4
Parents: 25a1e89
Author: Stefania 
Authored: Tue Dec 1 13:56:03 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:56:03 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip b/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip
deleted file mode 100644
index b9afb58..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/803a3d90/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 9ba4122..25cf427 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -787,21 +787,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -850,16 +849,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in ('map', 'set'):
 return ['{']
 if coltype == 'list':
 return ['[']
 if coltype == 'boolean':
 return ['true', 'false']
+
 return [Hint('<value for %s (%s)>' % (maybe_escape_name(curcol),
-  cqltype.cql_parameterized_type()))]
+  coltype))]
 
 
 @completer_for('insertStatement', 'valcomma')
@@ -919,29 +918,28 @@ def update_col_completer(ctxt, cass):
 def update_countername_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
 curcol = dequote_name(ctxt.get_binding('updatecol', ''))
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype == 'counter':
 return [maybe_escape_name(curcol)]
 if coltype in ('map', 'set'):
 return ["{"]
 if coltype == 'list':
 return 

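The insert-completion hunk above dispatches on the column's CQL type name to pick the opening token offered to the user; with the driver upgrade, that name now comes from `cql_type` instead of `data_type.typename`. A minimal illustrative sketch of the dispatch (hypothetical helper, not the cqlshlib function itself):

```python
def opening_token(cql_type):
    """Pick the completion offered for a column value, keyed on the
    CQL type name (mirrors the shape of insert_newval_completer)."""
    if cql_type in ('map', 'set'):
        return '{'
    if cql_type == 'list':
        return '['
    if cql_type == 'boolean':
        return 'true'  # cqlsh actually offers both 'true' and 'false'
    return None  # scalar types fall through to a value hint

assert opening_token('map') == '{'
assert opening_token('list') == '['
assert opening_token('int') is None
```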
[13/13] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5daf8d02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5daf8d02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5daf8d02

Branch: refs/heads/trunk
Commit: 5daf8d020f7a37a8e71e3441c8cedc0cdaa85b04
Parents: 7c3e0b1 6bda886
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:57:04 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:57:04 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5daf8d02/pylib/cqlshlib/cql3handling.py
--



[07/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25a1e896
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25a1e896
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25a1e896

Branch: refs/heads/cassandra-3.0
Commit: 25a1e8960cdc7c7558e1ec7e34e7b8e22cfde152
Parents: 3864b21 1b81ad1
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:55:07 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:55:07 2015 -0500

--

--




[04/13] cassandra git commit: Fix completion problems in cqlsh

2015-12-01 Thread jmckenzie
Fix completion problems in cqlsh

Patch by stefania; reviewed by pmotta for CASSANDRA-10753


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b81ad19
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b81ad19
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b81ad19

Branch: refs/heads/cassandra-3.1
Commit: 1b81ad19d33710bfa1724262f76cd3cd8114b162
Parents: 8738087
Author: Stefania 
Authored: Tue Dec 1 13:52:09 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:52:09 2015 -0500

--
 bin/cqlsh.py|   2 +-
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-c535450.zip | Bin 233938 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  11 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 6 files changed, 25 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 027a45e..a5a2bfa 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -2330,7 +2330,7 @@ class ImportProcess(mp.Process):
 table_meta = new_cluster.metadata.keyspaces[self.ks].tables[self.cf]
 
 pk_cols = [col.name for col in table_meta.primary_key]
-cqltypes = [table_meta.columns[name].typestring for name in self.columns]
+cqltypes = [table_meta.columns[name].cql_type for name in self.columns]
 pk_indexes = [self.columns.index(col.name) for col in table_meta.primary_key]
 is_counter_table = ("counter" in cqltypes)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip
new file mode 100644
index 000..507370b
Binary files /dev/null and b/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip b/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip
deleted file mode 100644
index 9c75cd6..000
Binary files a/lib/cassandra-driver-internal-only-3.0.0a3.post0-c535450.zip and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b81ad19/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index aed7d01..4c21f7a 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -767,21 +767,20 @@ def relation_token_subject_completer(ctxt, cass):
 @completer_for('relation', 'rel_lhs')
 def select_relation_lhs_completer(ctxt, cass):
 layout = get_table_meta(ctxt, cass)
-filterable = set((layout.partition_key[0].name, layout.clustering_key[0].name))
+filterable = set()
 already_filtered_on = map(dequote_name, ctxt.get_binding('rel_lhs', ()))
-for num in range(1, len(layout.partition_key)):
-if layout.partition_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.partition_key)):
+if num == 0 or layout.partition_key[num - 1].name in already_filtered_on:
 filterable.add(layout.partition_key[num].name)
 else:
 break
-for num in range(1, len(layout.clustering_key)):
-if layout.clustering_key[num - 1].name in already_filtered_on:
+for num in range(0, len(layout.clustering_key)):
+if num == 0 or layout.clustering_key[num - 1].name in already_filtered_on:
 filterable.add(layout.clustering_key[num].name)
 else:
 break
-for cd in layout.columns.values():
-if cd.index:
-filterable.add(cd.name)
+for idx in layout.indexes.itervalues():
+filterable.add(idx.index_options["target"])
 return map(maybe_escape_name, filterable)
 
 explain_completion('selector', 'colname')
@@ -830,16 +829,16 @@ def insert_newval_completer(ctxt, cass):
 if len(valuesdone) >= len(insertcols):
 return []
 curcol = insertcols[len(valuesdone)]
-cqltype = layout.columns[curcol].data_type
-coltype = cqltype.typename
+coltype = layout.columns[curcol].cql_type
 if coltype in 

[05/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25a1e896
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25a1e896
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25a1e896

Branch: refs/heads/cassandra-3.1
Commit: 25a1e8960cdc7c7558e1ec7e34e7b8e22cfde152
Parents: 3864b21 1b81ad1
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:55:07 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:55:07 2015 -0500

--

--




[06/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/25a1e896
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/25a1e896
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/25a1e896

Branch: refs/heads/trunk
Commit: 25a1e8960cdc7c7558e1ec7e34e7b8e22cfde152
Parents: 3864b21 1b81ad1
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:55:07 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:55:07 2015 -0500

--

--




[12/13] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-12-01 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bda8868
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bda8868
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bda8868

Branch: refs/heads/trunk
Commit: 6bda8868caedc831767e7de398e9d3be53142a11
Parents: 60aeef3 803a3d9
Author: Joshua McKenzie 
Authored: Tue Dec 1 13:56:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Tue Dec 1 13:56:25 2015 -0500

--
 ...andra-driver-internal-only-3.0.0-6af642d.zip | Bin 0 -> 228893 bytes
 ...iver-internal-only-3.0.0a3.post0-3f15725.zip | Bin 234113 -> 0 bytes
 pylib/cqlshlib/cql3handling.py  |  36 +--
 pylib/cqlshlib/test/test_cqlsh_completion.py|  10 +++---
 pylib/cqlshlib/test/test_cqlsh_output.py|  15 
 5 files changed, 23 insertions(+), 38 deletions(-)
--




[jira] [Commented] (CASSANDRA-10771) bootstrap_test.py:TestBootstrap.resumable_bootstrap_test is failing

2015-12-01 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034371#comment-15034371
 ] 

Joshua McKenzie commented on CASSANDRA-10771:
-

The interface of Transactional.abort expects a Throwable:

{code}
Throwable abort(Throwable accumulate);
{code}

Rather than passing abort null and then checking for null, would it make more 
sense to just pass e?

{code}
catch (Throwable e)
{
if (writer != null)
{
Throwable e2 = writer.abort(null); // pass e here instead of null
// add abort error to original and continue so we can drain unread stream
e.addSuppressed(e2);
}
{code}
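The suggestion above is to pass the caught exception straight into abort so the cleanup failure accumulates onto it, removing the null check. This can be sketched outside Java; the following is an illustrative Python model of the `Transactional.abort(Throwable)` accumulate contract, not Cassandra code:

```python
def abort(cleanup, accumulate=None):
    """Model of Transactional.abort(Throwable): run cleanup, fold any
    cleanup failure into `accumulate`, and return the combined error."""
    try:
        cleanup()
    except Exception as cleanup_err:
        if accumulate is None:
            return cleanup_err
        # roughly analogous to Throwable.addSuppressed in Java
        accumulate.args = accumulate.args + (cleanup_err,)
    return accumulate

# Passing the caught exception in directly means the call site needs no
# null handling: the original failure is preserved either way.
try:
    raise IOError("stream failed")
except IOError as e:
    combined = abort(lambda: None, e)  # cleanup succeeds here
    assert combined is e
```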

> bootstrap_test.py:TestBootstrap.resumable_bootstrap_test is failing
> ---
>
> Key: CASSANDRA-10771
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10771
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Yuki Morishita
> Fix For: 3.0.1, 3.1
>
>
> When running {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} 
> locally, the test is failing on cassandra-3.0. When I bisect the failure, I 
> find that 87f5e2e39c100, the commit that merged CASSANDRA-10557 into 3.0 is 
> the first failing commit. I can reproduce this consistently locally, but 
> cassci is only having intermittent failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-12-01 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2491ede3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2491ede3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2491ede3

Branch: refs/heads/cassandra-3.0
Commit: 2491ede3515f4b774069ffd645b0fb18f9c73630
Parents: 1b81ad1 5ba69a3
Author: Yuki Morishita 
Authored: Tue Dec 1 13:05:36 2015 -0600
Committer: Yuki Morishita 
Committed: Tue Dec 1 13:05:36 2015 -0600

--
 CHANGES.txt |   1 +
 .../cassandra/streaming/StreamReceiveTask.java  | 105 ++-
 2 files changed, 59 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2491ede3/CHANGES.txt
--
diff --cc CHANGES.txt
index af1a186,3ce2da6..7541212
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,22 -1,5 +1,23 @@@
 -2.1.12
 +2.2.4
 + * Show CQL help in cqlsh in web browser (CASSANDRA-7225)
 + * Serialize on disk the proper SSTable compression ratio (CASSANDRA-10775)
 + * Reject index queries while the index is building (CASSANDRA-8505)
 + * CQL.textile syntax incorrectly includes optional keyspace for aggregate SFUNC and FINALFUNC (CASSANDRA-10747)
 + * Fix JSON update with prepared statements (CASSANDRA-10631)
 + * Don't do anticompaction after subrange repair (CASSANDRA-10422)
 + * Fix SimpleDateType type compatibility (CASSANDRA-10027)
 + * (Hadoop) fix splits calculation (CASSANDRA-10640)
 + * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 + * (cqlsh) show partial trace if incomplete after max_trace_wait (CASSANDRA-7645)
 + * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
 + * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
 + * Expose phi values from failure detector via JMX and tweak debug
 +   and trace logging (CASSANDRA-9526)
 + * Fix RangeNamesQueryPager (CASSANDRA-10509)
 + * Deprecate Pig support (CASSANDRA-10542)
 + * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
 +Merged from 2.1:
+  * Add proper error handling to stream receiver (CASSANDRA-10774)
   * Warn or fail when changing cluster topology live (CASSANDRA-10243)
   * Status command in debian/ubuntu init script doesn't work (CASSANDRA-10213)
   * Some DROP ... IF EXISTS incorrectly result in exceptions on non-existing KS (CASSANDRA-10658)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2491ede3/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
--
diff --cc src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
index 846524b,8773cab..dd56b8b
--- a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
@@@ -37,8 -37,10 +37,9 @@@ import org.apache.cassandra.db.ColumnFa
  import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.dht.Bounds;
  import org.apache.cassandra.dht.Token;
 -import org.apache.cassandra.io.sstable.SSTableReader;
 -import org.apache.cassandra.io.sstable.SSTableWriter;
 -import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableWriter;
+ import org.apache.cassandra.utils.JVMStabilityInspector;
  import org.apache.cassandra.utils.Pair;
  
  import org.apache.cassandra.utils.concurrent.Refs;
@@@ -112,63 -117,73 +113,73 @@@ public class StreamReceiveTask extends 
  
  public void run()
  {
- Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
- if (kscf == null)
+ try
  {
- // schema was dropped during streaming
+ Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
+ if (kscf == null)
+ {
+ // schema was dropped during streaming
+ for (SSTableWriter writer : task.sstables)
+ writer.abort();
+ task.sstables.clear();
+ task.session.taskCompleted(task);
+ return;
+ }
+ ColumnFamilyStore cfs = Keyspace.open(kscf.left).getColumnFamilyStore(kscf.right);
+ 
+ File lockfiledir = cfs.directories.getWriteableLocationAsFile(task.sstables.size() * 256L);
+ if (lockfiledir == null)
+ throw new IOError(new IOException("All disks full"));
+ StreamLockfile lockfile = new StreamLockfile(lockfiledir, UUID.randomUUID());
+ 

[03/15] cassandra git commit: Add proper error handling to stream receiver

2015-12-01 Thread yukim
Add proper error handling to stream receiver

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10774


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ba69a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ba69a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ba69a32

Branch: refs/heads/trunk
Commit: 5ba69a32590074610f5516a20b8198416b79dfcf
Parents: 7650fc1
Author: Paulo Motta 
Authored: Fri Nov 27 16:37:37 2015 -0800
Committer: Yuki Morishita 
Committed: Tue Dec 1 11:53:35 2015 -0600

--
 CHANGES.txt |   1 +
 .../cassandra/streaming/StreamReceiveTask.java  | 105 ++-
 2 files changed, 59 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ba69a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a2f7b6e..3ce2da6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)
  * Status command in debian/ubuntu init script doesn't work (CASSANDRA-10213)
  * Some DROP ... IF EXISTS incorrectly result in exceptions on non-existing KS 
(CASSANDRA-10658)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ba69a32/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
index 738c93c..8773cab 100644
--- a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
@@ -40,6 +40,7 @@ import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.io.sstable.SSTableWriter;
 import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.JVMStabilityInspector;
 import org.apache.cassandra.utils.Pair;
 
 import org.apache.cassandra.utils.concurrent.Refs;
@@ -116,63 +117,73 @@ public class StreamReceiveTask extends StreamTask
 
 public void run()
 {
-Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
-if (kscf == null)
+try
 {
-// schema was dropped during streaming
+Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
+if (kscf == null)
+{
+// schema was dropped during streaming
+for (SSTableWriter writer : task.sstables)
+writer.abort();
+task.sstables.clear();
+task.session.taskCompleted(task);
+return;
+}
+ColumnFamilyStore cfs = Keyspace.open(kscf.left).getColumnFamilyStore(kscf.right);
+
+File lockfiledir = cfs.directories.getWriteableLocationAsFile(task.sstables.size() * 256L);
+if (lockfiledir == null)
+throw new IOError(new IOException("All disks full"));
+StreamLockfile lockfile = new StreamLockfile(lockfiledir, UUID.randomUUID());
+lockfile.create(task.sstables);
+List<SSTableReader> readers = new ArrayList<>();
 for (SSTableWriter writer : task.sstables)
-writer.abort();
+readers.add(writer.closeAndOpenReader());
+lockfile.delete();
 task.sstables.clear();
-return;
-}
-ColumnFamilyStore cfs = Keyspace.open(kscf.left).getColumnFamilyStore(kscf.right);
-
-File lockfiledir = cfs.directories.getWriteableLocationAsFile(task.sstables.size() * 256L);
-if (lockfiledir == null)
-throw new IOError(new IOException("All disks full"));
-StreamLockfile lockfile = new StreamLockfile(lockfiledir, UUID.randomUUID());
-lockfile.create(task.sstables);
-List<SSTableReader> readers = new ArrayList<>();
-for (SSTableWriter writer : task.sstables)
-readers.add(writer.closeAndOpenReader());
-lockfile.delete();
-task.sstables.clear();
-
-try (Refs<SSTableReader> refs = Refs.ref(readers))
-{
-// add sstables and build secondary indexes
-cfs.addSSTables(readers);
-cfs.indexManager.maybeBuildSecondaryIndexes(readers, cfs.indexManager.allIndexesNames());
 
-//invalidate row and 

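The patch above wraps the whole of StreamReceiveTask.run() in a try block so that any failure while finishing received sstables aborts every pending writer instead of leaking them. The control flow, modeled in Python with hypothetical names (the real code is the Java diff above):

```python
def receive_task_run(writers, add_sstables):
    """On any failure while finishing received sstables, abort every
    pending writer (best effort), then re-raise the original error."""
    try:
        readers = [w.close_and_open_reader() for w in writers]
        add_sstables(readers)
    except Exception:
        for w in writers:
            try:
                w.abort()
            except Exception:
                pass  # cleanup is best effort; keep the original failure
        raise

class FakeWriter:
    """Stub standing in for an SSTableWriter."""
    def __init__(self, fail=False):
        self.fail, self.aborted = fail, False
    def close_and_open_reader(self):
        if self.fail:
            raise IOError("disk full")
        return "reader"
    def abort(self):
        self.aborted = True

writers = [FakeWriter(), FakeWriter(fail=True)]
try:
    receive_task_run(writers, lambda readers: None)
except IOError:
    pass
# every writer was aborted, including the one that did not fail
assert all(w.aborted for w in writers)
```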
[01/15] cassandra git commit: Add proper error handling to stream receiver

2015-12-01 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 7650fc196 -> 5ba69a325
  refs/heads/cassandra-2.2 1b81ad19d -> 2491ede35
  refs/heads/cassandra-3.0 803a3d901 -> ccb20ad46
  refs/heads/cassandra-3.1 6bda8868c -> 5b6a368c9
  refs/heads/trunk 5daf8d020 -> 03863ed24


Add proper error handling to stream receiver

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10774


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ba69a32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ba69a32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ba69a32

Branch: refs/heads/cassandra-2.1
Commit: 5ba69a32590074610f5516a20b8198416b79dfcf
Parents: 7650fc1
Author: Paulo Motta 
Authored: Fri Nov 27 16:37:37 2015 -0800
Committer: Yuki Morishita 
Committed: Tue Dec 1 11:53:35 2015 -0600

--
 CHANGES.txt |   1 +
 .../cassandra/streaming/StreamReceiveTask.java  | 105 ++-
 2 files changed, 59 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ba69a32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a2f7b6e..3ce2da6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Add proper error handling to stream receiver (CASSANDRA-10774)
  * Warn or fail when changing cluster topology live (CASSANDRA-10243)
  * Status command in debian/ubuntu init script doesn't work (CASSANDRA-10213)
  * Some DROP ... IF EXISTS incorrectly result in exceptions on non-existing KS 
(CASSANDRA-10658)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ba69a32/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
index 738c93c..8773cab 100644
--- a/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java
@@ -40,6 +40,7 @@ import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.io.sstable.SSTableWriter;
 import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.JVMStabilityInspector;
 import org.apache.cassandra.utils.Pair;
 
 import org.apache.cassandra.utils.concurrent.Refs;
@@ -116,63 +117,73 @@ public class StreamReceiveTask extends StreamTask
 
 public void run()
 {
-Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
-if (kscf == null)
+try
 {
-// schema was dropped during streaming
+Pair<String, String> kscf = Schema.instance.getCF(task.cfId);
+if (kscf == null)
+{
+// schema was dropped during streaming
+for (SSTableWriter writer : task.sstables)
+writer.abort();
+task.sstables.clear();
+task.session.taskCompleted(task);
+return;
+}
+ColumnFamilyStore cfs = Keyspace.open(kscf.left).getColumnFamilyStore(kscf.right);
+
+File lockfiledir = cfs.directories.getWriteableLocationAsFile(task.sstables.size() * 256L);
+if (lockfiledir == null)
+throw new IOError(new IOException("All disks full"));
+StreamLockfile lockfile = new StreamLockfile(lockfiledir, UUID.randomUUID());
+lockfile.create(task.sstables);
+List<SSTableReader> readers = new ArrayList<>();
 for (SSTableWriter writer : task.sstables)
-writer.abort();
+readers.add(writer.closeAndOpenReader());
+lockfile.delete();
 task.sstables.clear();
-return;
-}
-ColumnFamilyStore cfs = Keyspace.open(kscf.left).getColumnFamilyStore(kscf.right);
-
-File lockfiledir = cfs.directories.getWriteableLocationAsFile(task.sstables.size() * 256L);
-if (lockfiledir == null)
-throw new IOError(new IOException("All disks full"));
-StreamLockfile lockfile = new StreamLockfile(lockfiledir, UUID.randomUUID());
-lockfile.create(task.sstables);
-List<SSTableReader> readers = new ArrayList<>();
-for (SSTableWriter writer : task.sstables)
-readers.add(writer.closeAndOpenReader());
-lockfile.delete();
-task.sstables.clear();
-
-try (Refs 
