[jira] [Commented] (CASSANDRA-7395) Support for pure user-defined functions (UDF)

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082010#comment-14082010
 ] 

Robert Stupp commented on CASSANDRA-7395:
-

OK :)

Then I would build some unit tests.

BTW: Is there something that I can reuse to add a unit test for schema 
migration in a cluster? E.g. a unit test that creates a function on node A 
and checks whether it can be executed on node B.

 Support for pure user-defined functions (UDF)
 -

 Key: CASSANDRA-7395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Robert Stupp
  Labels: cql
 Fix For: 3.0

 Attachments: 7395.txt, udf-create-syntax.png, udf-drop-syntax.png


 We have some tickets for various aspects of UDF (CASSANDRA-4914, 
 CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of 
 ocean-boiling.
 Let's start with something simple: allowing pure user-defined functions in 
 the SELECT clause of a CQL query.  That's it.
 By pure I mean it must depend only on the input parameters.  No side effects. 
  No exposure to C* internals.  Column values in, result out.  
 http://en.wikipedia.org/wiki/Pure_function
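 As a concrete illustration (an illustrative example, not syntax from this ticket): a 
 pure function in this sense is just a deterministic computation over its arguments, 
 e.g.
 {code}
 public final class PureFunctions
 {
     // Pure: the result depends only on the input; no state is read or written.
     public static double fahrenheitToCelsius(double f)
     {
         return (f - 32.0) * 5.0 / 9.0;
     }
 }
 {code}
 Whatever CREATE FUNCTION syntax this ends up with, the function body should stay 
 at that level: column values in, result out.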



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082025#comment-14082025
 ] 

Robert Stupp commented on CASSANDRA-5959:
-

Can we resolve this ticket as "later" instead of "duplicate"? This one covers 
CQL INSERT syntax, which is different from CASSANDRA-4693, which covers batch 
prepared statements.

In CASSANDRA-7654 I restricted rows to be in the same partition to keep updates 
as atomic as possible and to prevent it from being just another syntax for BATCH 
w/ pstmt.
If the rows are restricted to the same partition, it could also solve the 
issue that deletes always win over inserts/updates (with the same 
modification timestamp).
It could be used to replace a whole partition, although I'm not sold on an 
INSERT implicitly performing a DELETE.
I think with Thrift it was possible to replace a complete row (not sure - I did 
not work much with Thrift).

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +---------+------+------+------+------+-------+
 |         | 0    | 1    | 2    | ...  | 49999 |
 | row_id  +------+------+------+------+-------+
 |         | text | text | text | ...  | text  |
 +---------+------+------+------+------+-------+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
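 For reference, a minimal sketch of that per-row insert loop, assuming the DataStax 
 Java Driver 2.x API; the contact point, keyspace and row id below are illustrative 
 placeholders:
 {code}
 import com.datastax.driver.core.BoundStatement;
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.PreparedStatement;
 import com.datastax.driver.core.Session;

 public class WideRowInsert
 {
     public static void main(String[] args)
     {
         Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
         Session session = cluster.connect("test_ks");

         PreparedStatement insert = session.prepare(
                 "INSERT INTO results (row_id, index, value) VALUES (?, ?, ?)");

         // One request per (index, value) pair -- 50,000 round trips for one logical row.
         for (int i = 0; i < 50000; i++)
         {
             BoundStatement bound = insert.bind("my_row_id", i, "text" + i);
             session.execute(bound);
         }
         cluster.close();
     }
 }
 {code}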
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082031#comment-14082031
 ] 

Robert Stupp commented on CASSANDRA-7638:
-

Since this one changes the way operations people work (log file changes), 2.1 
sounds good.

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.10

 Attachments: 7638.txt


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7653) Add role based access control to Cassandra

2014-08-01 Thread Mike Adamson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082048#comment-14082048
 ] 

Mike Adamson commented on CASSANDRA-7653:
-

[~mshuler] There is a suite of dtests for this here: 
https://github.com/riptano/cassandra-dtest/blob/rbac/auth_roles_test.py

I can attach this as a patch if you want.

 Add role based access control to Cassandra
 --

 Key: CASSANDRA-7653
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7653
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Mike Adamson
 Fix For: 3.0

 Attachments: 7653.patch


 The current authentication model supports granting permissions to individual 
 users. While this is OK for small or medium organizations wanting to 
 implement authorization, it does not work well in large organizations because 
 of the overhead of having to maintain the permissions for each user.
 Introducing roles into the authentication model would allow sets of 
 permissions to be controlled in one place as a role and then the role granted 
 to users. Roles should also be able to be granted to other roles to allow 
 hierarchical sets of permissions to be built up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)
ZhongYu created CASSANDRA-7664:
--

 Summary: IndexOutOfBoundsException thrown during repair
 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
Cassandra 1.2.18
Reporter: ZhongYu


I was running the repair command with moderate read and write load at the same 
time, and I found tens of IndexOutOfBoundsExceptions in the system log, as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found that it can 
throw an IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java, starting from line 139
protected void runMayThrow() throws Exception
{
    byte[] compressedWithCRC;
    while (chunks.hasNext())
    {
        CompressionMetadata.Chunk chunk = chunks.next();

        int readLength = chunk.length + 4; // read with CRC
        compressedWithCRC = new byte[readLength];

        int bufferRead = 0;
        while (bufferRead < readLength)
            bufferRead += source.read(compressedWithCRC, bufferRead, readLength - bufferRead);
        dataBuffer.put(compressedWithCRC);
    }
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative.
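
For illustration only (a sketch, not a proposed patch): the inner loop could treat 
the -1 end-of-stream return value as an error instead of adding it to the offset. 
The readChunkWithCRC helper and the plain InputStream parameter are assumptions 
made for this standalone example.

{code:title=ReadLoopSketch.java|borderStyle=solid}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopSketch
{
    // Reads exactly readLength bytes from source, failing fast when the stream
    // ends early instead of letting the -1 return value drive the offset negative.
    static byte[] readChunkWithCRC(InputStream source, int readLength) throws IOException
    {
        byte[] compressedWithCRC = new byte[readLength];
        int bufferRead = 0;
        while (bufferRead < readLength)
        {
            int r = source.read(compressedWithCRC, bufferRead, readLength - bufferRead);
            if (r < 0) // end of stream reached before the chunk was fully read
                throw new EOFException("stream ended after " + bufferRead + " of " + readLength + " bytes");
            bufferRead += r;
        }
        return compressedWithCRC;
    }
}
{code}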



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Description: 
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
   protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 

  was:
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=Bar.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
   protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.18
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of CompressedInputStream.java start from Line 

[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Description: 
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
   protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 

  was:
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
   protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.18
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of 

[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Description: 
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 

  was:
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
   protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.18
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of 

[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Description: 
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
{quote}
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)
{quote}

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}

If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 

  was:
I was running repair command with moderate read and write load at the same 
time. And I found tens of IndexOutOfBoundsException in system log as follows:
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:662)

I read the source code of CompressedInputStream.java and found there surely 
will throw IndexOutOfBoundsException in the following situation:

{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java start from Line 139
protected void runMayThrow() throws Exception
{
byte[] compressedWithCRC;
while (chunks.hasNext())
{
CompressionMetadata.Chunk chunk = chunks.next();

int readLength = chunk.length + 4; // read with CRC
compressedWithCRC = new byte[readLength];

int bufferRead = 0;
while (bufferRead < readLength)
bufferRead += source.read(compressedWithCRC, bufferRead, 
readLength - bufferRead);
dataBuffer.put(compressedWithCRC);
}
}
{code}
If the read function reads nothing because the end of the stream has been reached, 
it returns -1, so bufferRead can become negative. On the next iteration, read 
throws an IndexOutOfBoundsException because bufferRead is negative. 


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.18
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 {quote}
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 {quote}
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 

[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Environment: 
RHEL 6.1
Cassandra 1.2.3

  was:
RHEL 6.1
Cassandra 1.2.18


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.3
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 {quote}
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 {quote}
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of CompressedInputStream.java start from Line 139
 protected void runMayThrow() throws Exception
 {
 byte[] compressedWithCRC;
 while (chunks.hasNext())
 {
 CompressionMetadata.Chunk chunk = chunks.next();
 int readLength = chunk.length + 4; // read with CRC
 compressedWithCRC = new byte[readLength];
 int bufferRead = 0;
 while (bufferRead < readLength)
 bufferRead += source.read(compressedWithCRC, bufferRead, 
 readLength - bufferRead);
 dataBuffer.put(compressedWithCRC);
 }
 }
 {code}
 If the read function reads nothing because the end of the stream has been reached, 
 it returns -1, so bufferRead can become negative. On the next iteration, read 
 throws an IndexOutOfBoundsException because bufferRead is negative. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread ZhongYu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongYu updated CASSANDRA-7664:
---

Environment: 
RHEL 6.1
Cassandra 1.2.3 - 1.2.18

  was:
RHEL 6.1
Cassandra 1.2.3


 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.3 - 1.2.18
Reporter: ZhongYu

 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 {quote}
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 {quote}
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of CompressedInputStream.java start from Line 139
 protected void runMayThrow() throws Exception
 {
 byte[] compressedWithCRC;
 while (chunks.hasNext())
 {
 CompressionMetadata.Chunk chunk = chunks.next();
 int readLength = chunk.length + 4; // read with CRC
 compressedWithCRC = new byte[readLength];
 int bufferRead = 0;
 while (bufferRead < readLength)
 bufferRead += source.read(compressedWithCRC, bufferRead, 
 readLength - bufferRead);
 dataBuffer.put(compressedWithCRC);
 }
 }
 {code}
 If the read function reads nothing because the end of the stream has been reached, 
 it returns -1, so bufferRead can become negative. On the next iteration, read 
 throws an IndexOutOfBoundsException because bufferRead is negative. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7593) Errors when upgrading through several versions to 2.1

2014-08-01 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082097#comment-14082097
 ] 

Marcus Eriksson commented on CASSANDRA-7593:


[~rhatch] I don't think that is related to this (NoSuchMethodError is likely an 
environment issue)

+1 on the patch

 Errors when upgrading through several versions to 2.1
 -

 Key: CASSANDRA-7593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7593
 Project: Cassandra
  Issue Type: Bug
 Environment: java 1.7
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.0

 Attachments: 0001-keep-clusteringSize-in-CompoundComposite.patch, 
 7593-v2.txt, 7593.txt


 I'm seeing two different errors cropping up in the dtest which upgrades a 
 cluster through several versions.
 This is the more common error:
 {noformat}
 ERROR [GossipStage:10] 2014-07-22 13:14:30,028 CassandraDaemon.java:168 - 
 Exception in thread Thread[GossipStage:10,5,main]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.shouldInclude(SliceQueryFilter.java:347)
  ~[main/:na]
 at 
 org.apache.cassandra.db.filter.QueryFilter.shouldInclude(QueryFilter.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
  ~[main/:na]
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:59)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.readLocally(SelectStatement.java:293)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:302)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:263)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.getPreferredIP(SystemKeyspace.java:514)
  ~[main/:na]
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.init(OutboundTcpConnectionPool.java:51)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:522)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:536)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:689)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:663)
  ~[main/:na]
 at 
 org.apache.cassandra.service.EchoVerbHandler.doVerb(EchoVerbHandler.java:40) 
 ~[main/:na]
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_60]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
 {noformat}
 The same test sometimes fails with this exception instead:
 {noformat}
 ERROR [CompactionExecutor:4] 2014-07-22 16:18:21,008 CassandraDaemon.java:168 
 - Exception in thread Thread[CompactionExecutor:4,1,RMI Runtime]
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7059d3e9 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@108f1504[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 95]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) 
 ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.execute(ScheduledThreadPoolExecutor.java:619)
  ~[na:1.7.0_60]
 at 
 

[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair

2014-08-01 Thread xiangdong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangdong Huang updated CASSANDRA-7664:
---

Attachment: $O[TOX~GGUZRW~IHPYPEG{0.jpg

I got the same problem.
First, I used the nodetool repair command to repair consistency.
Second, I shut down Cassandra while the repair was still running.
Third, I restarted Cassandra and, minutes later, found error logs.

 IndexOutOfBoundsException thrown during repair
 --

 Key: CASSANDRA-7664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: RHEL 6.1
 Cassandra 1.2.3 - 1.2.18
Reporter: ZhongYu
 Attachments: $O[TOX~GGUZRW~IHPYPEG{0.jpg


 I was running repair command with moderate read and write load at the same 
 time. And I found tens of IndexOutOfBoundsException in system log as follows:
 {quote}
 ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line132) 
 Exception in thread Thread[Thread-6056,5,main]
 java.lang.IndexOutOfBoundsException
 at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
 at 
 org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.lang.Thread.run(Thread.java:662)
 {quote}
 I read the source code of CompressedInputStream.java and found there surely 
 will throw IndexOutOfBoundsException in the following situation:
 {code:title=CompressedInputStream.java|borderStyle=solid}
 // Part of CompressedInputStream.java start from Line 139
 protected void runMayThrow() throws Exception
 {
 byte[] compressedWithCRC;
 while (chunks.hasNext())
 {
 CompressionMetadata.Chunk chunk = chunks.next();
 int readLength = chunk.length + 4; // read with CRC
 compressedWithCRC = new byte[readLength];
 int bufferRead = 0;
 while (bufferRead < readLength)
 bufferRead += source.read(compressedWithCRC, bufferRead, 
 readLength - bufferRead);
 dataBuffer.put(compressedWithCRC);
 }
 }
 {code}
 If the read function reads nothing because the end of the stream has been reached, 
 it returns -1, so bufferRead can become negative. On the next iteration, read 
 throws an IndexOutOfBoundsException because bufferRead is negative. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7575) Custom 2i validation

2014-08-01 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082143#comment-14082143
 ] 

Sergio Bossa commented on CASSANDRA-7575:
-

[~adelapena], the patch doesn't apply cleanly to cassandra-2.1.

 Custom 2i validation
 

 Key: CASSANDRA-7575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7575
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Andrés de la Peña
Assignee: Andrés de la Peña
Priority: Minor
  Labels: 2i, cql3, secondaryIndex, secondary_index, select
 Fix For: 2.1.1

 Attachments: 2i_validation.patch, 2i_validation_v2.patch


 There are several projects using custom secondary indexes as an extension 
 point to integrate C* with other systems such as Solr or Lucene. The usual 
 approach is to embed third party indexing queries in CQL clauses. 
 For example, [DSE 
 Search|http://www.datastax.com/what-we-offer/products-services/datastax-enterprise]
  embeds Solr syntax this way:
 {code}
 SELECT title FROM solr WHERE solr_query='title:natio*';
 {code}
 [Stratio platform|https://github.com/Stratio/stratio-cassandra] embeds custom 
 JSON syntax for searching in Lucene indexes:
 {code}
 SELECT * FROM tweets WHERE lucene='{
     filter : {
         type: "range",
         field: "time",
         lower: "2014/04/25",
         upper: "2014/04/1"
     },
     query : {
         type: "phrase",
         field: "body",
         values: ["big", "data"]
     },
     sort : {fields: [ {field: "time", reverse: true} ] }
 }';
 {code}
 Tuplejump [Stargate|http://tuplejump.github.io/stargate/] also uses Stratio's 
 open source JSON syntax:
 {code}
 SELECT name,company FROM PERSON WHERE stargate ='{
     filter: {
         type: "range",
         field: "company",
         lower: "a",
         upper: "p"
     },
     sort: {
         fields: [{field: "name", reverse: true}]
     }
 }';
 {code}
 These syntaxes are validated by the corresponding 2i implementation. This 
 validation is done behind the StorageProxy command distribution. So, as far as I 
 know, there is no way to give rich feedback about syntax errors to CQL users.
 I'm uploading a patch with some changes trying to improve this. I propose 
 adding an empty validation method to SecondaryIndexSearcher that can be 
 overridden by custom 2i implementations:
 {code}
 public void validate(List<IndexExpression> clause) {}
 {code}
 And call it from SelectStatement#getRangeCommand:
 {code}
 ColumnFamilyStore cfs = Keyspace.open(keyspace()).getColumnFamilyStore(columnFamily());
 for (SecondaryIndexSearcher searcher : cfs.indexManager.getIndexSearchersForQuery(expressions))
 {
     try
     {
         searcher.validate(expressions);
     }
     catch (RuntimeException e)
     {
         String exceptionMessage = e.getMessage();
         if (exceptionMessage != null && !exceptionMessage.trim().isEmpty())
             throw new InvalidRequestException("Invalid index expression: " + e.getMessage());
         else
             throw new InvalidRequestException("Invalid index expression");
     }
 }
 {code}
 In this way C* allows custom 2i implementations to give feedback about syntax 
 errors.
 We are currently using these changes in a fork with no problems.
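 As a stand-alone illustration (not part of the attached patch) of the kind of 
 check a custom 2i could run inside the proposed validate() hook, the sketch below 
 rejects an embedded query string with unbalanced braces; the wrapper above would 
 then surface the message as an InvalidRequestException:
 {code}
 public final class EmbeddedQueryValidator
 {
     // Throws a RuntimeException with a descriptive message when the embedded
     // query string is obviously malformed; the SelectStatement code above turns
     // that into an InvalidRequestException for the CQL client.
     public static void validate(String embeddedQuery)
     {
         if (embeddedQuery == null || embeddedQuery.trim().isEmpty())
             throw new RuntimeException("empty query");

         int depth = 0;
         for (int i = 0; i < embeddedQuery.length(); i++)
         {
             char c = embeddedQuery.charAt(i);
             if (c == '{')
                 depth++;
             else if (c == '}' && --depth < 0)
                 throw new RuntimeException("unbalanced '}' at position " + i);
         }
         if (depth != 0)
             throw new RuntimeException("missing " + depth + " closing brace(s)");
     }
 }
 {code}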



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7532) Cassandra 2.0.9 crashes with ERROR [CompactionExecutor:216] 2014-07-10 14:26:08,334 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:21

2014-08-01 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082171#comment-14082171
 ] 

Marcus Eriksson commented on CASSANDRA-7532:


[~dignity] do you still have the logs from the node that threw the exception? 
Could you attach them?

The exceptions in the log [~philipthompson] attached are from the dtest doing a 
major compaction, which stops any other running compactions.

 Cassandra 2.0.9 crashes with ERROR [CompactionExecutor:216] 2014-07-10 
 14:26:08,334 CassandraDaemon.java (line 199) Exception in thread 
 Thread[CompactionExecutor:216,1,main]
 -

 Key: CASSANDRA-7532
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7532
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 in KVM 4core/4GB RAM/Ext4overLVM
 Oracle Java 7
 java version 1.7.0_60
 Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
 Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Ivan Kudryavtsev
Assignee: Marcus Eriksson
 Fix For: 2.0.10, 2.1.1

 Attachments: 7532.log, cassandra.yaml


 System crashed with
 ERROR [CompactionExecutor:216] 2014-07-10 14:26:08,320 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:216,1,main]
 ERROR [CompactionExecutor:216] 2014-07-10 14:26:08,325 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:216,1,main]
 ERROR [CompactionExecutor:216] 2014-07-10 14:26:08,334 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:216,1,main]
 Unable to repeat yet, works after restart.
 Installation is local, not clustered. Data table schema below. Enabled 
 internal user/password authentication.
 --
 CREATE TABLE sflowdata (
   pk timeuuid,
   when bigint,
   samplesequenceno bigint,
   bgp_localpref int,
   bgp_nexthop text,
   countryfrom text,
   countryto text,
   dst_as int,
   dst_as_path text,
   dst_peer_as int,
   dstip text,
   dstmac text,
   dstsubnetmask int,
   headerbytes text,
   headerlen int,
   headerprotocol int,
   in_priority int,
   in_vlan int,
   inputport int,
   ipprotocol int,
   ipsize int,
   iptos int,
   ipttl int,
   meanskipcount int,
   my_as int,
   nexthop text,
   out_priority int,
   out_vlan int,
   outputport int,
   sampledpacketsize int,
   src_as int,
   src_peer_as int,
   srcip text,
   srcmac text,
   srcsubnetmask int,
   strippedbytes int,
   tcpdstport int,
   tcpflags int,
   tcpsrcport int,
   PRIMARY KEY ((pk), when, samplesequenceno)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.10 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX bgp_localpref_idx ON sflowdata (bgp_localpref);
 CREATE INDEX bgp_nexthop_idx ON sflowdata (bgp_nexthop);
 CREATE INDEX countryfrom_idx ON sflowdata (countryfrom);
 CREATE INDEX countryto_idx ON sflowdata (countryto);
 CREATE INDEX dstas_idx ON sflowdata (dst_as);
 CREATE INDEX dst_as_path_idx ON sflowdata (dst_as_path);
 CREATE INDEX dstpeer_idx ON sflowdata (dst_peer_as);
 CREATE INDEX dstip_idx ON sflowdata (dstip);
 CREATE INDEX dstmac_idx ON sflowdata (dstmac);
 CREATE INDEX dstsubnetmask_idx ON sflowdata (dstsubnetmask);
 CREATE INDEX headerbytes_idx ON sflowdata (headerbytes);
 CREATE INDEX headerlen_idx ON sflowdata (headerlen);
 CREATE INDEX headerprotocol_idx ON sflowdata (headerprotocol);
 CREATE INDEX in_priority_idx ON sflowdata (in_priority);
 CREATE INDEX in_vlan_idx ON sflowdata (in_vlan);
 CREATE INDEX inputport_idx ON sflowdata (inputport);
 CREATE INDEX ipprotocol_idx ON sflowdata (ipprotocol);
 CREATE INDEX ipsize_idx ON sflowdata (ipsize);
 CREATE INDEX iptos_idx ON sflowdata (iptos);
 CREATE INDEX ipttl_idx ON sflowdata (ipttl);
 CREATE INDEX meanskipcount_idx ON sflowdata (meanskipcount);
 CREATE INDEX my_as_idx ON sflowdata (my_as);
 CREATE INDEX nexthop_idx ON sflowdata (nexthop);
 CREATE INDEX out_priority_idx ON sflowdata (out_priority);
 CREATE INDEX out_vlan_idx ON sflowdata (out_vlan);
 CREATE INDEX outputport_idx ON sflowdata (outputport);
 CREATE INDEX sampledpacketsize_idx ON sflowdata (sampledpacketsize);
 CREATE INDEX src_as_idx ON sflowdata (src_as);
 CREATE INDEX src_peer_as_idx ON 

[jira] [Created] (CASSANDRA-7665) nodetool scrub fails on system schema with UDTs

2014-08-01 Thread Jonathan Halliday (JIRA)
Jonathan Halliday created CASSANDRA-7665:


 Summary: nodetool scrub fails on system schema with UDTs
 Key: CASSANDRA-7665
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7665
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0-rc4
Reporter: Jonathan Halliday


[apache-cassandra-2.1.0-rc4]$ bin/cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.0-rc4 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1 };
cqlsh> use test;
cqlsh:test> CREATE TYPE point_t (x double, y double);
cqlsh:test> exit
[apache-cassandra-2.1.0-rc4]$ bin/nodetool scrub

INFO  12:34:57 Scrubbing 
SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-1-Data.db')
 (34135 bytes)
INFO  12:34:57 Scrub of 
SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-1-Data.db')
 complete: 2 rows in new sstable and 0 empty (tombstoned) rows dropped
INFO  12:34:57 Scrubbing 
SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-5-Data.db')
 (12515 bytes)
WARN  12:34:57 Error reading row (stacktrace follows):
org.apache.cassandra.io.sstable.CorruptSSTableException: 
org.apache.cassandra.serializers.MarshalException: Not enough bytes to read a 
set
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:139)
 ~[apache-cassandra-2.1.0-rc4.jar:2.1.0-rc4]




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7575) Custom 2i validation

2014-08-01 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-7575:
-

Attachment: 2i_validation_v3.patch

 Custom 2i validation
 

 Key: CASSANDRA-7575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7575
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Andrés de la Peña
Assignee: Andrés de la Peña
Priority: Minor
  Labels: 2i, cql3, secondaryIndex, secondary_index, select
 Fix For: 2.1.1

 Attachments: 2i_validation.patch, 2i_validation_v2.patch, 
 2i_validation_v3.patch


 There are several projects using custom secondary indexes as an extension 
 point to integrate C* with other systems such as Solr or Lucene. The usual 
 approach is to embed third party indexing queries in CQL clauses. 
 For example, [DSE 
 Search|http://www.datastax.com/what-we-offer/products-services/datastax-enterprise]
  embeds Solr syntax this way:
 {code}
 SELECT title FROM solr WHERE solr_query='title:natio*';
 {code}
 [Stratio platform|https://github.com/Stratio/stratio-cassandra] embeds custom 
 JSON syntax for searching in Lucene indexes:
 {code}
 SELECT * FROM tweets WHERE lucene='{
     filter : {
         type: "range",
         field: "time",
         lower: "2014/04/25",
         upper: "2014/04/1"
     },
     query : {
         type: "phrase",
         field: "body",
         values: ["big", "data"]
     },
     sort : {fields: [ {field: "time", reverse: true} ] }
 }';
 {code}
 Tuplejump [Stargate|http://tuplejump.github.io/stargate/] also uses Stratio's 
 open source JSON syntax:
 {code}
 SELECT name,company FROM PERSON WHERE stargate ='{
     filter: {
         type: "range",
         field: "company",
         lower: "a",
         upper: "p"
     },
     sort: {
         fields: [{field: "name", reverse: true}]
     }
 }';
 {code}
 These syntaxes are validated by the corresponding 2i implementation. This 
 validation is done behind the StorageProxy command distribution. So, as far as I 
 know, there is no way to give rich feedback about syntax errors to CQL users.
 I'm uploading a patch with some changes trying to improve this. I propose 
 adding an empty validation method to SecondaryIndexSearcher that can be 
 overridden by custom 2i implementations:
 {code}
 public void validate(List<IndexExpression> clause) {}
 {code}
 And call it from SelectStatement#getRangeCommand:
 {code}
 ColumnFamilyStore cfs = Keyspace.open(keyspace()).getColumnFamilyStore(columnFamily());
 for (SecondaryIndexSearcher searcher : cfs.indexManager.getIndexSearchersForQuery(expressions))
 {
     try
     {
         searcher.validate(expressions);
     }
     catch (RuntimeException e)
     {
         String exceptionMessage = e.getMessage();
         if (exceptionMessage != null && !exceptionMessage.trim().isEmpty())
             throw new InvalidRequestException("Invalid index expression: " + e.getMessage());
         else
             throw new InvalidRequestException("Invalid index expression");
     }
 }
 {code}
 In this way C* allows custom 2i implementations to give feedback about syntax 
 errors.
 We are currently using these changes in a fork with no problems.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7575) Custom 2i validation

2014-08-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082204#comment-14082204
 ] 

Andrés de la Peña commented on CASSANDRA-7575:
--

[~sbtourist], I suppose that the problem is due to trailing white spaces in the 
patch file. I'm uploading a new version without trailing whitespaces. These are 
the steps I've followed to apply the patch without warnings:
{code}
git clone https://github.com/apache/cassandra.git
git checkout cassandra-2.1
git apply 2i_validation_v3.patch
{code}
Sorry for the inconvenience.

 Custom 2i validation
 

 Key: CASSANDRA-7575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7575
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Andrés de la Peña
Assignee: Andrés de la Peña
Priority: Minor
  Labels: 2i, cql3, secondaryIndex, secondary_index, select
 Fix For: 2.1.1

 Attachments: 2i_validation.patch, 2i_validation_v2.patch, 
 2i_validation_v3.patch


 There are several projects using custom secondary indexes as an extension 
 point to integrate C* with other systems such as Solr or Lucene. The usual 
 approach is to embed third party indexing queries in CQL clauses. 
 For example, [DSE 
 Search|http://www.datastax.com/what-we-offer/products-services/datastax-enterprise]
  embeds Solr syntax this way:
 {code}
 SELECT title FROM solr WHERE solr_query='title:natio*';
 {code}
 [Stratio platform|https://github.com/Stratio/stratio-cassandra] embeds custom 
 JSON syntax for searching in Lucene indexes:
 {code}
 SELECT * FROM tweets WHERE lucene='{
     filter : {
         type: "range",
         field: "time",
         lower: "2014/04/25",
         upper: "2014/04/1"
     },
     query : {
         type: "phrase",
         field: "body",
         values: ["big", "data"]
     },
     sort : {fields: [ {field: "time", reverse: true} ] }
 }';
 {code}
 Tuplejump [Stargate|http://tuplejump.github.io/stargate/] also uses Stratio's 
 open source JSON syntax:
 {code}
 SELECT name,company FROM PERSON WHERE stargate ='{
     filter: {
         type: "range",
         field: "company",
         lower: "a",
         upper: "p"
     },
     sort: {
         fields: [{field: "name", reverse: true}]
     }
 }';
 {code}
 These syntaxes are validated by the corresponding 2i implementation. This 
 validation is done behind the StorageProxy command distribution. So, as far as I 
 know, there is no way to give rich feedback about syntax errors to CQL users.
 I'm uploading a patch with some changes trying to improve this. I propose 
 adding an empty validation method to SecondaryIndexSearcher that can be 
 overridden by custom 2i implementations:
 {code}
 public void validate(List<IndexExpression> clause) {}
 {code}
 And call it from SelectStatement#getRangeCommand:
 {code}
 ColumnFamilyStore cfs = Keyspace.open(keyspace()).getColumnFamilyStore(columnFamily());
 for (SecondaryIndexSearcher searcher : cfs.indexManager.getIndexSearchersForQuery(expressions))
 {
     try
     {
         searcher.validate(expressions);
     }
     catch (RuntimeException e)
     {
         String exceptionMessage = e.getMessage();
         if (exceptionMessage != null && !exceptionMessage.trim().isEmpty())
             throw new InvalidRequestException("Invalid index expression: " + e.getMessage());
         else
             throw new InvalidRequestException("Invalid index expression");
     }
 }
 {code}
 In this way C* allows custom 2i implementations to give feedback about syntax 
 errors.
 We are currently using these changes in a fork with no problems.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5a1c374
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5a1c374
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5a1c374

Branch: refs/heads/cassandra-2.1
Commit: f5a1c374c48898fa934792da680f80ccecf92f30
Parents: b407ebc d667556
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:24:42 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:24:42 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--




git commit: remove seed from stress profile. cleanup yamls. ninja

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 463267372 - d667556d0


remove seed from stress profile. cleanup yamls. ninja


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d667556d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d667556d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d667556d

Branch: refs/heads/cassandra-2.1.0
Commit: d667556d060068a4a93e5331372b92468f0996d8
Parents: 4632673
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:23:02 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:23:02 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-counter-example.yaml
--
diff --git a/tools/cqlstress-counter-example.yaml 
b/tools/cqlstress-counter-example.yaml
index a65080a..cff14b6 100644
--- a/tools/cqlstress-counter-example.yaml
+++ b/tools/cqlstress-counter-example.yaml
@@ -52,12 +52,11 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 
 columnspec:
   - name: name
-clustering: uniform(1..100)
 size: uniform(1..4)
   - name: count
 population: fixed(1)
@@ -79,7 +78,3 @@ insert:
 queries:
simple1: select * from counttest where name = ?
 
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-example.yaml
--
diff --git a/tools/cqlstress-example.yaml b/tools/cqlstress-example.yaml
index a997529..d5c90a2 100644
--- a/tools/cqlstress-example.yaml
+++ b/tools/cqlstress-example.yaml
@@ -62,13 +62,12 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
   - name: name
 size: uniform(1..10)
 population: uniform(1..1M) # the range of unique values to select for 
the field (default is 100Billion)
-  - name: choice
   - name: date
 cluster: uniform(1..4)
   - name: lval
@@ -92,8 +91,3 @@ insert:
 queries:
simple1: select * from typestest where name = ? and choice = ? LIMIT 100
range1: select * from typestest where name = ? and choice = ? and date = ? 
LIMIT 100
-
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-insanity-example.yaml
--
diff --git a/tools/cqlstress-insanity-example.yaml 
b/tools/cqlstress-insanity-example.yaml
index e94c9c3..ef1bb3a 100644
--- a/tools/cqlstress-insanity-example.yaml
+++ b/tools/cqlstress-insanity-example.yaml
@@ -64,26 +64,20 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), population: 
uniform(1..100B)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
-  - name: name
-clustering: uniform(1..4)
   - name: date
-clustering: gaussian(1..20)
+cluster: gaussian(1..20)
   - name: lval
 population: fixed(1)
-  - name: dates
-clustering: uniform(1..100)
-  - name: inets
-clustering: uniform(1..200)
-  - name: value
+
 
 insert:
   partitions: fixed(1)# number of unique partitions to update in a 
single operation
   # if perbatch < 1, multiple batches will be 
used but all partitions will
   # occur in all batches (unless already 
finished); only the row counts will vary
-  pervisit: uniform(1..10)/100K   # ratio of rows each partition should update 
in a single visit 

[1/2] git commit: remove seed from stress profile. cleanup yamls. ninja

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b407ebc89 - f5a1c374c


remove seed from stress profile. cleanup yamls. ninja


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d667556d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d667556d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d667556d

Branch: refs/heads/cassandra-2.1
Commit: d667556d060068a4a93e5331372b92468f0996d8
Parents: 4632673
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:23:02 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:23:02 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-counter-example.yaml
--
diff --git a/tools/cqlstress-counter-example.yaml 
b/tools/cqlstress-counter-example.yaml
index a65080a..cff14b6 100644
--- a/tools/cqlstress-counter-example.yaml
+++ b/tools/cqlstress-counter-example.yaml
@@ -52,12 +52,11 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 
 columnspec:
   - name: name
-clustering: uniform(1..100)
 size: uniform(1..4)
   - name: count
 population: fixed(1)
@@ -79,7 +78,3 @@ insert:
 queries:
simple1: select * from counttest where name = ?
 
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-example.yaml
--
diff --git a/tools/cqlstress-example.yaml b/tools/cqlstress-example.yaml
index a997529..d5c90a2 100644
--- a/tools/cqlstress-example.yaml
+++ b/tools/cqlstress-example.yaml
@@ -62,13 +62,12 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
   - name: name
 size: uniform(1..10)
 population: uniform(1..1M) # the range of unique values to select for 
the field (default is 100Billion)
-  - name: choice
   - name: date
 cluster: uniform(1..4)
   - name: lval
@@ -92,8 +91,3 @@ insert:
 queries:
simple1: select * from typestest where name = ? and choice = ? LIMIT 100
range1: select * from typestest where name = ? and choice = ? and date = ? 
LIMIT 100
-
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-insanity-example.yaml
--
diff --git a/tools/cqlstress-insanity-example.yaml 
b/tools/cqlstress-insanity-example.yaml
index e94c9c3..ef1bb3a 100644
--- a/tools/cqlstress-insanity-example.yaml
+++ b/tools/cqlstress-insanity-example.yaml
@@ -64,26 +64,20 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), population: 
uniform(1..100B)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
-  - name: name
-clustering: uniform(1..4)
   - name: date
-clustering: gaussian(1..20)
+cluster: gaussian(1..20)
   - name: lval
 population: fixed(1)
-  - name: dates
-clustering: uniform(1..100)
-  - name: inets
-clustering: uniform(1..200)
-  - name: value
+
 
 insert:
   partitions: fixed(1)# number of unique partitions to update in a 
single operation
   # if perbatch < 1, multiple batches will be 
used but all partitions will
   # occur in all batches (unless already 
finished); only the row counts will vary
-  pervisit: uniform(1..10)/100K   # ratio of rows each partition should update 
in a single visit to 

[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5a1c374
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5a1c374
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5a1c374

Branch: refs/heads/trunk
Commit: f5a1c374c48898fa934792da680f80ccecf92f30
Parents: b407ebc d667556
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:24:42 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:24:42 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--




[jira] [Commented] (CASSANDRA-7601) Data loss after nodetool taketoken

2014-08-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082226#comment-14082226
 ] 

Aleksey Yeschenko commented on CASSANDRA-7601:
--

bq. Probably a good idea to remove this from 1.2 as well since we have one more 
1.2 release in us.

Agreed.

 Data loss after nodetool taketoken
 --

 Key: CASSANDRA-7601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7601
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tests
 Environment: Mac OSX Mavericks. Ubuntu 14.04
Reporter: Philip Thompson
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.10, 2.1.0

 Attachments: 7601-1.2.txt, 7601-2.0.txt, 7601-2.1.txt, 
 consistent_bootstrap_test.py, taketoken.tar.gz


 The dtest 
 consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test
  is failing on HEAD of the git branches 2.1 and 2.1.0.
 The test performs the following actions:
 - Create a cluster of 3 nodes
 - Create a keyspace with RF 2
 - Take node 3 down
 - Write 980 rows to node 2 with CL ONE
 - Flush node 2
 - Bring node 3 back up
 - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 
 3
 - Check for data loss
 When the check for data loss is performed, only ~725 rows can be read via CL 
 ALL.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: remove seed from stress profile. cleanup yamls. ninja

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 24e1bc768 - 7633d9205


remove seed from stress profile. cleanup yamls. ninja


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d667556d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d667556d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d667556d

Branch: refs/heads/trunk
Commit: d667556d060068a4a93e5331372b92468f0996d8
Parents: 4632673
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:23:02 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:23:02 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-counter-example.yaml
--
diff --git a/tools/cqlstress-counter-example.yaml 
b/tools/cqlstress-counter-example.yaml
index a65080a..cff14b6 100644
--- a/tools/cqlstress-counter-example.yaml
+++ b/tools/cqlstress-counter-example.yaml
@@ -52,12 +52,11 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 
 columnspec:
   - name: name
-clustering: uniform(1..100)
 size: uniform(1..4)
   - name: count
 population: fixed(1)
@@ -79,7 +78,3 @@ insert:
 queries:
simple1: select * from counttest where name = ?
 
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-example.yaml
--
diff --git a/tools/cqlstress-example.yaml b/tools/cqlstress-example.yaml
index a997529..d5c90a2 100644
--- a/tools/cqlstress-example.yaml
+++ b/tools/cqlstress-example.yaml
@@ -62,13 +62,12 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), identity: 
uniform(1..1024)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
   - name: name
 size: uniform(1..10)
 population: uniform(1..1M) # the range of unique values to select for 
the field (default is 100Billion)
-  - name: choice
   - name: date
 cluster: uniform(1..4)
   - name: lval
@@ -92,8 +91,3 @@ insert:
 queries:
simple1: select * from typestest where name = ? and choice = ? LIMIT 100
range1: select * from typestest where name = ? and choice = ? and date = ? 
LIMIT 100
-
-#
-# In order to generate data consistently we need something to generate a 
unique key for this schema profile.
-#
-seed: changing this string changes the generated data. its hashcode is used as 
the random seed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d667556d/tools/cqlstress-insanity-example.yaml
--
diff --git a/tools/cqlstress-insanity-example.yaml 
b/tools/cqlstress-insanity-example.yaml
index e94c9c3..ef1bb3a 100644
--- a/tools/cqlstress-insanity-example.yaml
+++ b/tools/cqlstress-insanity-example.yaml
@@ -64,26 +64,20 @@ table_definition: |
 #
 #  If preceded by ~, the distribution is inverted
 #
-# Defaults for all columns are size: uniform(1..256), population: 
uniform(1..100B)
+# Defaults for all columns are size: uniform(4..8), population: 
uniform(1..100B), cluster: fixed(1)
 #
 columnspec:
-  - name: name
-clustering: uniform(1..4)
   - name: date
-clustering: gaussian(1..20)
+cluster: gaussian(1..20)
   - name: lval
 population: fixed(1)
-  - name: dates
-clustering: uniform(1..100)
-  - name: inets
-clustering: uniform(1..200)
-  - name: value
+
 
 insert:
   partitions: fixed(1)# number of unique partitions to update in a 
single operation
   # if perbatch < 1, multiple batches will be 
used but all partitions will
   # occur in all batches (unless already 
finished); only the row counts will vary
-  pervisit: uniform(1..10)/100K   # ratio of rows each partition should update 
in a single visit to the partition,
+  

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7633d920
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7633d920
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7633d920

Branch: refs/heads/trunk
Commit: 7633d9205889a0894bc0587522197741bbf7adbf
Parents: 24e1bc7 f5a1c37
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:25:21 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:25:21 2014 -0400

--
 tools/cqlstress-counter-example.yaml|  7 +---
 tools/cqlstress-example.yaml|  8 +
 tools/cqlstress-insanity-example.yaml   | 18 +++
 .../apache/cassandra/stress/StressProfile.java  | 34 +++-
 .../org/apache/cassandra/stress/StressYaml.java |  1 -
 5 files changed, 25 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7633d920/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--



[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082234#comment-14082234
 ] 

Aleksey Yeschenko commented on CASSANDRA-5959:
--

[~snazy] they will be atomic, because C* will merge them all into a single 
Mutation before applying it (so long as they have the same partition key). And 
you can assign different timestamps to different statements to avoid 'the 
issue', and it will still be atomic.
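For concreteness, a minimal sketch against the {{results}} table from this ticket (timestamps are illustrative, and {{index}} is quoted because INDEX is a reserved CQL word). Every statement names the same partition key, so the batch collapses into a single Mutation while each write keeps its own timestamp:
{code}
BEGIN UNLOGGED BATCH
    INSERT INTO results (row_id, "index", value) VALUES ('my_row_id', 0, 'text0') USING TIMESTAMP 1000;
    INSERT INTO results (row_id, "index", value) VALUES ('my_row_id', 1, 'text1') USING TIMESTAMP 1001;
    DELETE FROM results USING TIMESTAMP 1002 WHERE row_id = 'my_row_id' AND "index" = 2;
APPLY BATCH;
{code}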

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Make default stress batches logged

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 d667556d0 - 1921b9859


Make default stress batches logged


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1921b985
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1921b985
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1921b985

Branch: refs/heads/cassandra-2.1.0
Commit: 1921b98599aa5190c74737c4e8a1092c63f842dc
Parents: d667556
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:32:31 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:32:31 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1921b985/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java 
b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
index f24ec8c..4e09775 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
@@ -331,7 +331,7 @@ public class StressProfile implements Serializable
 partitions = OptionDistribution.get(!insert.containsKey("partitions") ? "fixed(1)" : insert.remove("partitions"));
 pervisit = OptionRatioDistribution.get(!insert.containsKey("pervisit") ? "fixed(1)/1" : insert.remove("pervisit"));
 perbatch = OptionRatioDistribution.get(!insert.containsKey("perbatch") ? "fixed(1)/1" : insert.remove("perbatch"));
-batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.UNLOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
+batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.LOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
 if (!insert.isEmpty())
     throw new IllegalArgumentException("Unrecognised insert option(s): " + insert);
 



[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/169b1cf1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/169b1cf1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/169b1cf1

Branch: refs/heads/cassandra-2.1
Commit: 169b1cf19e584f106512a545f206eccee09cc7be
Parents: f5a1c37 1921b98
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:33:04 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:33:04 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e3fa11bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e3fa11bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e3fa11bf

Branch: refs/heads/trunk
Commit: e3fa11bf4ab5f29a78ca17a31841abc949335ece
Parents: 7633d92 169b1cf
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:33:46 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:33:46 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e3fa11bf/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--



[1/3] git commit: Make default stress batches logged

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7633d9205 - e3fa11bf4


Make default stress batches logged


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1921b985
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1921b985
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1921b985

Branch: refs/heads/trunk
Commit: 1921b98599aa5190c74737c4e8a1092c63f842dc
Parents: d667556
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:32:31 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:32:31 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1921b985/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java 
b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
index f24ec8c..4e09775 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
@@ -331,7 +331,7 @@ public class StressProfile implements Serializable
 partitions = OptionDistribution.get(!insert.containsKey("partitions") ? "fixed(1)" : insert.remove("partitions"));
 pervisit = OptionRatioDistribution.get(!insert.containsKey("pervisit") ? "fixed(1)/1" : insert.remove("pervisit"));
 perbatch = OptionRatioDistribution.get(!insert.containsKey("perbatch") ? "fixed(1)/1" : insert.remove("perbatch"));
-batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.UNLOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
+batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.LOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
 if (!insert.isEmpty())
     throw new IllegalArgumentException("Unrecognised insert option(s): " + insert);
 



[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/169b1cf1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/169b1cf1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/169b1cf1

Branch: refs/heads/trunk
Commit: 169b1cf19e584f106512a545f206eccee09cc7be
Parents: f5a1c37 1921b98
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:33:04 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:33:04 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[1/2] git commit: Make default stress batches logged

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f5a1c374c - 169b1cf19


Make default stress batches logged


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1921b985
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1921b985
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1921b985

Branch: refs/heads/cassandra-2.1
Commit: 1921b98599aa5190c74737c4e8a1092c63f842dc
Parents: d667556
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 09:32:31 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 09:32:31 2014 -0400

--
 tools/stress/src/org/apache/cassandra/stress/StressProfile.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1921b985/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java 
b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
index f24ec8c..4e09775 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java
@@ -331,7 +331,7 @@ public class StressProfile implements Serializable
 partitions = OptionDistribution.get(!insert.containsKey("partitions") ? "fixed(1)" : insert.remove("partitions"));
 pervisit = OptionRatioDistribution.get(!insert.containsKey("pervisit") ? "fixed(1)/1" : insert.remove("pervisit"));
 perbatch = OptionRatioDistribution.get(!insert.containsKey("perbatch") ? "fixed(1)/1" : insert.remove("perbatch"));
-batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.UNLOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
+batchType = !insert.containsKey("batchtype") ? BatchStatement.Type.LOGGED : BatchStatement.Type.valueOf(insert.remove("batchtype"));
 if (!insert.isEmpty())
     throw new IllegalArgumentException("Unrecognised insert option(s): " + insert);
 



[jira] [Commented] (CASSANDRA-7593) Errors when upgrading through several versions to 2.1

2014-08-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082263#comment-14082263
 ] 

Jonathan Ellis commented on CASSANDRA-7593:
---

[~rhatch] let's open a separate ticket for that.

 Errors when upgrading through several versions to 2.1
 -

 Key: CASSANDRA-7593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7593
 Project: Cassandra
  Issue Type: Bug
 Environment: java 1.7
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.0

 Attachments: 0001-keep-clusteringSize-in-CompoundComposite.patch, 
 7593-v2.txt, 7593.txt


 I'm seeing two different errors cropping up in the dtest which upgrades a 
 cluster through several versions.
 This is the more common error:
 {noformat}
 ERROR [GossipStage:10] 2014-07-22 13:14:30,028 CassandraDaemon.java:168 - 
 Exception in thread Thread[GossipStage:10,5,main]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.shouldInclude(SliceQueryFilter.java:347)
  ~[main/:na]
 at 
 org.apache.cassandra.db.filter.QueryFilter.shouldInclude(QueryFilter.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
  ~[main/:na]
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:59)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.readLocally(SelectStatement.java:293)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:302)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:263)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.getPreferredIP(SystemKeyspace.java:514)
  ~[main/:na]
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.init(OutboundTcpConnectionPool.java:51)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:522)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:536)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:689)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:663)
  ~[main/:na]
 at 
 org.apache.cassandra.service.EchoVerbHandler.doVerb(EchoVerbHandler.java:40) 
 ~[main/:na]
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_60]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
 {noformat}
 The same test sometimes fails with this exception instead:
 {noformat}
 ERROR [CompactionExecutor:4] 2014-07-22 16:18:21,008 CassandraDaemon.java:168 
 - Exception in thread Thread[CompactionExecutor:4,1,RMI Runtime]
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7059d3e9 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@108f1504[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 95]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) 
 ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.execute(ScheduledThreadPoolExecutor.java:619)
  ~[na:1.7.0_60]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.scheduleTidy(SSTableReader.java:628)
  ~[main/:na]
 at 
 

[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082264#comment-14082264
 ] 

Robert Stupp commented on CASSANDRA-5959:
-

But there's no way to perform a replace-partition mutation (except with 
{{DELETE ... USING TIMESTAMP foo-1; INSERT ... USING TIMESTAMP foo;}}).
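Spelled out against the ticket's {{results}} table (timestamps are illustrative, {{index}} quoted because it is a reserved word), that workaround would look roughly like:
{code}
-- shadow everything previously in the partition, one tick in the past
DELETE FROM results USING TIMESTAMP 999 WHERE row_id = 'my_row_id';

-- then write the replacement rows at the current timestamp
INSERT INTO results (row_id, "index", value) VALUES ('my_row_id', 0, 'new0') USING TIMESTAMP 1000;
INSERT INTO results (row_id, "index", value) VALUES ('my_row_id', 1, 'new1') USING TIMESTAMP 1000;
{code}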

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-7665) nodetool scrub fails on system schema with UDTs

2014-08-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-7665:
-

Assignee: Marcus Eriksson

 nodetool scrub fails on system schema with UDTs
 ---

 Key: CASSANDRA-7665
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7665
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0-rc4
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson

 [apache-cassandra-2.1.0-rc4]$ bin/cqlsh
 Connected to Test Cluster at 127.0.0.1:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.0-rc4 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 cqlsh use test;
 cqlsh:test CREATE TYPE point_t (x double, y double);
 cqlsh:test exit
 [apache-cassandra-2.1.0-rc4]$bin/nodetool scrub
 INFO  12:34:57 Scrubbing 
 SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-1-Data.db')
  (34135 bytes)
 INFO  12:34:57 Scrub of 
 SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-1-Data.db')
  complete: 2 rows in new sstable and 0 empty (tombstoned) rows dropped
 INFO  12:34:57 Scrubbing 
 SSTableReader(path='/apache-cassandra-2.1.0-rc4/bin/../data/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-5-Data.db')
  (12515 bytes)
 WARN  12:34:57 Error reading row (stacktrace follows):
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.serializers.MarshalException: Not enough bytes to read a 
 set
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:139)
  ~[apache-cassandra-2.1.0-rc4.jar:2.1.0-rc4]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7593) Min/max column name collection broken with range tombstones

2014-08-01 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-7593:
---

Summary: Min/max column name collection broken with range tombstones  (was: 
Errors when upgrading through several versions to 2.1)

 Min/max column name collection broken with range tombstones
 ---

 Key: CASSANDRA-7593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7593
 Project: Cassandra
  Issue Type: Bug
 Environment: java 1.7
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.0

 Attachments: 0001-keep-clusteringSize-in-CompoundComposite.patch, 
 7593-v2.txt, 7593.txt


 I'm seeing two different errors cropping up in the dtest which upgrades a 
 cluster through several versions.
 This is the more common error:
 {noformat}
 ERROR [GossipStage:10] 2014-07-22 13:14:30,028 CassandraDaemon.java:168 - 
 Exception in thread Thread[GossipStage:10,5,main]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.shouldInclude(SliceQueryFilter.java:347)
  ~[main/:na]
 at 
 org.apache.cassandra.db.filter.QueryFilter.shouldInclude(QueryFilter.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1873)
  ~[main/:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1681)
  ~[main/:na]
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:345) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:59)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.readLocally(SelectStatement.java:293)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:302)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:60)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:263)
  ~[main/:na]
 at 
 org.apache.cassandra.db.SystemKeyspace.getPreferredIP(SystemKeyspace.java:514)
  ~[main/:na]
 at 
 org.apache.cassandra.net.OutboundTcpConnectionPool.init(OutboundTcpConnectionPool.java:51)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnectionPool(MessagingService.java:522)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:536)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:689)
  ~[main/:na]
 at 
 org.apache.cassandra.net.MessagingService.sendReply(MessagingService.java:663)
  ~[main/:na]
 at 
 org.apache.cassandra.service.EchoVerbHandler.doVerb(EchoVerbHandler.java:40) 
 ~[main/:na]
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
 ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_60]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_60]
 {noformat}
 The same test sometimes fails with this exception instead:
 {noformat}
 ERROR [CompactionExecutor:4] 2014-07-22 16:18:21,008 CassandraDaemon.java:168 
 - Exception in thread Thread[CompactionExecutor:4,1,RMI Runtime]
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7059d3e9 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@108f1504[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 95]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) 
 ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
  ~[na:1.7.0_60]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.execute(ScheduledThreadPoolExecutor.java:619)
  ~[na:1.7.0_60]
 at 
 

[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082272#comment-14082272
 ] 

Aleksey Yeschenko commented on CASSANDRA-5959:
--

Yeah, but you had to do the exact same thing using Thrift, including the same 
timestamp trick, just using a different API.

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7511) Commit log grows infinitely after truncate

2014-08-01 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-7511:
---

Attachment: 7511-v3-remove-renewMemtable.txt

Also remove renewMemtable from DataTracker since it is now dead code.

 Commit log grows infinitely after truncate
 --

 Key: CASSANDRA-7511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7511
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.5, Oracle Java 7u60, C* 2.0.6, 2.0.9, including 
 earlier 1.0.* versions.
Reporter: Viktor Jevdokimov
Assignee: Jeremiah Jordan
Priority: Minor
  Labels: commitlog
 Fix For: 2.0.10

 Attachments: 7511-2.0-v2.txt, 7511-v3-remove-renewMemtable.txt, 
 7511-v3-test.txt, 7511-v3.txt, 7511.txt


 The commit log grows infinitely after a CF truncate operation via cassandra-cli, 
 regardless of whether the CF receives writes thereafter.
 The affected CFs can be non-CQL Standard or Super column families. Creation of snapshots 
 after truncate is turned off.
 The commit log may start growing promptly or later, on only a few nodes or on 
 all nodes at once.
 Nothing special in the system log. No idea how to reproduce.
 After a rolling restart the commit logs are cleared and back to normal. It is just 
 annoying to do a rolling restart after each truncate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082284#comment-14082284
 ] 

Robert Stupp commented on CASSANDRA-5959:
-

bq. same thing using Thrift

Okay. So this would be a completely new feature.
One last question before I leave this thing alone: are multiple DMLs against 
the same partition but with different timestamps merged into a single mutation? I 
guess not, but I'm not sure (I didn't dig that deep into the code yet).

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7511) Commit log grows infinitely after truncate (when auto_snapshot is false)

2014-08-01 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-7511:
---

Summary: Commit log grows infinitely after truncate (when auto_snapshot is 
false)  (was: Commit log grows infinitely after truncate)

 Commit log grows infinitely after truncate (when auto_snapshot is false)
 

 Key: CASSANDRA-7511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7511
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.5, Oracle Java 7u60, C* 2.0.6, 2.0.9, including 
 earlier 1.0.* versions.
Reporter: Viktor Jevdokimov
Assignee: Jeremiah Jordan
Priority: Minor
  Labels: commitlog
 Fix For: 2.0.10

 Attachments: 7511-2.0-v2.txt, 7511-v3-remove-renewMemtable.txt, 
 7511-v3-test.txt, 7511-v3.txt, 7511.txt


 The commit log grows infinitely after a CF truncate operation via cassandra-cli, 
 regardless of whether the CF receives writes thereafter.
 The affected CFs can be non-CQL Standard or Super column families. Creation of snapshots 
 after truncate is turned off.
 The commit log may start growing promptly or later, on only a few nodes or on 
 all nodes at once.
 Nothing special in the system log. No idea how to reproduce.
 After a rolling restart the commit logs are cleared and back to normal. It is just 
 annoying to do a rolling restart after each truncate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082289#comment-14082289
 ] 

Aleksey Yeschenko commented on CASSANDRA-5959:
--

bq. Are multiple DMLs against the same partition but different timestamps 
merged into a single mutation?

Yup, they absolutely are, each retaining the specified timestamp. In fact, all 
the DMLs that 1) belong to the same keyspace and 2) have the same partition key 
get merged into a single mutation.
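
A short CQL sketch of what that merging means in practice, assuming a hypothetical 
table {{ks.wide (pk text, ck int, val text, PRIMARY KEY (pk, ck))}}; the table and 
timestamps are illustrative, not from this ticket. Both statements share the 
keyspace and the partition key 'k1', so they are applied as one mutation, each 
cell retaining its own timestamp:

{code}
BEGIN UNLOGGED BATCH
    INSERT INTO ks.wide (pk, ck, val) VALUES ('k1', 0, 'a') USING TIMESTAMP 1000;
    INSERT INTO ks.wide (pk, ck, val) VALUES ('k1', 1, 'b') USING TIMESTAMP 2000;
APPLY BATCH;
{code}

A statement against a different partition key (or a different keyspace) in the 
same batch would end up in a separate mutation.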

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7511) Always flush on TRUNCATE

2014-08-01 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-7511:
---

Summary: Always flush on TRUNCATE  (was: Commit log grows infinitely after 
truncate (when auto_snapshot is false))

 Always flush on TRUNCATE
 

 Key: CASSANDRA-7511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7511
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.5, Oracle Java 7u60, C* 2.0.6, 2.0.9, including 
 earlier 1.0.* versions.
Reporter: Viktor Jevdokimov
Assignee: Jeremiah Jordan
Priority: Minor
  Labels: commitlog
 Fix For: 2.0.10

 Attachments: 7511-2.0-v2.txt, 7511-v3-remove-renewMemtable.txt, 
 7511-v3-test.txt, 7511-v3.txt, 7511.txt


 Commit log grows infinitely after a CF truncate operation via cassandra-cli, 
 regardless of whether the CF receives writes thereafter.
 CFs can be non-CQL Standard or Super column type. Creation of snapshots 
 after truncate is turned off.
 The commit log may start growing right away or only later, on a few nodes or 
 on all nodes at once.
 Nothing special in the system log. No idea how to reproduce.
 After a rolling restart the commit logs are cleared and back to normal. It is 
 just annoying to do a rolling restart after each truncate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7511) Always flush on TRUNCATE

2014-08-01 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082295#comment-14082295
 ] 

Benedict commented on CASSANDRA-7511:
-

Looking at 2.1, it is actually still affected by this bug. I don't mind which 
solution we go for in 2.1: always flush, or grab the last replay position from 
the memtable (either is pretty trivial).

 Always flush on TRUNCATE
 

 Key: CASSANDRA-7511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7511
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.5, Oracle Java 7u60, C* 2.0.6, 2.0.9, including 
 earlier 1.0.* versions.
Reporter: Viktor Jevdokimov
Assignee: Jeremiah Jordan
Priority: Minor
  Labels: commitlog
 Fix For: 2.0.10

 Attachments: 7511-2.0-v2.txt, 7511-v3-remove-renewMemtable.txt, 
 7511-v3-test.txt, 7511-v3.txt, 7511.txt


 Commit log grows infinitely after a CF truncate operation via cassandra-cli, 
 regardless of whether the CF receives writes thereafter.
 CFs can be non-CQL Standard or Super column type. Creation of snapshots 
 after truncate is turned off.
 The commit log may start growing right away or only later, on a few nodes or 
 on all nodes at once.
 Nothing special in the system log. No idea how to reproduce.
 After a rolling restart the commit logs are cleared and back to normal. It is 
 just annoying to do a rolling restart after each truncate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7568) Replacing a dead node using replace_address fails

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7568:


Fix Version/s: 2.1.0

 Replacing a dead node using replace_address fails
 -

 Key: CASSANDRA-7568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Ala' Alkhaldi
Assignee: Brandon Williams
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1.0

 Attachments: 7568.txt


 Failed assertion
 {code}
 ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception 
 encountered during startup
 java.lang.AssertionError: Expected 1 endpoint but found 0
 at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222)
  ~[main/:na]
 at 
 org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) 
 ~[main/:na]
 at 
 org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:626)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:511)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
 [main/:na]
 {code}
 To replicate the bug run the replace_address_test.replace_stopped_node_test 
 dtest



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7568) Replacing a dead node using replace_address fails

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7568:


Attachment: 7568.txt

Don't use consistent range movement when replacing, since the way it works is 
impossible to satisfy when the node being replaced is dead.

 Replacing a dead node using replace_address fails
 -

 Key: CASSANDRA-7568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Ala' Alkhaldi
Assignee: Brandon Williams
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1.0

 Attachments: 7568.txt


 Failed assertion
 {code}
 ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception 
 encountered during startup
 java.lang.AssertionError: Expected 1 endpoint but found 0
 at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222)
  ~[main/:na]
 at 
 org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) 
 ~[main/:na]
 at 
 org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:626)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:511)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
 [main/:na]
 {code}
 To replicate the bug run the replace_address_test.replace_stopped_node_test 
 dtest



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix truncate to always call flush on table

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 1879d9928 -> 60eab4e45


Fix truncate to always call flush on table

Patch by Jeremiah Jordan; reviewed by tjake for (CASSANDRA-7511)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60eab4e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60eab4e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60eab4e4

Branch: refs/heads/cassandra-2.0
Commit: 60eab4e45e18d6b08350187acf56deed9654fda7
Parents: 1879d99
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 10:30:48 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 10:30:48 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 29 +++---
 .../org/apache/cassandra/db/DataTracker.java| 18 ---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 6 files changed, 44 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1fcb556..33bab82 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix truncate to always flush (CASSANDRA-7511)
  * Remove shuffle and taketoken (CASSANDRA-7601)
  * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
  * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index bf0307b..d4c1f26 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1131,6 +1131,11 @@ public class DatabaseDescriptor
 return conf.auto_snapshot;
 }
 
+@VisibleForTesting
+public static void setAutoSnapshot(boolean autoSnapshot) {
+conf.auto_snapshot = autoSnapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2824924..a3c080a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2002,31 +2002,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // position in the System keyspace.
 logger.debug("truncating {}", name);
 
-if (DatabaseDescriptor.isAutoSnapshot())
-{
-// flush the CF being truncated before forcing the new segment
-forceBlockingFlush();
-
-// sleep a little to make sure that our truncatedAt comes after 
any sstable
-// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
-Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
-}
+// flush the CF being truncated before forcing the new segment
+forceBlockingFlush();
 
-// nuke the memtable data w/o writing to disk first
-Keyspace.switchLock.writeLock().lock();
-try
-{
-for (ColumnFamilyStore cfs : concatWithIndexes())
-{
-Memtable mt = cfs.getMemtableThreadSafe();
-if (!mt.isClean())
-mt.cfs.data.renewMemtable();
-}
-}
-finally
-{
-Keyspace.switchLock.writeLock().unlock();
-}
+// sleep a little to make sure that our truncatedAt comes after any 
sstable
+// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 
 Runnable truncateRunnable = new Runnable()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 

[jira] [Commented] (CASSANDRA-5959) CQL3 support for multi-column insert in a single operation (Batch Insert / Batch Mutate)

2014-08-01 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082303#comment-14082303
 ] 

Robert Stupp commented on CASSANDRA-5959:
-

Interesting. Thx :)

 CQL3 support for multi-column insert in a single operation (Batch Insert / 
 Batch Mutate)
 

 Key: CASSANDRA-5959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5959
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Drivers (now out of tree)
Reporter: Les Hazlewood
  Labels: CQL

 h3. Impetus for this Request
 (from the original [question on 
 StackOverflow|http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque]):
 I want to insert a single row with 50,000 columns into Cassandra 1.2.9. 
 Before inserting, I have all the data for the entire row ready to go (in 
 memory):
 {code}
 +-+--+--+--+--+---+
 | | 0| 1| 2| ...  | 4 |
 | row_id  +--+--+--+--+---+
 | | text | text | text | ...  | text  |
 +-+--+--+--|--+---+
 {code}
 The column names are integers, allowing slicing for pagination. The column 
 values are a value at that particular index.
 CQL3 table definition:
 {code}
 create table results (
 row_id text,
 index int,
 value text,
 primary key (row_id, index)
 ) 
 with compact storage;
 {code}
 As I already have the row_id and all 50,000 name/value pairs in memory, I 
 just want to insert a single row into Cassandra in a single request/operation 
 so it is as fast as possible.
 The only thing I can seem to find is to execute the following 50,000 times:
 {code}
 INSERT INTO results (row_id, index, value) values (my_row_id, ?, ?);
 {code}
 where the first {{?}} is an index counter ({{i}}) and the second {{?}} is 
 the text value to store at location {{i}}.
 With the Datastax Java Driver client and C* server on the same development 
 machine, this took a full minute to execute.
 Oddly enough, the same 50,000 insert statements in a [Datastax Java Driver 
 Batch|http://www.datastax.com/drivers/java/apidocs/com/datastax/driver/core/querybuilder/QueryBuilder.html#batch(com.datastax.driver.core.Statement...)]
  on the same machine took 7.5 minutes.  I thought batches were supposed to be 
 _faster_ than individual inserts?
 We tried instead with a Thrift client (Astyanax) and the same insert via a 
 [MutationBatch|http://netflix.github.io/astyanax/javadoc/com/netflix/astyanax/MutationBatch.html].
   This took _235 milliseconds_.
 h3. Feature Request
 As a result of this performance testing, this issue is to request that CQL3 
 support batch mutation operations as a single operation (statement) to ensure 
 the same speed/performance benefits as existing Thrift clients.
 Example suggested syntax (based on the above example table/column family):
 {code}
 insert into results (row_id, (index,value)) values 
 ((0,text0), (1,text1), (2,text2), ..., (N,textN));
 {code}
 Each value in the {{values}} clause is a tuple.  The first tuple element is 
 the column name, the second tuple element is the column value.  This seems to 
 be the most simple/accurate representation of what happens during a batch 
 insert/mutate.
 Not having this CQL feature forced us to remove the Datastax Java Driver 
 (which we liked) in favor of Astyanax because Astyanax supports this 
 behavior.  We desire feature/performance parity between Thrift and 
 CQL3/Datastax Java Driver, so we hope this request improves both CQL3 and the 
 Driver.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7663) Removing a seed causes previously removed seeds to reappear

2014-08-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082315#comment-14082315
 ] 

Jonathan Ellis commented on CASSANDRA-7663:
---

+1

 Removing a seed causes previously removed seeds to reappear
 ---

 Key: CASSANDRA-7663
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7663
 Project: Cassandra
  Issue Type: Bug
Reporter: Richard Low
Assignee: Brandon Williams
 Fix For: 1.2.19, 2.0.10

 Attachments: 7663.txt


 When you remove a seed from a cluster, Gossiper.removeEndpoint ensures it is 
 removed from the seed list. However, it also resets the seed list to be the 
 original list, which would bring back any previously removed seeds. What is 
 the reasoning for having the call to buildSeedsList()? If it wasn’t there 
 then I think the problem would be solved.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7568) Replacing a dead node using replace_address fails

2014-08-01 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082313#comment-14082313
 ] 

T Jake Luciani commented on CASSANDRA-7568:
---

+1

 Replacing a dead node using replace_address fails
 -

 Key: CASSANDRA-7568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Ala' Alkhaldi
Assignee: Brandon Williams
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.1.0

 Attachments: 7568.txt


 Failed assertion
 {code}
 ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception 
 encountered during startup
 java.lang.AssertionError: Expected 1 endpoint but found 0
 at 
 org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222)
  ~[main/:na]
 at 
 org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) 
 ~[main/:na]
 at 
 org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:626)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:511)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
 [main/:na]
 {code}
 To replicate the bug run the replace_address_test.replace_stopped_node_test 
 dtest



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7638:


Attachment: (was: 7638.txt)

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.10


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7638:


Attachment: 7638.txt

If we're going to do it in 2.1, we should do it in 2.1.0 so we don't change 
formats in a minor; having a new format in a major isn't too surprising.  
Rebased Yuki's approach to 2.1 and added a shell check to detect JVMs before u25.

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.1.0

 Attachments: 7638.txt


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7638:


Fix Version/s: (was: 2.0.10)
   2.1.0

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.1.0

 Attachments: 7638.txt


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/2] git commit: Fix truncate to always call flush on table

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 1921b9859 -> 18ce8c72a


Fix truncate to always call flush on table

Patch by Jeremiah Jordan; reviewed by tjake for (CASSANDRA-7511)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60eab4e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60eab4e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60eab4e4

Branch: refs/heads/cassandra-2.1.0
Commit: 60eab4e45e18d6b08350187acf56deed9654fda7
Parents: 1879d99
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 10:30:48 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 10:30:48 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 29 +++---
 .../org/apache/cassandra/db/DataTracker.java| 18 ---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 6 files changed, 44 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1fcb556..33bab82 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix truncate to always flush (CASSANDRA-7511)
  * Remove shuffle and taketoken (CASSANDRA-7601)
  * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
  * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index bf0307b..d4c1f26 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1131,6 +1131,11 @@ public class DatabaseDescriptor
 return conf.auto_snapshot;
 }
 
+@VisibleForTesting
+public static void setAutoSnapshot(boolean autoSnapshot) {
+conf.auto_snapshot = autoSnapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2824924..a3c080a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2002,31 +2002,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // position in the System keyspace.
 logger.debug("truncating {}", name);
 
-if (DatabaseDescriptor.isAutoSnapshot())
-{
-// flush the CF being truncated before forcing the new segment
-forceBlockingFlush();
-
-// sleep a little to make sure that our truncatedAt comes after 
any sstable
-// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
-Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
-}
+// flush the CF being truncated before forcing the new segment
+forceBlockingFlush();
 
-// nuke the memtable data w/o writing to disk first
-Keyspace.switchLock.writeLock().lock();
-try
-{
-for (ColumnFamilyStore cfs : concatWithIndexes())
-{
-Memtable mt = cfs.getMemtableThreadSafe();
-if (!mt.isClean())
-mt.cfs.data.renewMemtable();
-}
-}
-finally
-{
-Keyspace.switchLock.writeLock().unlock();
-}
+// sleep a little to make sure that our truncatedAt comes after any 
sstable
+// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 
 Runnable truncateRunnable = new Runnable()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 

[3/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f0413be
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f0413be
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f0413be

Branch: refs/heads/cassandra-2.1
Commit: 5f0413bef546b3489525709cececd1947be34b67
Parents: 169b1cf 18ce8c7
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 11:17:23 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 11:17:23 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 5 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/test/conf/cassandra.yaml
--



[1/3] git commit: Fix truncate to always call flush on table

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 169b1cf19 -> 5f0413bef


Fix truncate to always call flush on table

Patch by Jeremiah Jordan; reviewed by tjake for (CASSANDRA-7511)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60eab4e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60eab4e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60eab4e4

Branch: refs/heads/cassandra-2.1
Commit: 60eab4e45e18d6b08350187acf56deed9654fda7
Parents: 1879d99
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 10:30:48 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 10:30:48 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 29 +++---
 .../org/apache/cassandra/db/DataTracker.java| 18 ---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 6 files changed, 44 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1fcb556..33bab82 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix truncate to always flush (CASSANDRA-7511)
  * Remove shuffle and taketoken (CASSANDRA-7601)
  * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
  * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index bf0307b..d4c1f26 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1131,6 +1131,11 @@ public class DatabaseDescriptor
 return conf.auto_snapshot;
 }
 
+@VisibleForTesting
+public static void setAutoSnapshot(boolean autoSnapshot) {
+conf.auto_snapshot = autoSnapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2824924..a3c080a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2002,31 +2002,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // position in the System keyspace.
 logger.debug("truncating {}", name);
 
-if (DatabaseDescriptor.isAutoSnapshot())
-{
-// flush the CF being truncated before forcing the new segment
-forceBlockingFlush();
-
-// sleep a little to make sure that our truncatedAt comes after 
any sstable
-// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
-Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
-}
+// flush the CF being truncated before forcing the new segment
+forceBlockingFlush();
 
-// nuke the memtable data w/o writing to disk first
-Keyspace.switchLock.writeLock().lock();
-try
-{
-for (ColumnFamilyStore cfs : concatWithIndexes())
-{
-Memtable mt = cfs.getMemtableThreadSafe();
-if (!mt.isClean())
-mt.cfs.data.renewMemtable();
-}
-}
-finally
-{
-Keyspace.switchLock.writeLock().unlock();
-}
+// sleep a little to make sure that our truncatedAt comes after any 
sstable
+// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 
 Runnable truncateRunnable = new Runnable()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 

[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-01 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/ColumnFamilyStore.java
src/java/org/apache/cassandra/db/DataTracker.java
test/unit/org/apache/cassandra/db/CommitLogTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18ce8c72
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18ce8c72
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18ce8c72

Branch: refs/heads/cassandra-2.1
Commit: 18ce8c72a355949ffd8cdc8c083f3edf85c449d1
Parents: 1921b98 60eab4e
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 11:16:57 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 11:16:57 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 5 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18ce8c72/CHANGES.txt
--
diff --cc CHANGES.txt
index fcd,33bab82..7823f87
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,11 +1,49 @@@
 -2.0.10
 - * Fix truncate to always flush (CASSANDRA-7511)
 +2.1.0-final
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use keyspace updated schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata 
(CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries 
(CASSANDRA-7626)
 +Merged from 2.0:
++ * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications 
(CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh 
(CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18ce8c72/src/java/org/apache/cassandra/config/DatabaseDescriptor.java

[1/4] git commit: Fix truncate to always call flush on table

2014-08-01 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk e3fa11bf4 -> 87321cd6d


Fix truncate to always call flush on table

Patch by Jeremiah Jordan; reviewed by tjake for (CASSANDRA-7511)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60eab4e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60eab4e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60eab4e4

Branch: refs/heads/trunk
Commit: 60eab4e45e18d6b08350187acf56deed9654fda7
Parents: 1879d99
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 10:30:48 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 10:30:48 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 29 +++---
 .../org/apache/cassandra/db/DataTracker.java| 18 ---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 6 files changed, 44 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1fcb556..33bab82 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.10
+ * Fix truncate to always flush (CASSANDRA-7511)
  * Remove shuffle and taketoken (CASSANDRA-7601)
  * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
  * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index bf0307b..d4c1f26 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1131,6 +1131,11 @@ public class DatabaseDescriptor
 return conf.auto_snapshot;
 }
 
+@VisibleForTesting
+public static void setAutoSnapshot(boolean autoSnapshot) {
+conf.auto_snapshot = autoSnapshot;
+}
+
 public static boolean isAutoBootstrap()
 {
 return conf.auto_bootstrap;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2824924..a3c080a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2002,31 +2002,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // position in the System keyspace.
 logger.debug("truncating {}", name);
 
-if (DatabaseDescriptor.isAutoSnapshot())
-{
-// flush the CF being truncated before forcing the new segment
-forceBlockingFlush();
-
-// sleep a little to make sure that our truncatedAt comes after 
any sstable
-// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
-Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
-}
+// flush the CF being truncated before forcing the new segment
+forceBlockingFlush();
 
-// nuke the memtable data w/o writing to disk first
-Keyspace.switchLock.writeLock().lock();
-try
-{
-for (ColumnFamilyStore cfs : concatWithIndexes())
-{
-Memtable mt = cfs.getMemtableThreadSafe();
-if (!mt.isClean())
-mt.cfs.data.renewMemtable();
-}
-}
-finally
-{
-Keyspace.switchLock.writeLock().unlock();
-}
+// sleep a little to make sure that our truncatedAt comes after any 
sstable
+// that was part of the flushed we forced; otherwise on a tie, it 
won't get deleted.
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 
 Runnable truncateRunnable = new Runnable()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/60eab4e4/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index a9eef98..a0f880a 100644

[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-01 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/ColumnFamilyStore.java
src/java/org/apache/cassandra/db/DataTracker.java
test/unit/org/apache/cassandra/db/CommitLogTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18ce8c72
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18ce8c72
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18ce8c72

Branch: refs/heads/trunk
Commit: 18ce8c72a355949ffd8cdc8c083f3edf85c449d1
Parents: 1921b98 60eab4e
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 11:16:57 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 11:16:57 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 5 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18ce8c72/CHANGES.txt
--
diff --cc CHANGES.txt
index fcd,33bab82..7823f87
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,11 +1,49 @@@
 -2.0.10
 - * Fix truncate to always flush (CASSANDRA-7511)
 +2.1.0-final
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use keyspace updated schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata 
(CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries 
(CASSANDRA-7626)
 +Merged from 2.0:
++ * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications 
(CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh 
(CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18ce8c72/src/java/org/apache/cassandra/config/DatabaseDescriptor.java

[3/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread jake
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f0413be
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f0413be
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f0413be

Branch: refs/heads/trunk
Commit: 5f0413bef546b3489525709cececd1947be34b67
Parents: 169b1cf 18ce8c7
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 11:17:23 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 11:17:23 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 5 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f0413be/test/conf/cassandra.yaml
--



[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87321cd6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87321cd6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87321cd6

Branch: refs/heads/trunk
Commit: 87321cd6d0b07c376899854d1bfc6355220ae1cf
Parents: e3fa11b 5f0413b
Author: Jake Luciani j...@apache.org
Authored: Fri Aug 1 11:17:41 2014 -0400
Committer: Jake Luciani j...@apache.org
Committed: Fri Aug 1 11:17:41 2014 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../apache/cassandra/db/ColumnFamilyStore.java  | 23 +++---
 test/conf/cassandra.yaml|  1 +
 .../org/apache/cassandra/db/CommitLogTest.java  | 32 
 5 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/87321cd6/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87321cd6/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87321cd6/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87321cd6/test/unit/org/apache/cassandra/db/CommitLogTest.java
--
diff --cc test/unit/org/apache/cassandra/db/CommitLogTest.java
index f8cb8c8,a58549a..a919c85
--- a/test/unit/org/apache/cassandra/db/CommitLogTest.java
+++ b/test/unit/org/apache/cassandra/db/CommitLogTest.java
@@@ -35,15 -35,14 +36,17 @@@ import org.apache.cassandra.SchemaLoade
  import org.apache.cassandra.Util;
  import org.apache.cassandra.config.Config;
  import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.config.KSMetaData;
  import org.apache.cassandra.db.commitlog.CommitLog;
  import org.apache.cassandra.db.commitlog.CommitLogDescriptor;
+ import org.apache.cassandra.db.commitlog.ReplayPosition;
  import org.apache.cassandra.db.commitlog.CommitLogSegment;
  import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.exceptions.ConfigurationException;
 +import org.apache.cassandra.locator.SimpleStrategy;
  import org.apache.cassandra.net.MessagingService;
  import org.apache.cassandra.service.StorageService;
+ import org.apache.cassandra.utils.FBUtilities;
  
  import static org.apache.cassandra.utils.ByteBufferUtil.bytes;
  



[jira] [Commented] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082340#comment-14082340
 ] 

Jeremiah Jordan commented on CASSANDRA-7638:


I don't think requiring >= 1.7u24 is too much of an issue.  Especially in 2.1.  
1.7 before u25 had a bunch of problems in my experience anyway.

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.1.0

 Attachments: 7638.txt


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7395) Support for pure user-defined functions (UDF)

2014-08-01 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082342#comment-14082342
 ] 

Tyler Hobbs commented on CASSANDRA-7395:


bq. BTW: Is there something, that I can reuse to add a unit test for schema 
migration in a cluster? E.g. Some unit test that creates a function on node A 
and checks if it can execute it on node B.

I recommend doing multi-node testing in the 
[dtests|https://github.com/riptano/cassandra-dtest].

 Support for pure user-defined functions (UDF)
 -

 Key: CASSANDRA-7395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Robert Stupp
  Labels: cql
 Fix For: 3.0

 Attachments: 7395.txt, udf-create-syntax.png, udf-drop-syntax.png


 We have some tickets for various aspects of UDF (CASSANDRA-4914, 
 CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of 
 ocean-boiling.
 Let's start with something simple: allowing pure user-defined functions in 
 the SELECT clause of a CQL query.  That's it.
 By pure I mean, must depend only on the input parameters.  No side effects. 
  No exposure to C* internals.  Column values in, result out.  
 http://en.wikipedia.org/wiki/Pure_function



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7293) Not able to delete a cell with timeuuid as part of clustering key

2014-08-01 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7293:


Reproduced In: 2.0.7, 2.0.3  (was: 2.0.3, 2.0.7)
   Labels: qa-resolved  (was: )

 Not able to delete a cell with timeuuid as part of clustering key
 -

 Key: CASSANDRA-7293
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7293
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Java
Reporter: Ananthkumar K S
Priority: Minor
  Labels: qa-resolved

 **My keyspace definition**
 aa
 {
  classname text,
   jobid timeuuid,
   jobdata text,
 }
 **Values in it now:**
 classname  | jobid                                | jobdata
 -----------+--------------------------------------+---------
   | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
  v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
  v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 Now when I delete this with the following query:
 **delete from aa where classname='' and jobid = 
 047a6130-e25a-11e3-83a5-8d12971ccb90;**
 **Result is :**
 classname | jobid | jobdata
 --
   | 047a6130-e25a-11e3-83a5-8d12971ccb90 | {}
  v | 3d176010-e250-11e3-83a5-8d12971ccb91 | {}
  v | 3d176010-e250-11e3-83a5-8d12971ccb92 | {}
 The row never got deleted. When I use a long value instead of timeuuid, 
 it works.
 Is there any problem with respect to timeuuid in deletion?
 **Cassandra version : 2.0.3**
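
Since the schema and some literal values in the report above were partly lost in 
the mail rendering, here is a hedged CQL reconstruction of the shape being 
described; the PRIMARY KEY and the 'some_class' partition value are assumptions, 
not taken from the ticket:

{code}
-- Assumed layout: classname as the partition key, jobid (timeuuid) as the
-- clustering column, jobdata as a regular column.
CREATE TABLE aa (
    classname text,
    jobid timeuuid,
    jobdata text,
    PRIMARY KEY (classname, jobid)
);

-- Deleting a single row of the partition requires matching the clustering value
-- exactly; timeuuid literals are written unquoted in CQL.
DELETE FROM aa
 WHERE classname = 'some_class'
   AND jobid = 047a6130-e25a-11e3-83a5-8d12971ccb90;
{code}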



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Fix min/max cell name collection with 2.0 range tombstones

2014-08-01 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 18ce8c72a -> c46477b8e


Fix min/max cell name collection with 2.0 range tombstones

Patch by Tyler Hobbs; review by Marcus Eriksson for CASSANDRA-7593


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c46477b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c46477b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c46477b8

Branch: refs/heads/cassandra-2.1.0
Commit: c46477b8e3dd74306ebe588801167d0e45ae4556
Parents: 18ce8c7
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:35:28 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:35:28 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7823f87..c2ae6dc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.0-final
+ * Fix min/max cell name collection on 2.0 SSTables with range
+   tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
  * Filter cached results correctly (CASSANDRA-7636)
  * Fix tracing on the new SEPExecutor (CASSANDRA-7644)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java 
b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
index d390518..f74b86f 100644
--- a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
@@ -64,7 +64,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (maxSeen.isEmpty())
 return getComponents(candidate, size);
@@ -92,7 +92,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (minSeen.isEmpty())
 return getComponents(candidate, size);
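To spell out what the one-line change buys: a range-tombstone bound is not a CellName, so the old instanceof check fell through to candidate.size(), which is not necessarily capped at the clustering prefix; the new code clamps the component count to the comparator's clustering prefix for every candidate, CellName or not. A rough, self-contained sketch of that clamping idea, using simplified stand-in types (plain lists of strings, not Cassandra's Composite/CellName classes):
{code:java}
import java.util.Arrays;
import java.util.List;

// Toy illustration of the clamp in the patch above: only the first
// clusteringPrefixSize components of a possibly longer composite name are
// considered when tracking min/max cell names. Stand-in types, not the
// actual Cassandra internals.
public final class MinMaxNameClamp
{
    static List<String> clusteringPrefix(List<String> components, int clusteringPrefixSize)
    {
        // Mirrors Math.min(candidate.size(), comparator.clusteringPrefixSize())
        int size = Math.min(components.size(), clusteringPrefixSize);
        return components.subList(0, size);
    }

    public static void main(String[] args)
    {
        // A bound carrying an extra component beyond the two clustering columns:
        List<String> bound = Arrays.asList("ck1", "ck2", "collection-element");
        System.out.println(clusteringPrefix(bound, 2)); // [ck1, ck2]
    }
}
{code}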



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/149d151f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/149d151f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/149d151f

Branch: refs/heads/trunk
Commit: 149d151fadbba296c32f75008b54b96714e3d282
Parents: 87321cd 8b5990a
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:37:09 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:37:09 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/149d151f/CHANGES.txt
--



[1/2] git commit: Fix min/max cell name collection with 2.0 range tombstones

2014-08-01 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 5f0413bef -> 8b5990ae9


Fix min/max cell name collection with 2.0 range tombstones

Patch by Tyler Hobbs; review by Marcus Eriksson for CASSANDRA-7593


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c46477b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c46477b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c46477b8

Branch: refs/heads/cassandra-2.1
Commit: c46477b8e3dd74306ebe588801167d0e45ae4556
Parents: 18ce8c7
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:35:28 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:35:28 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7823f87..c2ae6dc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.0-final
+ * Fix min/max cell name collection on 2.0 SSTables with range
+   tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
  * Filter cached results correctly (CASSANDRA-7636)
  * Fix tracing on the new SEPExecutor (CASSANDRA-7644)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java 
b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
index d390518..f74b86f 100644
--- a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
@@ -64,7 +64,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (maxSeen.isEmpty())
 return getComponents(candidate, size);
@@ -92,7 +92,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (minSeen.isEmpty())
 return getComponents(candidate, size);



[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread tylerhobbs
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b5990ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b5990ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b5990ae

Branch: refs/heads/trunk
Commit: 8b5990ae99697aa265e1c6664e436cb44f65756f
Parents: 5f0413b c46477b
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:36:46 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:36:46 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b5990ae/CHANGES.txt
--
diff --cc CHANGES.txt
index 2d9d912,c2ae6dc..a7c1ab0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,6 +1,19 @@@
 +2.1.1
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 +
 +
  2.1.0-final
+  * Fix min/max cell name collection on 2.0 SSTables with range
+tombstones (CASSANDRA-7593)
   * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
   * Filter cached results correctly (CASSANDRA-7636)
   * Fix tracing on the new SEPExecutor (CASSANDRA-7644)



[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread tylerhobbs
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b5990ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b5990ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b5990ae

Branch: refs/heads/cassandra-2.1
Commit: 8b5990ae99697aa265e1c6664e436cb44f65756f
Parents: 5f0413b c46477b
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:36:46 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:36:46 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b5990ae/CHANGES.txt
--
diff --cc CHANGES.txt
index 2d9d912,c2ae6dc..a7c1ab0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,6 +1,19 @@@
 +2.1.1
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 +
 +
  2.1.0-final
+  * Fix min/max cell name collection on 2.0 SSTables with range
+tombstones (CASSANDRA-7593)
   * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
   * Filter cached results correctly (CASSANDRA-7636)
   * Fix tracing on the new SEPExecutor (CASSANDRA-7644)



[1/3] git commit: Fix min/max cell name collection with 2.0 range tombstones

2014-08-01 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 87321cd6d -> 149d151fa


Fix min/max cell name collection with 2.0 range tombstones

Patch by Tyler Hobbs; review by Marcus Eriksson for CASSANDRA-7593


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c46477b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c46477b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c46477b8

Branch: refs/heads/trunk
Commit: c46477b8e3dd74306ebe588801167d0e45ae4556
Parents: 18ce8c7
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Aug 1 10:35:28 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Aug 1 10:35:28 2014 -0500

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7823f87..c2ae6dc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.0-final
+ * Fix min/max cell name collection on 2.0 SSTables with range
+   tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
  * Filter cached results correctly (CASSANDRA-7636)
  * Fix tracing on the new SEPExecutor (CASSANDRA-7644)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c46477b8/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java 
b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
index d390518..f74b86f 100644
--- a/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/ColumnNameHelper.java
@@ -64,7 +64,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (maxSeen.isEmpty())
 return getComponents(candidate, size);
@@ -92,7 +92,7 @@ public class ColumnNameHelper
 {
 // For a cell name, no reason to look more than the clustering prefix
 // (and comparing the collection element would actually crash)
-int size = candidate instanceof CellName ? 
((CellName)candidate).clusteringSize() : candidate.size();
+int size = Math.min(candidate.size(), 
comparator.clusteringPrefixSize());
 
 if (minSeen.isEmpty())
 return getComponents(candidate, size);



[jira] [Created] (CASSANDRA-7666) Range-segmented sstables

2014-08-01 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-7666:
-

 Summary: Range-segmented sstables
 Key: CASSANDRA-7666
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7666
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sam Tunnicliffe
 Fix For: 3.0






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-08-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082361#comment-14082361
 ] 

Jonathan Ellis commented on CASSANDRA-7056:
---

There really isn't much of a use case for unlogged batches now that we have 
async drivers.  So I'd rather keep logged/ramp the default.
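
To make that concrete, the async alternative looks roughly like the sketch below with the DataStax Java driver 2.1.x (executeAsync per statement, then wait on the futures). Keyspace, table, and contact point are assumptions; this is only meant to illustrate why client-side parallelism can stand in for an UNLOGGED BATCH of unrelated writes.
{code:java}
import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

// Sketch: instead of packing unrelated inserts into an UNLOGGED BATCH, fire
// them individually with executeAsync() and wait for all futures. Keyspace,
// table, and contact point are assumptions for illustration.
public final class AsyncInsteadOfUnloggedBatch
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect("ks");
            List<ResultSetFuture> futures = new ArrayList<>();
            for (int i = 0; i < 100; i++)
                futures.add(session.executeAsync(
                        "INSERT INTO t (id, value) VALUES (?, ?)", i, "v" + i));
            for (ResultSetFuture f : futures)
                f.getUninterruptibly(); // block until every write has completed
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}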

 Add RAMP transactions
 -

 Key: CASSANDRA-7056
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Tupshin Harper
Priority: Minor

 We should take a look at 
 [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
  transactions, and figure out if they can be used to provide more efficient 
 LWT (or LWT-like) operations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7475) Dtest: Windows - various cqlsh_tests errors

2014-08-01 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7475:
---

Component/s: Tests

 Dtest: Windows - various cqlsh_tests errors
 ---

 Key: CASSANDRA-7475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7475
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Have a few windows-specific failures in this test.
 {code:title=test_eat_glass}
 ==
 ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 158, in test_eat_glass
 .encode("utf-8"))
   File "build\bdist.win32\egg\ccmlib\node.py", line 613, in run_cqlsh
 p.stdin.write(cmd + ';\n')
 IOError: [Errno 22] Invalid argument
 {code}
 {code:title=test_simple_insert}
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 35, in test_simple_insert
 cursor.execute("select id, value from simple.simple");
   File "c:\src\cassandra-dbapi2\cql\cursor.py", line 80, in execute
 response = self.get_response(prepared_q, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 77, in get_response
 return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 98, in handle_cql_execution_errors
 raise cql.ProgrammingError("Bad Request: %s" % ire.why)
 ProgrammingError: Bad Request: Keyspace simple does not exist
 {code}
 {code:title=test_with_empty_values}
 ==
 ERROR: test_with_empty_values (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 347, in test_with_empty_values
 output = self.run_cqlsh(node1, "select intcol, bigintcol, varintcol from CASSANDRA_7196.has_all_types where num in (0, 1, 2, 3, 4)")
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 373, in run_cqlsh
 p = subprocess.Popen([ cli ] + args, env=env, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
   File "C:\Python27\lib\subprocess.py", line 710, in __init__
 errread, errwrite)
   File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
 startupinfo)
 WindowsError: [Error 193] %1 is not a valid Win32 application
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-7475) Dtest: Windows - various cqlsh_tests errors

2014-08-01 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-7475:
--

Assignee: Philip Thompson  (was: Ryan McGuire)

 Dtest: Windows - various cqlsh_tests errors
 ---

 Key: CASSANDRA-7475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7475
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Have a few windows-specific failures in this test.
 {code:title=test_eat_glass}
 ==
 ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 158, in test_eat_glass
 .encode("utf-8"))
   File "build\bdist.win32\egg\ccmlib\node.py", line 613, in run_cqlsh
 p.stdin.write(cmd + ';\n')
 IOError: [Errno 22] Invalid argument
 {code}
 {code:title=test_simple_insert}
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 35, in test_simple_insert
 cursor.execute("select id, value from simple.simple");
   File "c:\src\cassandra-dbapi2\cql\cursor.py", line 80, in execute
 response = self.get_response(prepared_q, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 77, in get_response
 return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 98, in handle_cql_execution_errors
 raise cql.ProgrammingError("Bad Request: %s" % ire.why)
 ProgrammingError: Bad Request: Keyspace simple does not exist
 {code}
 {code:title=test_with_empty_values}
 ==
 ERROR: test_with_empty_values (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 347, in test_with_empty_values
 output = self.run_cqlsh(node1, "select intcol, bigintcol, varintcol from CASSANDRA_7196.has_all_types where num in (0, 1, 2, 3, 4)")
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 373, in run_cqlsh
 p = subprocess.Popen([ cli ] + args, env=env, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
   File "C:\Python27\lib\subprocess.py", line 710, in __init__
 errread, errwrite)
   File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
 startupinfo)
 WindowsError: [Error 193] %1 is not a valid Win32 application
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7475) Dtest: Windows - various cqlsh_tests errors

2014-08-01 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7475:
---

Tester: Philip Thompson

 Dtest: Windows - various cqlsh_tests errors
 ---

 Key: CASSANDRA-7475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7475
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Have a few windows-specific failures in this test.
 {code:title=test_eat_glass}
 ==
 ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 158, in test_eat_glass
 .encode("utf-8"))
   File "build\bdist.win32\egg\ccmlib\node.py", line 613, in run_cqlsh
 p.stdin.write(cmd + ';\n')
 IOError: [Errno 22] Invalid argument
 {code}
 {code:title=test_simple_insert}
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 35, in test_simple_insert
 cursor.execute("select id, value from simple.simple");
   File "c:\src\cassandra-dbapi2\cql\cursor.py", line 80, in execute
 response = self.get_response(prepared_q, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 77, in get_response
 return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 98, in handle_cql_execution_errors
 raise cql.ProgrammingError("Bad Request: %s" % ire.why)
 ProgrammingError: Bad Request: Keyspace simple does not exist
 {code}
 {code:title=test_with_empty_values}
 ==
 ERROR: test_with_empty_values (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 347, in test_with_empty_values
 output = self.run_cqlsh(node1, "select intcol, bigintcol, varintcol from CASSANDRA_7196.has_all_types where num in (0, 1, 2, 3, 4)")
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 373, in run_cqlsh
 p = subprocess.Popen([ cli ] + args, env=env, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
   File "C:\Python27\lib\subprocess.py", line 710, in __init__
 errread, errwrite)
   File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
 startupinfo)
 WindowsError: [Error 193] %1 is not a valid Win32 application
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[03/15] git commit: SSP doesn't cache seeds forever

2014-08-01 Thread brandonwilliams
SSP doesn't cache seeds forever

Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-7663


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb92a9fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb92a9fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb92a9fc

Branch: refs/heads/cassandra-2.1
Commit: eb92a9fca76f51c91c9eebaddfd439897a14a6e0
Parents: 249bbfc
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:40:53 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:40:53 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 43 +---
 3 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 676c4e5..0ad02c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.19
+ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
  * Set correct stream ID on responses when non-Exception Throwables
are thrown while handling native protocol messages (CASSANDRA-7470)
  * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1e534f9..3079283 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -103,7 +103,7 @@ public class DatabaseDescriptor
 /**
  * Inspect the classpath to find storage configuration file
  */
-static URL getStorageConfigURL() throws ConfigurationException
+public static URL getStorageConfigURL() throws ConfigurationException
 {
 String configUrl = System.getProperty(cassandra.config);
 if (configUrl == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java 
b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
index a3031fa..9f491f3 100644
--- a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
+++ b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
@@ -17,26 +17,50 @@
  */
 package org.apache.cassandra.locator;
 
+import java.io.InputStream;
 import java.net.InetAddress;
+import java.net.URL;
 import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.cassandra.config.Config;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.config.SeedProviderDef;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Loader;
+import org.yaml.snakeyaml.TypeDescription;
+import org.yaml.snakeyaml.Yaml;
 
 public class SimpleSeedProvider implements SeedProvider
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SimpleSeedProvider.class);
 
-private final List<InetAddress> seeds;

+public SimpleSeedProvider(Map<String, String> args) {}

-public SimpleSeedProvider(Map<String, String> args)
+public List<InetAddress> getSeeds()
 {
-String[] hosts = args.get("seeds").split(",", -1);
-seeds = new ArrayList<InetAddress>(hosts.length);
+InputStream input;
+try
+{
+URL url = DatabaseDescriptor.getStorageConfigURL();
+input = url.openStream();
+}
+catch (Exception e)
+{
+throw new AssertionError(e);
+}
+org.yaml.snakeyaml.constructor.Constructor constructor = new org.yaml.snakeyaml.constructor.Constructor(Config.class);
+TypeDescription seedDesc = new TypeDescription(SeedProviderDef.class);
+seedDesc.putMapPropertyType("parameters", String.class, String.class);
+constructor.addTypeDescription(seedDesc);
+Yaml yaml = new Yaml(new Loader(constructor));
+Config conf = (Config)yaml.load(input);
+String[] hosts = conf.seed_provider.parameters.get("seeds").split(",", -1);
+List<InetAddress> seeds = new ArrayList<InetAddress>(hosts.length);
 {
 

[14/15] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfa6b980
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfa6b980
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfa6b980

Branch: refs/heads/cassandra-2.1
Commit: dfa6b980229297fe9c1ac161189d29a32b3e3d64
Parents: 8b5990a 080aa94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:50 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:50 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfa6b980/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfa6b980/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



[10/15] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/080aa94c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/080aa94c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/080aa94c

Branch: refs/heads/trunk
Commit: 080aa94c06236ca0ac0d28481481c2cde1640713
Parents: c46477b a26ac36
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:41 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:41 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/080aa94c/CHANGES.txt
--
diff --cc CHANGES.txt
index c2ae6dc,a5b49c5..f6f30fa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,11 +1,52 @@@
 -2.0.10
 - * Fix truncate to always flush (CASSANDRA-7511)
 +2.1.0-final
 + * Fix min/max cell name collection on 2.0 SSTables with range
 +   tombstones (CASSANDRA-7593)
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use keyspace updated schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata 
(CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries 
(CASSANDRA-7626)
 +Merged from 2.0:
++ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
 + * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications 
(CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh 
(CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)
@@@ -82,39 -18,38 +83,51 @@@ Merged from 2.0
   * Make sure high level sstables get compacted (CASSANDRA-7414)
   * Fix AssertionError when using empty clustering columns and static columns
 (CASSANDRA-7455)
 - * Add 

[07/15] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a26ac36a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a26ac36a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a26ac36a

Branch: refs/heads/cassandra-2.1
Commit: a26ac36a76b02d16cee04cc8d6bd0996e6760a3e
Parents: 60eab4e eb92a9f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:46:38 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:46:38 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 23 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a26ac36a/CHANGES.txt
--
diff --cc CHANGES.txt
index 33bab82,0ad02c1..a5b49c5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,14 +1,64 @@@
 -1.2.19
 +2.0.10
 + * Fix truncate to always flush (CASSANDRA-7511)
 + * Remove shuffle and taketoken (CASSANDRA-7601)
 + * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
 + * Fix range merging when DES scores are zero (CASSANDRA-7535)
 + * Warn when SSL certificates have expired (CASSANDRA-7528)
 + * Workaround JVM NPE on JMX bind failure (CASSANDRA-7254)
 + * Fix race in FileCacheService RemovalListener (CASSANDRA-7278)
 + * Fix inconsistent use of consistencyForCommit that allowed LOCAL_QUORUM
 +   operations to incorrect become full QUORUM (CASSANDRA-7345)
 + * Properly handle unrecognized opcodes and flags (CASSANDRA-7440)
 + * (Hadoop) close CqlRecordWriter clients when finished (CASSANDRA-7459)
 + * Make sure high level sstables get compacted (CASSANDRA-7414)
 + * Fix AssertionError when using empty clustering columns and static columns
 +   (CASSANDRA-7455)
 + * Add inter_dc_stream_throughput_outbound_megabits_per_sec (CASSANDRA-6596)
 + * Add option to disable STCS in L0 (CASSANDRA-6621)
 + * Fix error when doing reversed queries with static columns (CASSANDRA-7490)
 + * Backport CASSANDRA-6747 (CASSANDRA-7560)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +Merged from 1.2:
+  * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
   * Set correct stream ID on responses when non-Exception Throwables
 are thrown while handling native protocol messages (CASSANDRA-7470)
 - * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)
  
 -1.2.18
 - * Support Thrift tables clustering columns on CqlPagingInputFormat 
(CASSANDRA-7445)
 - * Fix compilation with java 6 broke by CASSANDRA-7147
  
 -1.2.17
 +2.0.9
 + * Fix CC#collectTimeOrderedData() tombstone optimisations (CASSANDRA-7394)
 + * Fix assertion error in CL.ANY timeout handling (CASSANDRA-7364)
 + * Handle empty CFs in Memtable#maybeUpdateLiveRatio() (CASSANDRA-7401)
 + * Fix native protocol CAS batches (CASSANDRA-7337)
 + * Add per-CF range read request latency metrics (CASSANDRA-7338)
 + * Fix NPE in StreamTransferTask.createMessageForRetry() (CASSANDRA-7323)
 + * Add conditional CREATE/DROP USER support (CASSANDRA-7264)
 + * Swap local and global default read repair chances (CASSANDRA-7320)
 + * Add missing iso8601 patterns for date strings (CASSANDRA-6973)
 + * Support selecting multiple rows in a partition using IN (CASSANDRA-6875)
 + * cqlsh: always emphasize the partition key in DESC output (CASSANDRA-7274)
 + * Copy compaction options to make sure they are reloaded (CASSANDRA-7290)
 + * Add option to do more aggressive tombstone compactions (CASSANDRA-6563)
 + * Don't try to compact already-compacting files in HHOM (CASSANDRA-7288)
 + * Add authentication support to shuffle (CASSANDRA-6484)
 + * Cqlsh counts non-empty lines for Blank lines warning (CASSANDRA-7325)
 + * Make StreamSession#closeSession() idempotent (CASSANDRA-7262)
 + * Fix infinite loop on exception while streaming (CASSANDRA-7330)
 + * Reference sstables before populating key cache (CASSANDRA-7234)
 + * Account for range tombstones in min/max column names (CASSANDRA-7235)
 + * Improve sub range repair validation (CASSANDRA-7317)
 + * Accept 

[06/15] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a26ac36a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a26ac36a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a26ac36a

Branch: refs/heads/trunk
Commit: a26ac36a76b02d16cee04cc8d6bd0996e6760a3e
Parents: 60eab4e eb92a9f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:46:38 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:46:38 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 23 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a26ac36a/CHANGES.txt
--
diff --cc CHANGES.txt
index 33bab82,0ad02c1..a5b49c5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,14 +1,64 @@@
 -1.2.19
 +2.0.10
 + * Fix truncate to always flush (CASSANDRA-7511)
 + * Remove shuffle and taketoken (CASSANDRA-7601)
 + * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
 + * Fix range merging when DES scores are zero (CASSANDRA-7535)
 + * Warn when SSL certificates have expired (CASSANDRA-7528)
 + * Workaround JVM NPE on JMX bind failure (CASSANDRA-7254)
 + * Fix race in FileCacheService RemovalListener (CASSANDRA-7278)
 + * Fix inconsistent use of consistencyForCommit that allowed LOCAL_QUORUM
 +   operations to incorrect become full QUORUM (CASSANDRA-7345)
 + * Properly handle unrecognized opcodes and flags (CASSANDRA-7440)
 + * (Hadoop) close CqlRecordWriter clients when finished (CASSANDRA-7459)
 + * Make sure high level sstables get compacted (CASSANDRA-7414)
 + * Fix AssertionError when using empty clustering columns and static columns
 +   (CASSANDRA-7455)
 + * Add inter_dc_stream_throughput_outbound_megabits_per_sec (CASSANDRA-6596)
 + * Add option to disable STCS in L0 (CASSANDRA-6621)
 + * Fix error when doing reversed queries with static columns (CASSANDRA-7490)
 + * Backport CASSANDRA-6747 (CASSANDRA-7560)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +Merged from 1.2:
+  * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
   * Set correct stream ID on responses when non-Exception Throwables
 are thrown while handling native protocol messages (CASSANDRA-7470)
 - * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)
  
 -1.2.18
 - * Support Thrift tables clustering columns on CqlPagingInputFormat 
(CASSANDRA-7445)
 - * Fix compilation with java 6 broke by CASSANDRA-7147
  
 -1.2.17
 +2.0.9
 + * Fix CC#collectTimeOrderedData() tombstone optimisations (CASSANDRA-7394)
 + * Fix assertion error in CL.ANY timeout handling (CASSANDRA-7364)
 + * Handle empty CFs in Memtable#maybeUpdateLiveRatio() (CASSANDRA-7401)
 + * Fix native protocol CAS batches (CASSANDRA-7337)
 + * Add per-CF range read request latency metrics (CASSANDRA-7338)
 + * Fix NPE in StreamTransferTask.createMessageForRetry() (CASSANDRA-7323)
 + * Add conditional CREATE/DROP USER support (CASSANDRA-7264)
 + * Swap local and global default read repair chances (CASSANDRA-7320)
 + * Add missing iso8601 patterns for date strings (CASSANDRA-6973)
 + * Support selecting multiple rows in a partition using IN (CASSANDRA-6875)
 + * cqlsh: always emphasize the partition key in DESC output (CASSANDRA-7274)
 + * Copy compaction options to make sure they are reloaded (CASSANDRA-7290)
 + * Add option to do more aggressive tombstone compactions (CASSANDRA-6563)
 + * Don't try to compact already-compacting files in HHOM (CASSANDRA-7288)
 + * Add authentication support to shuffle (CASSANDRA-6484)
 + * Cqlsh counts non-empty lines for Blank lines warning (CASSANDRA-7325)
 + * Make StreamSession#closeSession() idempotent (CASSANDRA-7262)
 + * Fix infinite loop on exception while streaming (CASSANDRA-7330)
 + * Reference sstables before populating key cache (CASSANDRA-7234)
 + * Account for range tombstones in min/max column names (CASSANDRA-7235)
 + * Improve sub range repair validation (CASSANDRA-7317)
 + * Accept subtypes 

[12/15] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/080aa94c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/080aa94c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/080aa94c

Branch: refs/heads/cassandra-2.1.0
Commit: 080aa94c06236ca0ac0d28481481c2cde1640713
Parents: c46477b a26ac36
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:41 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:41 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/080aa94c/CHANGES.txt
--
diff --cc CHANGES.txt
index c2ae6dc,a5b49c5..f6f30fa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,11 +1,52 @@@
 -2.0.10
 - * Fix truncate to always flush (CASSANDRA-7511)
 +2.1.0-final
 + * Fix min/max cell name collection on 2.0 SSTables with range
 +   tombstones (CASSANDRA-7593)
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use keyspace updated schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata 
(CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries 
(CASSANDRA-7626)
 +Merged from 2.0:
++ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
 + * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications 
(CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh 
(CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)
@@@ -82,39 -18,38 +83,51 @@@ Merged from 2.0
   * Make sure high level sstables get compacted (CASSANDRA-7414)
   * Fix AssertionError when using empty clustering columns and static columns
 (CASSANDRA-7455)
 - * Add 

[01/15] git commit: SSP doesn't cache seeds forever

2014-08-01 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 249bbfc3b -> eb92a9fca
  refs/heads/cassandra-2.0 60eab4e45 -> a26ac36a7
  refs/heads/cassandra-2.1 8b5990ae9 -> dfa6b9802
  refs/heads/cassandra-2.1.0 c46477b8e -> 080aa94c0
  refs/heads/trunk 149d151fa -> 5c1220379


SSP doesn't cache seeds forever

Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-7663


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb92a9fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb92a9fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb92a9fc

Branch: refs/heads/cassandra-1.2
Commit: eb92a9fca76f51c91c9eebaddfd439897a14a6e0
Parents: 249bbfc
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:40:53 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:40:53 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 43 +---
 3 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 676c4e5..0ad02c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.19
+ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
  * Set correct stream ID on responses when non-Exception Throwables
are thrown while handling native protocol messages (CASSANDRA-7470)
  * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1e534f9..3079283 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -103,7 +103,7 @@ public class DatabaseDescriptor
 /**
  * Inspect the classpath to find storage configuration file
  */
-static URL getStorageConfigURL() throws ConfigurationException
+public static URL getStorageConfigURL() throws ConfigurationException
 {
 String configUrl = System.getProperty(cassandra.config);
 if (configUrl == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java 
b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
index a3031fa..9f491f3 100644
--- a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
+++ b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
@@ -17,26 +17,50 @@
  */
 package org.apache.cassandra.locator;
 
+import java.io.InputStream;
 import java.net.InetAddress;
+import java.net.URL;
 import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.cassandra.config.Config;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.config.SeedProviderDef;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Loader;
+import org.yaml.snakeyaml.TypeDescription;
+import org.yaml.snakeyaml.Yaml;
 
 public class SimpleSeedProvider implements SeedProvider
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SimpleSeedProvider.class);
 
-private final List<InetAddress> seeds;
+public SimpleSeedProvider(Map<String, String> args) {}

-public SimpleSeedProvider(Map<String, String> args)
+public List<InetAddress> getSeeds()
 {
-String[] hosts = args.get("seeds").split(",", -1);
-seeds = new ArrayList<InetAddress>(hosts.length);
+InputStream input;
+try
+{
+URL url = DatabaseDescriptor.getStorageConfigURL();
+input = url.openStream();
+}
+catch (Exception e)
+{
+throw new AssertionError(e);
+}
+org.yaml.snakeyaml.constructor.Constructor constructor = new org.yaml.snakeyaml.constructor.Constructor(Config.class);
+TypeDescription seedDesc = new TypeDescription(SeedProviderDef.class);
+seedDesc.putMapPropertyType("parameters", String.class, String.class);
+constructor.addTypeDescription(seedDesc);
+Yaml yaml = new Yaml(new 

[13/15] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfa6b980
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfa6b980
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfa6b980

Branch: refs/heads/trunk
Commit: dfa6b980229297fe9c1ac161189d29a32b3e3d64
Parents: 8b5990a 080aa94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:50 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:50 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfa6b980/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfa6b980/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



[09/15] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a26ac36a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a26ac36a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a26ac36a

Branch: refs/heads/cassandra-2.1.0
Commit: a26ac36a76b02d16cee04cc8d6bd0996e6760a3e
Parents: 60eab4e eb92a9f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:46:38 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:46:38 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 23 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a26ac36a/CHANGES.txt
--
diff --cc CHANGES.txt
index 33bab82,0ad02c1..a5b49c5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,14 +1,64 @@@
 -1.2.19
 +2.0.10
 + * Fix truncate to always flush (CASSANDRA-7511)
 + * Remove shuffle and taketoken (CASSANDRA-7601)
 + * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
 + * Fix range merging when DES scores are zero (CASSANDRA-7535)
 + * Warn when SSL certificates have expired (CASSANDRA-7528)
 + * Workaround JVM NPE on JMX bind failure (CASSANDRA-7254)
 + * Fix race in FileCacheService RemovalListener (CASSANDRA-7278)
 + * Fix inconsistent use of consistencyForCommit that allowed LOCAL_QUORUM
 +   operations to incorrect become full QUORUM (CASSANDRA-7345)
 + * Properly handle unrecognized opcodes and flags (CASSANDRA-7440)
 + * (Hadoop) close CqlRecordWriter clients when finished (CASSANDRA-7459)
 + * Make sure high level sstables get compacted (CASSANDRA-7414)
 + * Fix AssertionError when using empty clustering columns and static columns
 +   (CASSANDRA-7455)
 + * Add inter_dc_stream_throughput_outbound_megabits_per_sec (CASSANDRA-6596)
 + * Add option to disable STCS in L0 (CASSANDRA-6621)
 + * Fix error when doing reversed queries with static columns (CASSANDRA-7490)
 + * Backport CASSANDRA-6747 (CASSANDRA-7560)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +Merged from 1.2:
+  * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
   * Set correct stream ID on responses when non-Exception Throwables
 are thrown while handling native protocol messages (CASSANDRA-7470)
 - * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)
  
 -1.2.18
 - * Support Thrift tables clustering columns on CqlPagingInputFormat 
(CASSANDRA-7445)
 - * Fix compilation with java 6 broke by CASSANDRA-7147
  
 -1.2.17
 +2.0.9
 + * Fix CC#collectTimeOrderedData() tombstone optimisations (CASSANDRA-7394)
 + * Fix assertion error in CL.ANY timeout handling (CASSANDRA-7364)
 + * Handle empty CFs in Memtable#maybeUpdateLiveRatio() (CASSANDRA-7401)
 + * Fix native protocol CAS batches (CASSANDRA-7337)
 + * Add per-CF range read request latency metrics (CASSANDRA-7338)
 + * Fix NPE in StreamTransferTask.createMessageForRetry() (CASSANDRA-7323)
 + * Add conditional CREATE/DROP USER support (CASSANDRA-7264)
 + * Swap local and global default read repair chances (CASSANDRA-7320)
 + * Add missing iso8601 patterns for date strings (CASSANDRA-6973)
 + * Support selecting multiple rows in a partition using IN (CASSANDRA-6875)
 + * cqlsh: always emphasize the partition key in DESC output (CASSANDRA-7274)
 + * Copy compaction options to make sure they are reloaded (CASSANDRA-7290)
 + * Add option to do more aggressive tombstone compactions (CASSANDRA-6563)
 + * Don't try to compact already-compacting files in HHOM (CASSANDRA-7288)
 + * Add authentication support to shuffle (CASSANDRA-6484)
 + * Cqlsh counts non-empty lines for Blank lines warning (CASSANDRA-7325)
 + * Make StreamSession#closeSession() idempotent (CASSANDRA-7262)
 + * Fix infinite loop on exception while streaming (CASSANDRA-7330)
 + * Reference sstables before populating key cache (CASSANDRA-7234)
 + * Account for range tombstones in min/max column names (CASSANDRA-7235)
 + * Improve sub range repair validation (CASSANDRA-7317)
 + * Accept 

[15/15] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c122037
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c122037
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c122037

Branch: refs/heads/trunk
Commit: 5c122037931c898f1b96b49a95df89f085d52414
Parents: 149d151 dfa6b98
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:57 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:57 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c122037/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c122037/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--



[08/15] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/DatabaseDescriptor.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a26ac36a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a26ac36a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a26ac36a

Branch: refs/heads/cassandra-2.0
Commit: a26ac36a76b02d16cee04cc8d6bd0996e6760a3e
Parents: 60eab4e eb92a9f
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:46:38 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:46:38 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 23 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a26ac36a/CHANGES.txt
--
diff --cc CHANGES.txt
index 33bab82,0ad02c1..a5b49c5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,14 +1,64 @@@
 -1.2.19
 +2.0.10
 + * Fix truncate to always flush (CASSANDRA-7511)
 + * Remove shuffle and taketoken (CASSANDRA-7601)
 + * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
 + * Fix range merging when DES scores are zero (CASSANDRA-7535)
 + * Warn when SSL certificates have expired (CASSANDRA-7528)
 + * Workaround JVM NPE on JMX bind failure (CASSANDRA-7254)
 + * Fix race in FileCacheService RemovalListener (CASSANDRA-7278)
 + * Fix inconsistent use of consistencyForCommit that allowed LOCAL_QUORUM
 +   operations to incorrect become full QUORUM (CASSANDRA-7345)
 + * Properly handle unrecognized opcodes and flags (CASSANDRA-7440)
 + * (Hadoop) close CqlRecordWriter clients when finished (CASSANDRA-7459)
 + * Make sure high level sstables get compacted (CASSANDRA-7414)
 + * Fix AssertionError when using empty clustering columns and static columns
 +   (CASSANDRA-7455)
 + * Add inter_dc_stream_throughput_outbound_megabits_per_sec (CASSANDRA-6596)
 + * Add option to disable STCS in L0 (CASSANDRA-6621)
 + * Fix error when doing reversed queries with static columns (CASSANDRA-7490)
 + * Backport CASSANDRA-6747 (CASSANDRA-7560)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +Merged from 1.2:
+  * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
   * Set correct stream ID on responses when non-Exception Throwables
 are thrown while handling native protocol messages (CASSANDRA-7470)
 - * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)
  
 -1.2.18
 - * Support Thrift tables clustering columns on CqlPagingInputFormat 
(CASSANDRA-7445)
 - * Fix compilation with java 6 broke by CASSANDRA-7147
  
 -1.2.17
 +2.0.9
 + * Fix CC#collectTimeOrderedData() tombstone optimisations (CASSANDRA-7394)
 + * Fix assertion error in CL.ANY timeout handling (CASSANDRA-7364)
 + * Handle empty CFs in Memtable#maybeUpdateLiveRatio() (CASSANDRA-7401)
 + * Fix native protocol CAS batches (CASSANDRA-7337)
 + * Add per-CF range read request latency metrics (CASSANDRA-7338)
 + * Fix NPE in StreamTransferTask.createMessageForRetry() (CASSANDRA-7323)
 + * Add conditional CREATE/DROP USER support (CASSANDRA-7264)
 + * Swap local and global default read repair chances (CASSANDRA-7320)
 + * Add missing iso8601 patterns for date strings (CASSANDRA-6973)
 + * Support selecting multiple rows in a partition using IN (CASSANDRA-6875)
 + * cqlsh: always emphasize the partition key in DESC output (CASSANDRA-7274)
 + * Copy compaction options to make sure they are reloaded (CASSANDRA-7290)
 + * Add option to do more aggressive tombstone compactions (CASSANDRA-6563)
 + * Don't try to compact already-compacting files in HHOM (CASSANDRA-7288)
 + * Add authentication support to shuffle (CASSANDRA-6484)
 + * Cqlsh counts non-empty lines for Blank lines warning (CASSANDRA-7325)
 + * Make StreamSession#closeSession() idempotent (CASSANDRA-7262)
 + * Fix infinite loop on exception while streaming (CASSANDRA-7330)
 + * Reference sstables before populating key cache (CASSANDRA-7234)
 + * Account for range tombstones in min/max column names (CASSANDRA-7235)
 + * Improve sub range repair validation (CASSANDRA-7317)
 + * Accept 

[04/15] git commit: SSP doesn't cache seeds forever

2014-08-01 Thread brandonwilliams
SSP doesn't cache seeds forever

Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-7663


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb92a9fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb92a9fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb92a9fc

Branch: refs/heads/trunk
Commit: eb92a9fca76f51c91c9eebaddfd439897a14a6e0
Parents: 249bbfc
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:40:53 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:40:53 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 43 +---
 3 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 676c4e5..0ad02c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.19
+ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
  * Set correct stream ID on responses when non-Exception Throwables
are thrown while handling native protocol messages (CASSANDRA-7470)
  * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1e534f9..3079283 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -103,7 +103,7 @@ public class DatabaseDescriptor
 /**
  * Inspect the classpath to find storage configuration file
  */
-static URL getStorageConfigURL() throws ConfigurationException
+public static URL getStorageConfigURL() throws ConfigurationException
 {
        String configUrl = System.getProperty("cassandra.config");
 if (configUrl == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java 
b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
index a3031fa..9f491f3 100644
--- a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
+++ b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
@@ -17,26 +17,50 @@
  */
 package org.apache.cassandra.locator;
 
+import java.io.InputStream;
 import java.net.InetAddress;
+import java.net.URL;
 import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.cassandra.config.Config;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.config.SeedProviderDef;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Loader;
+import org.yaml.snakeyaml.TypeDescription;
+import org.yaml.snakeyaml.Yaml;
 
 public class SimpleSeedProvider implements SeedProvider
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SimpleSeedProvider.class);
 
-    private final List<InetAddress> seeds;
+    public SimpleSeedProvider(Map<String, String> args) {}
 
-    public SimpleSeedProvider(Map<String, String> args)
+    public List<InetAddress> getSeeds()
     {
-        String[] hosts = args.get("seeds").split(",", -1);
-        seeds = new ArrayList<InetAddress>(hosts.length);
+        InputStream input;
+        try
+        {
+            URL url = DatabaseDescriptor.getStorageConfigURL();
+            input = url.openStream();
+        }
+        catch (Exception e)
+        {
+            throw new AssertionError(e);
+        }
+        org.yaml.snakeyaml.constructor.Constructor constructor = new org.yaml.snakeyaml.constructor.Constructor(Config.class);
+        TypeDescription seedDesc = new TypeDescription(SeedProviderDef.class);
+        seedDesc.putMapPropertyType("parameters", String.class, String.class);
+        constructor.addTypeDescription(seedDesc);
+        Yaml yaml = new Yaml(new Loader(constructor));
+        Config conf = (Config)yaml.load(input);
+        String[] hosts = conf.seed_provider.parameters.get("seeds").split(",", -1);
+        List<InetAddress> seeds = new ArrayList<InetAddress>(hosts.length);
 for (String host : hosts)
 {
 try
@@ 
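For readers skimming the patch: the diff above removes the seed list that was cached in the constructor and makes getSeeds() re-read cassandra.yaml on every call, so seed changes are picked up without a restart. A minimal, self-contained sketch of that reload-on-request pattern (independent of Cassandra's Config/snakeyaml plumbing; the class name, file format, and path handling below are illustrative assumptions, not project code):

{code:title=ReloadingSeedList.java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class ReloadingSeedList
{
    private final String path;

    public ReloadingSeedList(String path)
    {
        this.path = path;
    }

    // Re-parse the source on every call instead of caching the result,
    // mirroring the intent of CASSANDRA-7663.
    public List<InetAddress> getSeeds()
    {
        List<InetAddress> seeds = new ArrayList<InetAddress>();
        try
        {
            for (String line : Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8))
            {
                String host = line.trim();
                if (host.isEmpty())
                    continue;
                try
                {
                    seeds.add(InetAddress.getByName(host));
                }
                catch (UnknownHostException e)
                {
                    // Skip unresolvable entries rather than failing the whole list.
                }
            }
        }
        catch (Exception e)
        {
            throw new AssertionError(e);
        }
        return seeds;
    }
}
{code}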

[02/15] git commit: SSP doesn't cache seeds forever

2014-08-01 Thread brandonwilliams
SSP doesn't cache seeds forever

Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-7663


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb92a9fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb92a9fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb92a9fc

Branch: refs/heads/cassandra-2.0
Commit: eb92a9fca76f51c91c9eebaddfd439897a14a6e0
Parents: 249bbfc
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:40:53 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:40:53 2014 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 43 +---
 3 files changed, 30 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 676c4e5..0ad02c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.2.19
+ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
  * Set correct stream ID on responses when non-Exception Throwables
are thrown while handling native protocol messages (CASSANDRA-7470)
  * Fix row size miscalculation in LazilyCompactedRow (CASSANDRA-7543)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1e534f9..3079283 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -103,7 +103,7 @@ public class DatabaseDescriptor
 /**
  * Inspect the classpath to find storage configuration file
  */
-static URL getStorageConfigURL() throws ConfigurationException
+public static URL getStorageConfigURL() throws ConfigurationException
 {
        String configUrl = System.getProperty("cassandra.config");
 if (configUrl == null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb92a9fc/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java 
b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
index a3031fa..9f491f3 100644
--- a/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
+++ b/src/java/org/apache/cassandra/locator/SimpleSeedProvider.java
@@ -17,26 +17,50 @@
  */
 package org.apache.cassandra.locator;
 
+import java.io.InputStream;
 import java.net.InetAddress;
+import java.net.URL;
 import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.cassandra.config.Config;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.config.SeedProviderDef;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Loader;
+import org.yaml.snakeyaml.TypeDescription;
+import org.yaml.snakeyaml.Yaml;
 
 public class SimpleSeedProvider implements SeedProvider
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SimpleSeedProvider.class);
 
-    private final List<InetAddress> seeds;
+    public SimpleSeedProvider(Map<String, String> args) {}
 
-    public SimpleSeedProvider(Map<String, String> args)
+    public List<InetAddress> getSeeds()
     {
-        String[] hosts = args.get("seeds").split(",", -1);
-        seeds = new ArrayList<InetAddress>(hosts.length);
+        InputStream input;
+        try
+        {
+            URL url = DatabaseDescriptor.getStorageConfigURL();
+            input = url.openStream();
+        }
+        catch (Exception e)
+        {
+            throw new AssertionError(e);
+        }
+        org.yaml.snakeyaml.constructor.Constructor constructor = new org.yaml.snakeyaml.constructor.Constructor(Config.class);
+        TypeDescription seedDesc = new TypeDescription(SeedProviderDef.class);
+        seedDesc.putMapPropertyType("parameters", String.class, String.class);
+        constructor.addTypeDescription(seedDesc);
+        Yaml yaml = new Yaml(new Loader(constructor));
+        Config conf = (Config)yaml.load(input);
+        String[] hosts = conf.seed_provider.parameters.get("seeds").split(",", -1);
+        List<InetAddress> seeds = new ArrayList<InetAddress>(hosts.length);
 for (String host : hosts)
 {
 

[11/15] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/080aa94c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/080aa94c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/080aa94c

Branch: refs/heads/cassandra-2.1
Commit: 080aa94c06236ca0ac0d28481481c2cde1640713
Parents: c46477b a26ac36
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:47:41 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:47:41 2014 -0500

--
 CHANGES.txt | 13 +++
 .../cassandra/config/DatabaseDescriptor.java|  2 +-
 .../cassandra/locator/SimpleSeedProvider.java   | 36 
 3 files changed, 35 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/080aa94c/CHANGES.txt
--
diff --cc CHANGES.txt
index c2ae6dc,a5b49c5..f6f30fa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,11 +1,52 @@@
 -2.0.10
 - * Fix truncate to always flush (CASSANDRA-7511)
 +2.1.0-final
 + * Fix min/max cell name collection on 2.0 SSTables with range
 +   tombstones (CASSANDRA-7593)
 + * Tolerate min/max cell names of different lengths (CASSANDRA-7651)
 + * Filter cached results correctly (CASSANDRA-7636)
 + * Fix tracing on the new SEPExecutor (CASSANDRA-7644)
   * Remove shuffle and taketoken (CASSANDRA-7601)
 - * Switch liveRatio-related log messages to DEBUG (CASSANDRA-7467)
 - * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 - * Always merge ranges owned by a single node (CASSANDRA-6930)
 - * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Clean up Windows batch scripts (CASSANDRA-7619)
 + * Fix native protocol drop user type notification (CASSANDRA-7571)
 + * Give read access to system.schema_usertypes to all authenticated users
 +   (CASSANDRA-7578)
 + * (cqlsh) Fix cqlsh display when zero rows are returned (CASSANDRA-7580)
 + * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572)
 + * Fix NPE when dropping index from non-existent keyspace, AssertionError when
 +   dropping non-existent index with IF EXISTS (CASSANDRA-7590)
 + * Fix sstablelevelresetter hang (CASSANDRA-7614)
 + * (cqlsh) Fix deserialization of blobs (CASSANDRA-7603)
 + * Use keyspace updated schema change message for UDT changes in v1 and
 +   v2 protocols (CASSANDRA-7617)
 + * Fix tracing of range slices and secondary index lookups that are local
 +   to the coordinator (CASSANDRA-7599)
 + * Set -Dcassandra.storagedir for all tool shell scripts (CASSANDRA-7587)
 + * Don't swap max/min col names when mutating sstable metadata 
(CASSANDRA-7596)
 + * (cqlsh) Correctly handle paged result sets (CASSANDRA-7625)
 + * (cqlsh) Improve waiting for a trace to complete (CASSANDRA-7626)
 + * Fix tracing of concurrent range slices and 2ary index queries 
(CASSANDRA-7626)
 +Merged from 2.0:
++ * SimpleSeedProvider no longer caches seeds forever (CASSANDRA-7663)
 + * Always flush on truncate (CASSANDRA-7511)
   * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576)
 + * Always merge ranges owned by a single node (CASSANDRA-6930)
 + * Track max/min timestamps for range tombstones (CASSANDRA-7647)
 + * Fix NPE when listing saved caches dir (CASSANDRA-7632)
 +
 +
 +2.1.0-rc4
 + * Fix word count hadoop example (CASSANDRA-7200)
 + * Updated memtable_cleanup_threshold and memtable_flush_writers defaults 
 +   (CASSANDRA-7551)
 + * (Windows) fix startup when WMI memory query fails (CASSANDRA-7505)
 + * Anti-compaction proceeds if any part of the repair failed (CASANDRA-7521)
 + * Add missing table name to DROP INDEX responses and notifications 
(CASSANDRA-7539)
 + * Bump CQL version to 3.2.0 and update CQL documentation (CASSANDRA-7527)
 + * Fix configuration error message when running nodetool ring (CASSANDRA-7508)
 + * Support conditional updates, tuple type, and the v3 protocol in cqlsh 
(CASSANDRA-7509)
 + * Handle queries on multiple secondary index types (CASSANDRA-7525)
 + * Fix cqlsh authentication with v3 native protocol (CASSANDRA-7564)
 + * Fix NPE when unknown prepared statement ID is used (CASSANDRA-7454)
 +Merged from 2.0:
   * (Windows) force range-based repair to non-sequential mode (CASSANDRA-7541)
   * Fix range merging when DES scores are zero (CASSANDRA-7535)
   * Warn when SSL certificates have expired (CASSANDRA-7528)
@@@ -82,39 -18,38 +83,51 @@@ Merged from 2.0
   * Make sure high level sstables get compacted (CASSANDRA-7414)
   * Fix AssertionError when using empty clustering columns and static columns
 (CASSANDRA-7455)
 - * Add 

[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/db434289
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/db434289
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/db434289

Branch: refs/heads/trunk
Commit: db4342896ccba06dfc6f48be9445fd40da70ca65
Parents: 5c12203 6e15762
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:51:08 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:51:08 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/db434289/CHANGES.txt
--



[3/6] git commit: Don't use strict consistency when replacing

2014-08-01 Thread brandonwilliams
Don't use strict consistency when replacing

Patch by brandonwilliams, reviewed by tjake for CASSANDRA-7568


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f48723b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f48723b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f48723b

Branch: refs/heads/trunk
Commit: 2f48723bfff605af31102fcfcd767b3dc878e097
Parents: 080aa94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:50:34 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:50:34 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f6f30fa..a8299c6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-final
+ * Don't use strict consistency when replacing (CASSANDRA-7568)
  * Fix min/max cell name collection on 2.0 SSTables with range
tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 2308d30..14d24fc 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -128,7 +128,7 @@ public class RangeStreamer
 
     public void addRanges(String keyspaceName, Collection<Range<Token>> ranges)
     {
-        Multimap<Range<Token>, InetAddress> rangesForKeyspace = useStrictConsistency && tokens != null
+        Multimap<Range<Token>, InetAddress> rangesForKeyspace = !DatabaseDescriptor.isReplacing() && useStrictConsistency && tokens != null
                 ? getAllRangesWithStrictSourcesFor(keyspaceName, ranges) : getAllRangesWithSourcesFor(keyspaceName, ranges);
 
 if (logger.isDebugEnabled())
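Spelled out, the one-line change above means strict source selection is bypassed whenever the node is replacing a dead peer, since the replica being replaced can no longer serve as a source. A standalone restatement of that guard (names are borrowed from RangeStreamer, but this is a sketch, not the project code):

{code:title=StrictSourceGuard.java}
import java.util.Collection;
import java.util.Collections;

public final class StrictSourceGuard
{
    private StrictSourceGuard() {}

    // Condition introduced by CASSANDRA-7568: strict sources only apply when
    // the node is NOT replacing another node and its tokens are already known.
    public static boolean useStrictSources(boolean replacing,
                                           boolean useStrictConsistency,
                                           Collection<?> tokens)
    {
        return !replacing && useStrictConsistency && tokens != null;
    }

    public static void main(String[] args)
    {
        // Replacing: strict sources are skipped even if requested (prints false).
        System.out.println(useStrictSources(true, true, Collections.emptyList()));
        // Normal bootstrap with strict consistency and known tokens (prints true).
        System.out.println(useStrictSources(false, true, Collections.emptyList()));
    }
}
{code}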



[4/6] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e157625
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e157625
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e157625

Branch: refs/heads/trunk
Commit: 6e157625f2ee4979441d6af4f91cdfd8edc774ce
Parents: dfa6b98 2f48723
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:51:01 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:51:01 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e157625/CHANGES.txt
--
diff --cc CHANGES.txt
index 578ba87,a8299c6..5954236
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,5 +1,18 @@@
 +2.1.1
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 +
 +
  2.1.0-final
+  * Don't use strict consistency when replacing (CASSANDRA-7568)
   * Fix min/max cell name collection on 2.0 SSTables with range
 tombstones (CASSANDRA-7593)
   * Tolerate min/max cell names of different lengths (CASSANDRA-7651)



[jira] [Commented] (CASSANDRA-7056) Add RAMP transactions

2014-08-01 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082383#comment-14082383
 ] 

Aleksey Yeschenko commented on CASSANDRA-7056:
--

bq. There really isn't much of a use case for unlogged batches now that we have 
async drivers. So I'd rather keep logged/ramp the default.

Good point. Yes, you are right.
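To make the quoted point concrete: with an asynchronous driver, individual writes can be issued concurrently and reach comparable throughput to an UNLOGGED batch, so the batchlog-skipping variant loses most of its appeal. A rough illustration against the DataStax Java driver 2.x API (the keyspace ks and table t(k int PRIMARY KEY, v text) are made-up names for this sketch):

{code:title=AsyncVsUnloggedBatch.java}
import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

public class AsyncVsUnloggedBatch
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");
        PreparedStatement insert = session.prepare("INSERT INTO t (k, v) VALUES (?, ?)");

        // Unlogged batch: single request, no batchlog, no atomicity guarantee.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        batch.add(insert.bind(1, "a"));
        batch.add(insert.bind(2, "b"));
        session.execute(batch);

        // Async individual writes: the alternative the comment refers to.
        List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
        futures.add(session.executeAsync(insert.bind(3, "c")));
        futures.add(session.executeAsync(insert.bind(4, "d")));
        for (ResultSetFuture f : futures)
            f.getUninterruptibly();

        cluster.close();
    }
}
{code}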

 Add RAMP transactions
 -

 Key: CASSANDRA-7056
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7056
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Tupshin Harper
Priority: Minor

 We should take a look at 
 [RAMP|http://www.bailis.org/blog/scalable-atomic-visibility-with-ramp-transactions/]
  transactions, and figure out if they can be used to provide more efficient 
 LWT (or LWT-like) operations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7475) Dtest: Windows - various cqlsh_tests errors

2014-08-01 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082382#comment-14082382
 ] 

Philip Thompson edited comment on CASSANDRA-7475 at 8/1/14 3:56 PM:


I made a few improvements to the tests. The errors were caused by running 
cqlsh instead of cqlsh.bat and by starting cqlsh before the nodes were up. 
test_simple_insert is now passing. test_eat_glass and test_with_empty_values 
are both now failing on their validations.

{code}
test_simple_insert (cqlsh_tests.TestCqlsh) ... SUCCESS: The process with PID 
9652 has been terminated.
ok
test_with_empty_values (cqlsh_tests.TestCqlsh) ... FAIL
Error saving log: 'module' object has no attribute 'symlink'
SUCCESS: The process with PID 9412 has been terminated.

==
FAIL: test_eat_glass (cqlsh_tests.TestCqlsh)
--
Traceback (most recent call last):
  File "D:\Users\Philip\cstar\cassandra-dtest\cqlsh_tests.py", line 279, in test_eat_glass
    self.assertEquals(output.count('Можам да јадам стакло, а не ме штета.'), 16)
AssertionError: 0 != 16
  begin captured logging  
dtest: DEBUG: cluster ccm directory: 
d:\users\philip\appdata\local\temp\dtest-9rsf2x
-  end captured logging  -

==
FAIL: test_with_empty_values (cqlsh_tests.TestCqlsh)
--
Traceback (most recent call last):
  File "D:\Users\Philip\cstar\cassandra-dtest\cqlsh_tests.py", line 361, in test_with_empty_values
    self.assertTrue(expected in output, "Output \n {%s} \n doesn't contain expected\n {%s}" % (output, expected))
AssertionError: Output
 {
 intcol  | bigintcol| varintcol
-+--+-
 -12 |  1234567890123456789 |  10
  2147483647 |  9223372036854775807 |   9
   0 |0 |   0
 -2147483648 | -9223372036854775808 | -10
 |  |

(5 rows)

}
 doesn't contain expected
 {
 intcol  | bigintcol| varintcol
-+--+-
 -12 |  1234567890123456789 |  10
  2147483647 |  9223372036854775807 |   9
   0 |0 |   0
 -2147483648 | -9223372036854775808 | -10
 |  |

(5 rows)}
{code}
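For context, the two fixes described above amount to (1) invoking the platform-specific launcher and (2) waiting for the node's client port before starting cqlsh. The dtests themselves are Python; the sketch below expresses the same idea in Java purely as an illustration, and the host, port, and timeout values are assumptions:

{code:title=CqlshLauncher.java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class CqlshLauncher
{
    // Pick cqlsh.bat on Windows, plain cqlsh elsewhere.
    static String launcherName()
    {
        boolean windows = System.getProperty("os.name").toLowerCase().contains("win");
        return windows ? "cqlsh.bat" : "cqlsh";
    }

    // Block until the node accepts connections on the given port, or give up.
    static void waitForPort(String host, int port, long timeoutMillis) throws IOException, InterruptedException
    {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true)
        {
            try (Socket s = new Socket())
            {
                s.connect(new InetSocketAddress(host, port), 1000);
                return;
            }
            catch (IOException e)
            {
                if (System.currentTimeMillis() > deadline)
                    throw e;
                Thread.sleep(500);
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        waitForPort("127.0.0.1", 9042, 60000);
        new ProcessBuilder(launcherName(), "127.0.0.1").inheritIO().start().waitFor();
    }
}
{code}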


was (Author: philipthompson):
I made a few improvements to the tests. The errors were related to running 
cqlsh instead of cqlsh.bat and with starting cqlsh before the nodes were up. 
test_simple_insert is now passing. test_eat_glass and test_with_empty_values 
are both now failing on the validations.

 Dtest: Windows - various cqlsh_tests errors
 ---

 Key: CASSANDRA-7475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7475
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Have a few windows-specific failures in this test.
 {code:title=test_eat_glass}
 ==
 ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 158, in test_eat_glass
     .encode("utf-8"))
   File "build\bdist.win32\egg\ccmlib\node.py", line 613, in run_cqlsh
     p.stdin.write(cmd + ';\n')
 IOError: [Errno 22] Invalid argument
 {code}
 {code:title=test_simple_insert}
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 35, in test_simple_insert
     cursor.execute("select id, value from simple.simple");
   File "c:\src\cassandra-dbapi2\cql\cursor.py", line 80, in execute
     response = self.get_response(prepared_q, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 77, in get_response
     return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 98, in 
 

[jira] [Commented] (CASSANDRA-7475) Dtest: Windows - various cqlsh_tests errors

2014-08-01 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082382#comment-14082382
 ] 

Philip Thompson commented on CASSANDRA-7475:


I made a few improvements to the tests. The errors were caused by running 
cqlsh instead of cqlsh.bat and by starting cqlsh before the nodes were up. 
test_simple_insert is now passing. test_eat_glass and test_with_empty_values 
are both now failing on their validations.

 Dtest: Windows - various cqlsh_tests errors
 ---

 Key: CASSANDRA-7475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7475
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Have a few windows-specific failures in this test.
 {code:title=test_eat_glass}
 ==
 ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 158, in test_eat_glass
     .encode("utf-8"))
   File "build\bdist.win32\egg\ccmlib\node.py", line 613, in run_cqlsh
     p.stdin.write(cmd + ';\n')
 IOError: [Errno 22] Invalid argument
 {code}
 {code:title=test_simple_insert}
 ==
 ERROR: test_simple_insert (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 35, in test_simple_insert
     cursor.execute("select id, value from simple.simple");
   File "c:\src\cassandra-dbapi2\cql\cursor.py", line 80, in execute
     response = self.get_response(prepared_q, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 77, in get_response
     return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
   File "c:\src\cassandra-dbapi2\cql\thrifteries.py", line 98, in handle_cql_execution_errors
     raise cql.ProgrammingError("Bad Request: %s" % ire.why)
 ProgrammingError: Bad Request: Keyspace simple does not exist
 {code}
 {code:title=test_with_empty_values}
 ==
 ERROR: test_with_empty_values (cqlsh_tests.TestCqlsh)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 347, in test_with_empty_values
     output = self.run_cqlsh(node1, "select intcol, bigintcol, varintcol from CASSANDRA_7196.has_all_types where num in (0, 1, 2, 3, 4)")
   File "C:\src\cassandra-dtest\cqlsh_tests.py", line 373, in run_cqlsh
     p = subprocess.Popen([ cli ] + args, env=env, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
   File "C:\Python27\lib\subprocess.py", line 710, in __init__
     errread, errwrite)
   File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
     startupinfo)
 WindowsError: [Error 193] %1 is not a valid Win32 application
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/6] git commit: Don't use strict consistency when replacing

2014-08-01 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 dfa6b9802 -> 6e157625f
  refs/heads/cassandra-2.1.0 080aa94c0 -> 2f48723bf
  refs/heads/trunk 5c1220379 -> db4342896


Don't use strict consistency when replacing

Patch by brandonwilliams, reviewed by tjake for CASSANDRA-7568


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f48723b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f48723b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f48723b

Branch: refs/heads/cassandra-2.1
Commit: 2f48723bfff605af31102fcfcd767b3dc878e097
Parents: 080aa94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:50:34 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:50:34 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f6f30fa..a8299c6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-final
+ * Don't use strict consistency when replacing (CASSANDRA-7568)
  * Fix min/max cell name collection on 2.0 SSTables with range
tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 2308d30..14d24fc 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -128,7 +128,7 @@ public class RangeStreamer
 
     public void addRanges(String keyspaceName, Collection<Range<Token>> ranges)
     {
-        Multimap<Range<Token>, InetAddress> rangesForKeyspace = useStrictConsistency && tokens != null
+        Multimap<Range<Token>, InetAddress> rangesForKeyspace = !DatabaseDescriptor.isReplacing() && useStrictConsistency && tokens != null
                 ? getAllRangesWithStrictSourcesFor(keyspaceName, ranges) : getAllRangesWithSourcesFor(keyspaceName, ranges);
 
 if (logger.isDebugEnabled())



[2/6] git commit: Don't use strict consistency when replacing

2014-08-01 Thread brandonwilliams
Don't use strict consistency when replacing

Patch by brandonwilliams, reviewed by tjake for CASSANDRA-7568


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f48723b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f48723b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f48723b

Branch: refs/heads/cassandra-2.1.0
Commit: 2f48723bfff605af31102fcfcd767b3dc878e097
Parents: 080aa94
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:50:34 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:50:34 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f6f30fa..a8299c6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-final
+ * Don't use strict consistency when replacing (CASSANDRA-7568)
  * Fix min/max cell name collection on 2.0 SSTables with range
tombstones (CASSANDRA-7593)
  * Tolerate min/max cell names of different lengths (CASSANDRA-7651)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f48723b/src/java/org/apache/cassandra/dht/RangeStreamer.java
--
diff --git a/src/java/org/apache/cassandra/dht/RangeStreamer.java 
b/src/java/org/apache/cassandra/dht/RangeStreamer.java
index 2308d30..14d24fc 100644
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@ -128,7 +128,7 @@ public class RangeStreamer
 
     public void addRanges(String keyspaceName, Collection<Range<Token>> ranges)
     {
-        Multimap<Range<Token>, InetAddress> rangesForKeyspace = useStrictConsistency && tokens != null
+        Multimap<Range<Token>, InetAddress> rangesForKeyspace = !DatabaseDescriptor.isReplacing() && useStrictConsistency && tokens != null
                 ? getAllRangesWithStrictSourcesFor(keyspaceName, ranges) : getAllRangesWithSourcesFor(keyspaceName, ranges);
 
 if (logger.isDebugEnabled())



[5/6] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-01 Thread brandonwilliams
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e157625
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e157625
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e157625

Branch: refs/heads/cassandra-2.1
Commit: 6e157625f2ee4979441d6af4f91cdfd8edc774ce
Parents: dfa6b98 2f48723
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Aug 1 10:51:01 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Aug 1 10:51:01 2014 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/dht/RangeStreamer.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e157625/CHANGES.txt
--
diff --cc CHANGES.txt
index 578ba87,a8299c6..5954236
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,5 +1,18 @@@
 +2.1.1
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 +Merged from 2.0:
 + * (cqlsh) Add tab-completion for CREATE/DROP USER IF [NOT] EXISTS 
(CASSANDRA-7611)
 + * Catch errors when the JVM pulls the rug out from GCInspector 
(CASSANDRA-5345)
 +
 +
  2.1.0-final
+  * Don't use strict consistency when replacing (CASSANDRA-7568)
   * Fix min/max cell name collection on 2.0 SSTables with range
 tombstones (CASSANDRA-7593)
   * Tolerate min/max cell names of different lengths (CASSANDRA-7651)



[jira] [Updated] (CASSANDRA-7638) Revisit GCInspector

2014-08-01 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7638:


Reviewer: Yuki Morishita

 Revisit GCInspector
 ---

 Key: CASSANDRA-7638
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7638
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.1.0

 Attachments: 7638.txt


 In CASSANDRA-2868 we had to change the api that GCI uses to avoid the native 
 memory leak, but this caused GCI to be less reliable and more 'best effort' 
 than before where it was 100% reliable.  Let's revisit this and see if the 
 native memory leak is fixed in java7.
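As background for the API question above: outside of whatever mechanism GCInspector uses internally, the portable way to observe collector activity is the standard GC MXBeans. A minimal, Cassandra-independent poll (illustrative only, not the GCInspector implementation):

{code:title=GcPoll.java}
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcPoll
{
    public static void main(String[] args) throws InterruptedException
    {
        while (true)
        {
            // Cumulative counts/durations per collector; deltas between polls
            // give a "best effort" view of recent GC activity.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
                System.out.printf("%s: %d collections, %d ms total%n",
                                  gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            Thread.sleep(1000);
        }
    }
}
{code}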



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6313) Refactor dtests to use python driver instead of cassandra-dbapi2

2014-08-01 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-6313:
--

Assignee: Philip Thompson  (was: Ryan McGuire)

 Refactor dtests to use python driver instead of cassandra-dbapi2
 

 Key: CASSANDRA-6313
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6313
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Philip Thompson

 cassandra-dbapi2 is effectively deprecated. The python driver is the future, 
 so we should refactor our dtests to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6313) Refactor dtests to use python driver instead of cassandra-dbapi2

2014-08-01 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082391#comment-14082391
 ] 

Philip Thompson commented on CASSANDRA-6313:


All dtests have been converted to use the most recent DataStax python 
driver. The changes are in the python-driver branch of dtests. Once 2.1 is 
released and runs are stable on cassci, we will move them over. 

https://github.com/riptano/cassandra-dtest/tree/python-driver
http://cassci.datastax.com/job/cassandra-2.1_dtest_pydriver/

 Refactor dtests to use python driver instead of cassandra-dbapi2
 

 Key: CASSANDRA-6313
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6313
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Philip Thompson

 cassandra-dbapi2 is effectively deprecated. The python driver is the future, 
 so we should refactor our dtests to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

