[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220674#comment-14220674
 ] 

Andreas Ländle commented on CASSANDRA-8192:
---

Maybe this helps - I can reproduce this call stack on a 64-bit machine (8 GB of 
RAM) running JetBrains Upsource, so the heap available to Cassandra should be 
big enough.

C:\Tools\Upsource\internal\java\windows-amd64\jre\bin\java.exe, -ea, 
-XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=..\..\logs\cassandra, 
-Dfile.encoding=UTF-8, -Xbootclasspath/a:lib/jamm/jamm-0.2.6.jar, 
-javaagent:lib/jamm/jamm-0.2.6.jar, -d64, -Xmx3000m, -XX:MaxPermSize=128m, 
-jar, launcher\lib\app-wrapper\app-wrapper.jar, Apache Cassandra, AppStarter, 
com.jetbrains.cassandra.service.CassandraServiceMain] (at path: 
C:\Tools\Upsource\apps\cassandra, system properties: 
{launcher.app.home=C:\Tools\Upsource\apps\cassandra, 
launcher.app.logs.dir=C:\Tools\Upsource\logs\cassandra})

18:05:01.043 [SSTableBatchOpen:4] ERROR o.a.c.service.CassandraDaemon - 
Exception in thread Thread[SSTableBatchOpen:4,5,main]
java.lang.AssertionError: null
at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:135)
 ~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
 ~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
 ~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
 ~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
~[cassandra-all-2.1.1.jar:2.1.1]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]

Please let me know if I can provide additional information that may help you.

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs 
 during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 

[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: 8280-2.0-v2.txt
8280-2.1-v2.txt

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0-v2.txt, 8280-2.0.txt, 8280-2.1-v2.txt, 
 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just fine, 
 and Cassandra will then crash on the next commit and never recover. I rated it 
 as Critical since it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
 
     def __init__(self):
         ips = ['127.0.0.1']
         ap = PlainTextAuthProvider(username='cs', password='cs')
         reconnection_policy = ConstantReconnectionPolicy(20.0, max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
                           reconnection_policy=reconnection_policy)
         self.session = cluster.connect('cs')
 
     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)
 
     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = 'X' * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?);"
         args = ('foo', k1, long_string)
 
         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
                              "{'class': 'SimpleStrategy', 'replication_factor': 1}")
 
 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1053)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  

[jira] [Commented] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-21 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220710#comment-14220710
 ] 

Sam Tunnicliffe commented on CASSANDRA-8280:


My original patches didn't cover conditional updates, so I've attached v2 of 
both. Also, because there's no easy way to execute conditional CQL in a unit 
test, I've written a dtest which is included in 
[this|https://github.com/riptano/cassandra-dtest/pull/119] pull request. The 
dtest arguably makes the unit test redundant, but it felt wrong to remove it, 
so I've left it in the patches.

I should also note that in cases where the indexed column is a clustering 
component or partition key, we already reject updates with these oversized 
values. In 2.0, though, this results in a ServerError rather than an 
InvalidRequest, so I've added a little more validation to return a nicer error 
response.
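For context, the 64K boundary comes from the two-byte length prefix that {{writeWithShortLength}} uses, so the largest encodable value is 65535 bytes. A rough Python sketch of that framing (the function name is borrowed from the stack trace; raising a client-side error is my substitution for the server's assertion):

```python
import struct

MAX_SHORT_LENGTH = 0xFFFF  # 65535: largest length an unsigned 16-bit prefix can encode

def write_with_short_length(value: bytes) -> bytes:
    """Mimic the on-disk framing: 2-byte big-endian length, then the payload."""
    if len(value) > MAX_SHORT_LENGTH:
        # Cassandra 2.1.1 hits an AssertionError here during memtable flush;
        # a friendlier behaviour is to reject the value up front.
        raise ValueError("value of %d bytes exceeds the 64K limit" % len(value))
    return struct.pack(">H", len(value)) + value

# 65535 bytes still fits: 2-byte prefix + payload
assert len(write_with_short_length(b"X" * 65535)) == 65537
# 65536 bytes (as in the repro script) cannot be encoded
try:
    write_with_short_length(b"X" * 65536)
except ValueError:
    pass
```

This is why the insert itself appears to succeed but the node dies later: the oversized cell is only serialized with the short-length prefix when the memtable is flushed.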


 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0-v2.txt, 8280-2.0.txt, 8280-2.1-v2.txt, 
 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just fine, 
 and Cassandra will then crash on the next commit and never recover. I rated it 
 as Critical since it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
 
     def __init__(self):
         ips = ['127.0.0.1']
         ap = PlainTextAuthProvider(username='cs', password='cs')
         reconnection_policy = ConstantReconnectionPolicy(20.0, max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
                           reconnection_policy=reconnection_policy)
         self.session = cluster.connect('cs')
 
     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)
 
     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = 'X' * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?);"
         args = ('foo', k1, long_string)
 
         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
                              "{'class': 'SimpleStrategy', 'replication_factor': 1}")
 
 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 

[jira] [Updated] (CASSANDRA-8181) Intermittent failure of SSTableImportTest unit test

2014-11-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8181:
--
Attachment: CASSANDRA-8181.txt

The problem came from the fact that 
{{testBackwardCompatibilityOfImportWithAsciiKeyValidator}} was setting the 
System property {{skip.key.validator}} to true, while 
{{testImportWithAsciiKeyValidator}} was expecting the default value, which is 
false. Since, with Java 7, JUnit no longer guarantees the order in which test 
methods are run, the second test would fail whenever 
{{testBackwardCompatibilityOfImportWithAsciiKeyValidator}} ran before 
{{testImportWithAsciiKeyValidator}}, and pass otherwise.
The patch sets the System property {{skip.key.validator}} to false in 
{{testImportWithAsciiKeyValidator}}.
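The failure mode generalizes: any test that mutates shared state without restoring it becomes order-dependent. A minimal Python analogue of the fix (the dict stands in for JVM system properties; all names are hypothetical):

```python
import unittest

# Stand-in for the JVM's system properties; shared, mutable, process-wide.
SYSTEM_PROPERTIES = {"skip.key.validator": "false"}  # "false" is the default

class ImportTests(unittest.TestCase):
    def test_backward_compatibility(self):
        # This test flips the property and (like the original Java test)
        # never restores it, leaking state to whichever test runs next.
        SYSTEM_PROPERTIES["skip.key.validator"] = "true"
        self.assertEqual(SYSTEM_PROPERTIES["skip.key.validator"], "true")

    def test_with_ascii_key_validator(self):
        # The fix: set the property explicitly instead of assuming the
        # default survived from process start-up.
        SYSTEM_PROPERTIES["skip.key.validator"] = "false"
        self.assertEqual(SYSTEM_PROPERTIES["skip.key.validator"], "false")

if __name__ == "__main__":
    unittest.main()
```

With the explicit assignment, both tests pass regardless of execution order; without it, the second test only passes when it happens to run first.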

 Intermittent failure of SSTableImportTest unit test
 ---

 Key: CASSANDRA-8181
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8181
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Michael Shuler
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: CASSANDRA-8181.txt, 
 SSTableImportTest_failure_system.log.gz


 {noformat}
 [junit] Testsuite: org.apache.cassandra.tools.SSTableImportTest
 [junit] Tests run: 8, Failures: 1, Errors: 0, Time elapsed: 6.712 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 1 keys...
 [junit] 1 keys imported successfully.
 [junit] Counting keys to import, please wait... (NOTE: to skip this use 
 -n num_keys)
 [junit] Importing 2 keys...
 [junit] 2 keys imported successfully.
 [junit] -  ---
 [junit] Testcase: 
 testImportWithAsciiKeyValidator(org.apache.cassandra.tools.SSTableImportTest):
 FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError
 [junit] at 
 org.apache.cassandra.tools.SSTableImportTest.testImportWithAsciiKeyValidator(SSTableImportTest.java:166)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.tools.SSTableImportTest FAILED
 {noformat}
 testImportWithAsciiKeyValidator was added in CASSANDRA-7498 and fails as 
 above occasionally (~10-15% of runs) in CI. Attached is the system.log from 
 the failed test on 2.1 HEAD (8e5fdc2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8353) Prepared statement doesn't revalidate after table schema changes

2014-11-21 Thread JIRA
Michał Jaszczyk created CASSANDRA-8353:
--

 Summary: Prepared statement doesn't revalidate after table schema 
changes
 Key: CASSANDRA-8353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8353
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Michał Jaszczyk


Given this simple table:
{code}
CREATE TABLE test1 (
  key TEXT,
  value TEXT,
  PRIMARY KEY (key)
);
{code}
I prepare the following statement:
{code}
SELECT * FROM test1;
{code}
I run queries based on this statement and they return the expected results.
Then I update schema definition like this:
{code}
ALTER TABLE test1 ADD value2 TEXT;
{code}
I populate the value2 values and use the same statement again. The results 
returned by the same query don't include value2. I'm sure it is not cached in 
the driver/application, because I started a new process after changing the schema.

It looks to me like a bug. Please correct me if it works like this on purpose.

I'm using the Ruby CQL driver, but I believe that is not related.





[jira] [Commented] (CASSANDRA-8349) ALTER KEYSPACE causes tables not to be found

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220821#comment-14220821
 ] 

Aleksey Yeschenko commented on CASSANDRA-8349:
--

This is most likely a python-driver issue. Have you tried simply restarting 
cqlsh?

 ALTER KEYSPACE causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor

 Running Cassandra 2.1.2 on a single node.
 Reproduction steps in cqlsh:
 CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE a.a (a INT PRIMARY KEY);
 INSERT INTO a.a (a) VALUES (1);
 SELECT * FROM a.a;
 ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 2};
 SELECT * FROM a.a;
 DESCRIBE KEYSPACE a
 Errors:
 Column family 'a' not found
 Workaround(?):
 Restart the instance





[jira] [Created] (CASSANDRA-8354) A better story for dealing with empty values

2014-11-21 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-8354:
---

 Summary: A better story for dealing with empty values
 Key: CASSANDRA-8354
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8354
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
 Fix For: 3.0


In CQL, a value of any type can be empty, even for types for which such 
values don't make any sense (int, uuid, ...). Note that this is different from 
having no value (i.e. a {{null}}). This is due to historical reasons, and we 
can't entirely disallow it for backward compatibility, but it's pretty painful 
when working with CQL since you always need to be defensive about such largely 
nonsensical values.

This is particularly annoying with UDF: those empty values are represented as 
{{null}} for UDF and that plays weirdly with UDF that use unboxed native types.

So I would suggest that we introduce variations of the types that don't accept 
empty byte buffers, for those types for which empty is not a particularly 
sensible value.

Ideally we'd use those variants by default, that is:
{noformat}
CREATE TABLE foo (k text PRIMARY KEY, v int)
{noformat}
would not accept empty values for {{v}}. But
{noformat}
CREATE TABLE foo (k text PRIMARY KEY, v int ALLOW EMPTY)
{noformat}
would.

Similarly, for UDF, a function like:
{noformat}
CREATE FUNCTION incr(v int) RETURNS int LANGUAGE JAVA AS 'return v + 1';
{noformat}
would be guaranteed to be applied only where no empty values are allowed. A 
function that wants to handle empty values could be created with:
{noformat}
CREATE FUNCTION incr(v int ALLOW EMPTY) RETURNS int ALLOW EMPTY LANGUAGE JAVA 
AS 'return (v == null) ? null : v + 1';
{noformat}

Of course, doing that has the problem of backward compatibility. One option 
could be to say that if a type doesn't accept empties, but we do have an empty 
value internally, then we convert it to some reasonably sensible default value 
(0 for numeric types, the smallest possible uuid for uuids, etc.). This way, we 
could allow conversion of types to and from 'ALLOW EMPTY'. And maybe we'd say 
that existing compact tables get the 'ALLOW EMPTY' flag for their types by 
default.
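The fallback conversion described above could be sketched as follows (the type names and chosen defaults are illustrative assumptions, not an actual Cassandra API):

```python
import struct

# Hypothetical per-type defaults used when a legacy empty buffer is read
# through a type that no longer allows empties.
EMPTY_DEFAULTS = {
    "int": struct.pack(">i", 0),   # numeric types default to 0
    "uuid": b"\x00" * 16,          # the smallest possible uuid
}

def coerce_empty(type_name: str, value: bytes) -> bytes:
    """Replace an internally-stored empty buffer with the type's default."""
    if value == b"":
        return EMPTY_DEFAULTS[type_name]
    return value

# An empty int buffer reads back as 0; non-empty values pass through untouched.
assert struct.unpack(">i", coerce_empty("int", b""))[0] == 0
assert coerce_empty("int", struct.pack(">i", 7)) == struct.pack(">i", 7)
```

The appeal of this scheme is that a column can be switched to the non-empty variant without rewriting existing SSTables: old empty cells simply decode to the default.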






[jira] [Resolved] (CASSANDRA-8350) Drop,Create CF, insert 658, one record fail to insert

2014-11-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8350.
--
Resolution: Duplicate

In the future, until you upgrade to 2.1, truncate before you drop a table if 
you plan on reusing it.

 Drop,Create CF, insert 658,  one record fail to insert
 --

 Key: CASSANDRA-8350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8350
 Project: Cassandra
  Issue Type: Bug
Reporter: Harpreet Kaur

 There was a change in the definition of a CF, so we: 
 Dropped the CF 
 Created the CF 
 Inserted 658 records 
 cqlsh: select count(*) from CF; returns 657 
 One record did not insert. 
 Tried inserting it as a one-off: no errors in cqlsh, but the record did not insert. 
 Tracing on: select * from CF where id='blah'; 
 shows 
 Read 0 live and 73 tombstoned cells 
 Changed gc_grace_seconds to 600 for the CF, 
 then performed nodetool flush, compact, repair on all 12 nodes; it did not help. 





[jira] [Commented] (CASSANDRA-8053) Support for user defined aggregate functions

2014-11-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220823#comment-14220823
 ] 

Sylvain Lebresne commented on CASSANDRA-8053:
-

Regarding the syntax above, as far as I can tell, Postgres syntax doesn't have 
a {{FINALTYPE}}; they infer it from {{FINALFUNC}} (or from {{STYPE}} if there 
is no {{FINALFUNC}}). So I'd rather not add something useless. Now, Postgres 
is not terribly consistent in that {{STYPE}} is arguably also dispensable, since 
by definition it should be the type of the first argument of {{SFUNC}}. So I'd 
get rid of it too and make the syntax simply: {noformat}
CREATE AGGREGATE aggregateName(param-type...)
 SFUNC name-of-state-function
 [ FINALFUNC name-of-final-function ]
 [ INITCOND term ]
{noformat}
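For intuition, these clauses map onto an ordinary fold: SFUNC advances the state once per row starting from INITCOND, and FINALFUNC maps the final state to the result. A Python sketch of that evaluation model (illustrative only, not Cassandra code):

```python
from functools import reduce

def run_aggregate(rows, sfunc, initcond, finalfunc=None):
    """Evaluate an aggregate the way the proposed syntax describes it:
    fold SFUNC over the rows starting from INITCOND, then apply FINALFUNC
    (or return the state directly when no FINALFUNC is declared)."""
    state = reduce(sfunc, rows, initcond)
    return finalfunc(state) if finalfunc else state

# e.g. an 'avg' aggregate: the state is (count, total), FINALFUNC divides.
sfunc = lambda state, v: (state[0] + 1, state[1] + v)
finalfunc = lambda state: state[1] / state[0]
assert run_aggregate([2, 4, 6], sfunc, (0, 0), finalfunc) == 4.0

# A 'sum' aggregate needs no FINALFUNC at all.
assert run_aggregate([1, 2, 3], lambda s, v: s + v, 0) == 6
```

The sketch also shows why STYPE is redundant in the proposed syntax: the state type is exactly the type of SFUNC's first argument (and of INITCOND).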


Other than that, two general comments from a (very quick, not really a review) 
read of the patch:
* I'd really rather not save both scalar and aggregate functions in the same 
system table. Let's just add a new system table for aggregates that has just 
the info that an aggregate function needs.
* I'm not sure how much of a fan I am of the delayed resolution of functions. I 
understand the problem it's trying to solve, that an aggregate definition may 
reach a node before the definition of the scalar function it uses, but I'm not 
sure it's worth adding complexity for that. This problem is no different from 
trying to create a table right after its keyspace, or an index after its table, 
and the status quo is that it's the job of the client to wait for schema 
agreement before moving on with DML definitions. I'd rather use the same 
assumption here.


 Support for user defined aggregate functions
 

 Key: CASSANDRA-8053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8053
 Project: Cassandra
  Issue Type: New Feature
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: cql, udf
 Fix For: 3.0

 Attachments: 8053v1.txt


 CASSANDRA-4914 introduces aggregate functions.
 This ticket is about to decide how we can support user defined aggregate 
 functions. UD aggregate functions should be supported for all UDF flavors 
 (class, java, jsr223).
 Things to consider:
 * Special implementations for each scripting language should be omitted
 * No exposure of internal APIs (e.g. {{AggregateFunction}} interface)
 * No need for users to deal with serializers / codecs





[jira] [Commented] (CASSANDRA-8353) Prepared statement doesn't revalidate after table schema changes

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220845#comment-14220845
 ] 

Aleksey Yeschenko commented on CASSANDRA-8353:
--

Do you mean 'invalidate', not 'revalidate'?

We can't just silently change things under the hood, because the drivers would 
have the resultset metadata without {{value2}}.

What we can (and probably should) do is extend CASSANDRA-7566 and invalidate 
such statements once schema changes like these happen (an added/altered/dropped 
column or a dropped index).
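A toy Python model of that invalidation behaviour (purely illustrative, not the actual server code):

```python
class PreparedStatementCache:
    """Toy model of server-side prepared statements keyed by statement id.

    Mirrors the suggested behaviour: a schema change to a table evicts every
    statement prepared against it, forcing clients to re-prepare and thus
    pick up fresh resultset metadata (e.g. a newly added column).
    """
    def __init__(self):
        self._statements = {}  # stmt_id -> (table, column_names)

    def prepare(self, stmt_id, table, columns):
        self._statements[stmt_id] = (table, list(columns))

    def on_schema_change(self, table):
        # Drop every statement touching the altered table.
        self._statements = {
            k: v for k, v in self._statements.items() if v[0] != table
        }

    def lookup(self, stmt_id):
        return self._statements.get(stmt_id)

cache = PreparedStatementCache()
cache.prepare("q1", "test1", ["key", "value"])
cache.on_schema_change("test1")  # e.g. ALTER TABLE test1 ADD value2 TEXT
assert cache.lookup("q1") is None  # client must re-prepare SELECT * FROM test1
```

Statements against unrelated tables survive the change, so the cost of invalidation stays proportional to what the ALTER actually touched.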



 Prepared statement doesn't revalidate after table schema changes
 

 Key: CASSANDRA-8353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8353
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Michał Jaszczyk

 Given this simple table:
 {code}
 CREATE TABLE test1 (
   key TEXT,
   value TEXT,
   PRIMARY KEY (key)
 );
 {code}
 I prepare the following statement:
 {code}
 SELECT * FROM test1;
 {code}
 I run queries based on this statement and they return the expected results.
 Then I update schema definition like this:
 {code}
 ALTER TABLE test1 ADD value2 TEXT;
 {code}
 I populate the value2 values and use the same statement again. The results 
 returned by the same query don't include value2. I'm sure it is not cached 
 in the driver/application, because I started a new process after changing 
 the schema.
 It looks to me like a bug. Please correct me if it works like this on purpose.
 I'm using the Ruby CQL driver, but I believe that is not related.





[jira] [Commented] (CASSANDRA-8345) Client notifications should carry the entire delta of the information that changed

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220853#comment-14220853
 ] 

Aleksey Yeschenko commented on CASSANDRA-8345:
--

Fair enough.

Another issue is schema versioning. This change would necessarily require us to 
modify internal schema representation in lock-step with native protocol 
versions (and support serializing schema changes for the previous native 
protocol versions), which is a heavy burden.

We should really just make drivers fetch as little of the new schema as they 
possibly can - then you get the same total bandwidth per update and only pay a 
bit more in the extra roundtrip cost.

And don't worry about C* handling those queries - it can handle them.

 Client notifications should carry the entire delta of the information that 
 changed
 --

 Key: CASSANDRA-8345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8345
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
  Labels: protocolv4

 Currently when the schema changes, a {{SCHEMA_CHANGE}} notification is sent 
 to the client to let it know that a modification happened in a specific table 
 or keyspace. If the client registers for these notifications, it is likely 
 that it actually cares about having an up-to-date version of this information, 
 so the next logical step is for the client to query the {{system}} keyspace to 
 retrieve the latest version of the schema for the particular element that was 
 mentioned in the notification.
 The same thing happens with the {{TOPOLOGY_CHANGE}} notification, as the 
 client will follow up with a query to retrieve the details that changed in the 
 {{system.peers}} table.
 It would be interesting to send the entire delta of the information that 
 changed within the notification. I see several advantages to this:
 * This would ensure that the data sent to the client is as small as possible, 
 as such a delta will always be smaller than the resultset that would 
 eventually be received for a formal query on the {{system}} keyspace.
 * This avoids the Cassandra node receiving plenty of queries after it issues 
 a notification; instead it prepares the delta once and sends it to everybody.
 * This should improve the overall behaviour when dealing with very large 
 schemas with frequent changes (typically due to an attempt at implementing 
 multitenancy through separate keyspaces), as it has been observed that the 
 notifications and subsequent query traffic can become non-negligible in this 
 case.
 * This would eventually simplify the driver design by removing the need for 
 an extra asynchronous operation to follow up with, although the benefit of 
 this point will only be real once the previous versions of the protocol are 
 far behind.





[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220857#comment-14220857
 ] 

Aleksey Yeschenko commented on CASSANDRA-6717:
--

bq. That probably means that for clustering columns definition we'd have to 
keep a boolean on whether the clustering order is reversed or not.

Actually, I think we should store the clustering order in 
{{system_schema.tables}} instead. That way we wouldn't need the 
{{is_reversed}} boolean.

It would've allowed us to get rid of {{component_index}} too, if not for the 
composite partition key columns.

 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0


 There are a few problems/improvements that can be addressed in the way we 
 store schema:
 # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
 redundant (or almost: we'd also need to store whether the table is COMPACT or 
 not, which we don't currently, but that is easy and probably a good idea 
 anyway); it can be entirely reconstructed from the info in schema_columns 
 (the same is true of key_validator and subcomparator, and replacing 
 default_validator by a COMPACT_VALUE column in all cases is relatively 
 simple). And storing the comparator as an opaque string broke concurrent 
 updates of sub-parts of said comparator (typically concurrent collection 
 additions, or altering 2 separate clustering columns), so it's really worth 
 removing it.
 # CASSANDRA-4603: it's time to get rid of those ugly JSON maps. I'll note 
 that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
 think we should fix it once and for all nonetheless (see below).
 # For CASSANDRA-6382 and to allow indexing both map keys and values at the 
 same time, we'd need to be able to have more than one index definition for a 
 given column.
 # There are a few mismatches in table options between the ones stored in the 
 schema and the ones used when declaring/altering a table, which would be nice 
 to fix. The compaction, compression and replication maps were already 
 mentioned in CASSANDRA-4603, but also, for some reason, 
 'dclocal_read_repair_chance' in CQL is called just 'local_read_repair_chance' 
 in the schema table, and 'min/max_compaction_threshold' are column family 
 options in the schema but just compaction options in CQL (which makes more 
 sense).
 None of those issues are major, and we could probably deal with them 
 independently, but it might be simpler to just fix them all in one shot, so I 
 wanted to sum them all up here. In particular, the fact that 
 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
 but it may limit future stuff too), which suggests we should migrate it to a 
 new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
 to rename schema_columnfamilies to schema_tables for the years to come, since 
 that's the preferred vernacular for CQL.
 Overall, what I would suggest is to move all schema tables to a new keyspace, 
 named 'schema' for instance (or 'system_schema', but I prefer the shorter 
 version), and fix all the issues above at once. Since we currently don't 
 exchange schema between nodes of different versions, all we'd need for that 
 is a one-shot startup migration, and overall I think it could be simpler for 
 clients to deal with one clear migration than to have to handle minor 
 individual changes all over the place. I also think it's somewhat cleaner 
 conceptually to have schema tables in their own keyspace, since they are 
 replicated through a different mechanism than other system tables.
 If we do that, we could, for instance, migrate to the following schema tables 
 (details up for discussion of course):
 {noformat}
 CREATE TYPE user_type (
   name text,
   column_names list<text>,
   column_types list<text>
 )
 CREATE TABLE keyspaces (
   name text PRIMARY KEY,
   durable_writes boolean,
   replication map<string, string>,
   user_types map<string, user_type>
 )
 CREATE TYPE trigger_definition (
   name text,
   options map<text, text>
 )
 CREATE TABLE tables (
   keyspace text,
   name text,
   id uuid,
   table_type text, // COMPACT, CQL or SUPER
   dropped_columns map<text, bigint>,
   triggers map<text, trigger_definition>,
   // options
   comment text,
   compaction map<text, text>,
   compression map<text, text>,
   read_repair_chance double,
   dclocal_read_repair_chance double,
   gc_grace_seconds int,
   caching text,
   rows_per_partition_to_cache text,
   default_time_to_live int,
   min_index_interval int,
   max_index_interval int,
   speculative_retry text,
   

[jira] [Commented] (CASSANDRA-8281) CQLSSTableWriter close does not work

2014-11-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220858#comment-14220858
 ] 

Benjamin Lerer commented on CASSANDRA-8281:
---

The problem is a configuration issue. The client mode needs to be set to true: 
{{org.apache.cassandra.config.Config.setClientMode(true);}}. 
My understanding is that {{CQLSSTableWriter}} is always used in client mode, 
so I will make sure that clientMode is set to true internally. 

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer

 I called CQLSSTableWriter.close(), but the program still cannot exit, even 
 though the same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed and blocks the program from 
 exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8332) Null pointer after dropping keyspace

2014-11-21 Thread Jacek Lewandowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220863#comment-14220863
 ] 

Jacek Lewandowski commented on CASSANDRA-8332:
--

I can see the problem with dropping a keyspace right after write-intensive 
operations in 2.0.11.

The way to reproduce: 
1. clean the data directory
2. start C*
3. run subsequent trials 

It happens after the second trial or later, depending on the amount of data 
written during a single trial. However, I couldn't reproduce it after a 
single trial. 

Now, the trial is:
1. create keyspace and table (bigint pk, 5 x text)
2. write *b* batches of *n* inserts in parallel (using Java Driver 2.1.2, 
protocol V2) with *p* workers
3. drop keyspace
4. assert that the keyspace was really dropped by getting metadata from Cluster 
object

The procedure breaks in step 1, because asserting on the Cluster metadata is 
useless: it says the keyspace is missing. However, in the next trial, when it 
tries to create the keyspace again, Cassandra says that the keyspace already 
exists. Note that the DROP KEYSPACE statement doesn't throw any exception, 
either in the Java Driver or in CQLSH.  

I tried different values for b, n and p, for example b=50k, n=50, p=6, with 
each text field 30 bytes long (10 three-byte UTF-8 characters).
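As a sanity check of that arithmetic, ten 3-byte UTF-8 characters do encode to 30 bytes; a minimal Python check (the specific character is an arbitrary choice):

```python
# Characters in the range U+0800..U+FFFF (e.g. "水") encode to 3 bytes each
# in UTF-8, so a field of 10 such characters occupies 30 bytes.
s = "水" * 10
encoded = s.encode("utf-8")
print(len(s), len(encoded))  # 10 30
```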

The system log shows various errors. Some of them occur even during step 2 of 
the second trial:
{noformat}
ERROR 13:01:00,422 Exception in thread Thread[CompactionExecutor:4,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
/var/lib/cassandra/data/ks/tab/ks-tab-jb-2-Data.db (No such file or directory)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1366)
at org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1172)
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1184)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Caused by: java.io.FileNotFoundException: 
/var/lib/cassandra/data/ks/tab/ks-tab-jb-2-Data.db (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at 
org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.<init>(CompressedThrottledReader.java:34)
at 
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48)
... 17 more
{noformat}

Very similar errors are written to the system log when the keyspace is being 
dropped. However, in this case I also noticed the following exceptions:
{noformat}
INFO 12:43:35,006 Compaction interrupted: 
Compaction@3e922a80-7599-3e0e-b804-6b32581a18d9(ks, tab, 1579024/110473935)bytes
ERROR 12:43:35,007 Unexpected error during query
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Tried to hard link to file that does not exist 
/var/lib/cassandra/data/ks/tab/ks-tab-jb-10-Statistics.db
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:413)
at 
org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:285)
at 
org.apache.cassandra.service.MigrationManager.announceKeyspaceDrop(MigrationManager.java:259)
at 
org.apache.cassandra.cql3.statements.DropKeyspaceStatement.announceMigration(DropKeyspaceStatement.java:62)
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:79)
at 

[jira] [Updated] (CASSANDRA-8332) Null pointer after dropping keyspace

2014-11-21 Thread Jacek Lewandowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Lewandowski updated CASSANDRA-8332:
-
Attachment: CassandraStressTest-8332.zip

Here is a simple tool which I used to reproduce this problem. You can use it 
with the following command:

sbt/sbt run --batches 10 --inserts 100 --length 10 --parallelism 3 --host 
127.0.0.1 --trials 2

params are:
- batches - the number of batches in a single trial
- inserts - the number of inserts in a single batch
- length - the number of 3-byte characters in each text field
...


 Null pointer after dropping keyspace
 ---

 Key: CASSANDRA-8332
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8332
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Lohfink
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 2.1.3

 Attachments: CassandraStressTest-8332.zip


 After dropping a keyspace, sometimes I see this in the logs:
 {code}
 ERROR 03:40:29 Exception in thread Thread[CompactionExecutor:2,1,main]
 java.lang.AssertionError: null
   at 
 org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1142)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1896)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:68) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:233)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_71]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 {code}
 Minor issue, since it doesn't really affect anything, but the error makes it 
 look like something's wrong.  Seen on the 2.1 branch 
 (1b21aef8152d96a180e75f2fcc5afad9ded6c595); not sure how far back it goes 
 (may be post-2.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8281) CQLSSTableWriter close does not work

2014-11-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8281:
--
Attachment: CASSANDRA-8281.txt

The patch makes sure that {{CQLSSTableWriter}} always uses client mode.

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Attachments: CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit, even 
 though the same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed and blocks the program from 
 exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-8329:
---
Attachment: test_with_patch_2.0.jpg
test_no_patch_2.0.jpg

Devs, here are my test results.

h4. Test

* 12 disks, 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnail!

h5. Results - With Patch

Success. There is no longer a peak during the compaction.

!test_with_patch_2.0.jpg|thumbnail!

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220914#comment-14220914
 ] 

Alan Boudreault edited comment on CASSANDRA-8329 at 11/21/14 1:41 PM:
--

Devs, here are my test results.

h4. Test

* 12 disks, 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

[^test_no_patch_2.0.jpg]

h5. Results - With Patch

Success. There is no longer a peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.



was (Author: aboudreault):
Devs, here my test results.

h4. Test

* 12 disks of 2G of size
* Cassandra use default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: Stress the server with many concurent writes  for 45-50 minutes.

h5. Results - No Patch

We can see the peak the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnail!

h5. Results - With Patch

Success. There is no more peak during the compaction.

!test_with_patch_2.0.jpg|thumbnail!

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8354) A better story for dealing with empty values

2014-11-21 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220930#comment-14220930
 ] 

Robert Stupp commented on CASSANDRA-8354:
-

I'd prefer something like {{ALLOW NULL}} for UDFs, since {{null}} and empty 
are equivalent for a UDF (it cannot handle an _empty int_ or _empty uuid_).

 A better story for dealing with empty values
 

 Key: CASSANDRA-8354
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8354
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
 Fix For: 3.0


 In CQL, a value of any type can be empty, even for types for which such 
 values don't make any sense (int, uuid, ...). Note that it's different from 
 having no value (i.e. a {{null}}). This is due to historical reasons, and we 
 can't entirely disallow it for backward compatibility, but it's pretty 
 painful when working with CQL, since you always need to be defensive about 
 such largely nonsensical values.
 This is particularly annoying with UDFs: those empty values are represented 
 as {{null}} for UDFs, and that plays weirdly with UDFs that use unboxed 
 native types.
 So I would suggest that we introduce variations of the types that don't 
 accept empty byte buffers, for those types for which empty is not a 
 particularly sensible value.
 Ideally we'd use those variant by default, that is:
 {noformat}
 CREATE TABLE foo (k text PRIMARY, v int)
 {noformat}
 would not accept empty values for {{v}}. But
 {noformat}
 CREATE TABLE foo (k text PRIMARY, v int ALLOW EMPTY)
 {noformat}
 would.
 Similarly, for UDF, a function like:
 {noformat}
 CREATE FUNCTION incr(v int) RETURNS int LANGUAGE JAVA AS 'return v + 1';
 {noformat}
 would be guaranteed to only be applied where no empty values are allowed. A 
 function that wants to handle empty values could be created with:
 {noformat}
 CREATE FUNCTION incr(v int ALLOW EMPTY) RETURNS int ALLOW EMPTY LANGUAGE JAVA 
 AS 'return (v == null) ? null : v + 1';
 {noformat}
 Of course, doing that has the problem of backward compatibility. One option 
 could be to say that if a type doesn't accept empties, but we do have an 
 empty value internally, then we convert it to some reasonably sensible 
 default value (0 for numeric values, the smallest possible uuid for uuids, 
 etc.). This way, we could allow conversion of types to and from 'ALLOW 
 EMPTY'. And maybe we'd say that existing compact tables get the 'ALLOW 
 EMPTY' flag for their types by default.
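The suggested default-value conversion could be sketched as follows (a hypothetical illustration in Python, not Cassandra code): an empty buffer for a type that no longer accepts empties is coerced to the type's default, while a true null is preserved:

```python
import struct

# Hypothetical sketch of the proposed rule for int: an empty byte buffer is
# coerced to the default value 0, while a missing value (null) stays null.
def decode_int(buf):
    if buf is None:       # no value at all: a CQL null
        return None
    if len(buf) == 0:     # historical empty value: coerce to the default
        return 0
    return struct.unpack(">i", buf)[0]  # normal 4-byte big-endian int

print(decode_int(None), decode_int(b""), decode_int(struct.pack(">i", 42)))
```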



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2014-11-21 Thread Pierre Laporte (JIRA)
Pierre Laporte created CASSANDRA-8355:
-

 Summary: NPE when passing wrong argument in ALTER TABLE statement
 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Priority: Minor


When I tried to change the caching strategy of a table, I provided a wrong 
argument, {{'rows_per_partition' : ALL}}, with ALL unquoted. Cassandra 
returned a SyntaxError, which is good, but it seems it was caused by a 
NullPointerException.

*Howto*
{code}
CREATE TABLE foo (k int primary key);
ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
{code}

*Output*
{code}
ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
'rows_per_partition' : ALL};] reason: NullPointerException null
{code}
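For what it's worth, the statement parses once {{ALL}} is quoted as a string literal (a sketch assuming the 2.1 caching-map syntax; the NullPointerException behind the SyntaxError is still a bug):

```sql
-- Corrected statement: 'ALL' must be a quoted string literal in the caching map
ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : 'ALL'};
```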





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8356) Slice query on super column camily with counters don't get all the data

2014-11-21 Thread JIRA
Nicolas Lalevée created CASSANDRA-8356:
--

 Summary: Slice query on super column camily with counters don't 
get all the data
 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée


We've finally been able to upgrade our cluster to 2.0.11, after CASSANDRA-7188 
was fixed.
But now slice queries on a super column family with counters don't return all 
the expected data. Given all the trouble we had, we first thought that we had 
lost data, but there is a way to actually get the data, so nothing is lost; it 
is just that Cassandra seems to incorrectly skip it.
See the following CQL log:
{noformat}
cqlsh:Theme desc table theme_view;

CREATE TABLE theme_view (
  key bigint,
  column1 varint,
  column2 text,
  value counter,
  PRIMARY KEY ((key), column1, column2)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=1.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

cqlsh:Theme select * from theme_view where key = 99421 limit 10;

 key   | column1 | column2| value
---+-++---
 99421 | -12 | 2011-03-25 |59
 99421 | -12 | 2011-03-26 | 5
 99421 | -12 | 2011-03-27 | 2
 99421 | -12 | 2011-03-28 |40
 99421 | -12 | 2011-03-29 |14
 99421 | -12 | 2011-03-30 |17
 99421 | -12 | 2011-03-31 | 5
 99421 | -12 | 2011-04-01 |37
 99421 | -12 | 2011-04-02 | 7
 99421 | -12 | 2011-04-03 | 4

(10 rows)

cqlsh:Theme select * from theme_view where key = 99421 and column1 = -12 limit 
10;

 key   | column1 | column2| value
---+-++---
 99421 | -12 | 2011-03-25 |59
 99421 | -12 | 2014-05-06 |15
 99421 | -12 | 2014-06-06 | 7
 99421 | -12 | 2014-06-10 |22
 99421 | -12 | 2014-06-11 |34
 99421 | -12 | 2014-06-12 |35
 99421 | -12 | 2014-06-13 |26
 99421 | -12 | 2014-06-14 |16
 99421 | -12 | 2014-06-15 |24
 99421 | -12 | 2014-06-16 |25

(10 rows)
{noformat}
As you can see, the second query should return data from 2012, but it does 
not. Via Thrift, we have the exact same bug.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8356) Slice query on a super column family with counters don't get all the data

2014-11-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Lalevée updated CASSANDRA-8356:
---
Summary: Slice query on a super column family with counters don't get all 
the data  (was: Slice query on super column camily with counters don't get all 
the data)

 Slice query on a super column family with counters don't get all the data
 -

 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée

 We've finally been able to upgrade our cluster to 2.0.11, after 
 CASSANDRA-7188 was fixed.
 But now slice queries on a super column family with counters don't return 
 all the expected data. Given all the trouble we had, we first thought that 
 we had lost data, but there is a way to actually get the data, so nothing is 
 lost; it is just that Cassandra seems to incorrectly skip it.
 See the following CQL log:
 {noformat}
 cqlsh:Theme desc table theme_view;
 CREATE TABLE theme_view (
   key bigint,
   column1 varint,
   column2 text,
   value counter,
   PRIMARY KEY ((key), column1, column2)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=1.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:Theme select * from theme_view where key = 99421 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2011-03-26 | 5
  99421 | -12 | 2011-03-27 | 2
  99421 | -12 | 2011-03-28 |40
  99421 | -12 | 2011-03-29 |14
  99421 | -12 | 2011-03-30 |17
  99421 | -12 | 2011-03-31 | 5
  99421 | -12 | 2011-04-01 |37
  99421 | -12 | 2011-04-02 | 7
  99421 | -12 | 2011-04-03 | 4
 (10 rows)
 cqlsh:Theme select * from theme_view where key = 99421 and column1 = -12 
 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2014-05-06 |15
  99421 | -12 | 2014-06-06 | 7
  99421 | -12 | 2014-06-10 |22
  99421 | -12 | 2014-06-11 |34
  99421 | -12 | 2014-06-12 |35
  99421 | -12 | 2014-06-13 |26
  99421 | -12 | 2014-06-14 |16
  99421 | -12 | 2014-06-15 |24
  99421 | -12 | 2014-06-16 |25
 (10 rows)
 {noformat}
 As you can see, the second query should return data from 2012, but it does 
 not. Via Thrift, we have the exact same bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Lalevée updated CASSANDRA-8356:
---
Summary: Slice query on a super column family with counters doesn't get all 
the data  (was: Slice query on a super column family with counters don't get 
all the data)

 Slice query on a super column family with counters doesn't get all the data
 ---

 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée

 We've finally been able to upgrade our cluster to 2.0.11, after 
 CASSANDRA-7188 was fixed.
 But now slice queries on a super column family with counters don't return 
 all the expected data. Given all the trouble we had, we first thought that 
 we had lost data, but there is a way to actually get the data, so nothing is 
 lost; it is just that Cassandra seems to incorrectly skip it.
 See the following CQL log:
 {noformat}
 cqlsh:Theme desc table theme_view;
 CREATE TABLE theme_view (
   key bigint,
   column1 varint,
   column2 text,
   value counter,
   PRIMARY KEY ((key), column1, column2)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=1.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:Theme select * from theme_view where key = 99421 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2011-03-26 | 5
  99421 | -12 | 2011-03-27 | 2
  99421 | -12 | 2011-03-28 |40
  99421 | -12 | 2011-03-29 |14
  99421 | -12 | 2011-03-30 |17
  99421 | -12 | 2011-03-31 | 5
  99421 | -12 | 2011-04-01 |37
  99421 | -12 | 2011-04-02 | 7
  99421 | -12 | 2011-04-03 | 4
 (10 rows)
 cqlsh:Theme select * from theme_view where key = 99421 and column1 = -12 
 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2014-05-06 |15
  99421 | -12 | 2014-06-06 | 7
  99421 | -12 | 2014-06-10 |22
  99421 | -12 | 2014-06-11 |34
  99421 | -12 | 2014-06-12 |35
  99421 | -12 | 2014-06-13 |26
  99421 | -12 | 2014-06-14 |16
  99421 | -12 | 2014-06-15 |24
  99421 | -12 | 2014-06-16 |25
 (10 rows)
 {noformat}
 As you can see, the second query should return data from 2012, but it does 
 not. Via Thrift, we have the exact same bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220948#comment-14220948
 ] 

Aleksey Yeschenko commented on CASSANDRA-8280:
--

LGTM, with one last 2.0 nit: can you put the builder total size calculation 
logic into a method in ColumnNameBuilder?

Also, you should include the extra 3 bytes per component in the calculation 
(2 for the size, one for the EOC).
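For reference, a sketch of that size calculation (an assumption based on the comment above, not Cassandra's actual code): each composite component is serialized as a 2-byte length, the value bytes, and a 1-byte end-of-component marker:

```python
# Hypothetical sketch: total serialized size of a composite column name,
# where every component costs 2 bytes (length) + value bytes + 1 byte (EOC).
def composite_size(components):
    return sum(2 + len(value) + 1 for value in components)

print(composite_size([b"abc", b"de"]))  # (2+3+1) + (2+2+1) = 11
```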

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0-v2.txt, 8280-2.0.txt, 8280-2.1-v2.txt, 
 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just 
 fine. Cassandra will then crash on the next commit and never recover, so I 
 rated it as Critical since it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
  
     def __init__(self):
         ips = ["127.0.0.1"]
         ap = PlainTextAuthProvider(username="cs", password="cs")
         reconnection_policy = ConstantReconnectionPolicy(20.0, max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
                           reconnection_policy=reconnection_policy)
         self.session = cluster.connect("cs")
  
     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)
  
     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = "X" * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?)"
         args = ("foo", k1, long_string)
  
         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
                              "{'class': 'SimpleStrategy', 'replication_factor': 1}")
  
 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
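The assertion in the log below fires in ByteBufferUtil.writeWithShortLength, which is consistent with a 2-byte length prefix: an unsigned short tops out at 65535, so a 65536-byte indexed value blows up at flush time instead of being rejected up front. A minimal Python illustration of that limit (this mirrors the *shape* of the Java method, not Cassandra's code; Java trips an assert where this sketch raises an exception):

```python
import struct

MAX_SHORT_LENGTH = 0xFFFF  # 65535: the largest length a 2-byte unsigned prefix can encode

def write_with_short_length(data: bytes) -> bytes:
    # A 2-byte big-endian length prefix followed by the payload.
    if len(data) > MAX_SHORT_LENGTH:
        raise ValueError("value too large for a short length: %d" % len(data))
    return struct.pack(">H", len(data)) + data

ok = write_with_short_length(b"X" * 65535)  # exactly at the limit: fine
overflowed = False
try:
    write_with_short_length(b"X" * 65536)   # one byte over: rejected
except ValueError:
    overflowed = True
```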
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 

[jira] [Updated] (CASSANDRA-8244) Token, DecoratedKey, RowPosition and all bound types should not make any hidden references to the database partitioner

2014-11-21 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-8244:
---
Attachment: 8244.patch

The rebased version and patch are uploaded; the comparison is [at the same 
location|https://github.com/blambov/cassandra/compare/8244-partitioner-in-token].

bq. getHeapSize on BigIntegerToken has a 'TODO: Probably wrong' comment for 
getHeapSize()
Sorry, that wasn't meant to be in the final code. Since the hashes are all 
fixed-size numbers, the heap size actually is correct, just badly named. I've 
fixed the static field name.
bq. annotate @VisibleForTesting on 
RandomPartitioner.BigIntegerToken.BigIntegerToken(String token)
Done.
bq. clean up import order on FBUtilities.java
Done.
bq. looks like changes to a couple of the .db files under test snuck in on 
commit
Done.

 Token, DecoratedKey, RowPosition and all bound types should not make any 
 hidden references to the database partitioner
 --

 Key: CASSANDRA-8244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8244
 Project: Cassandra
  Issue Type: Bug
Reporter: Branimir Lambov
Assignee: Branimir Lambov
Priority: Minor
 Fix For: 3.0

 Attachments: 8244.patch


 Currently some of the functionality of Token refers to 
 StorageService.getPartitioner() to avoid needing an extra argument. This is 
 in turn implicitly used by RowPosition and then Range, causing possible 
 problems, for example when ranges on secondary indices are used in a 
 murmur-partitioned database.
 These references should be removed to force explicit choice of partitioner by 
 callers; alternatively, the Token interface could be changed to provide a 
 reference to the partitioner that created it.
 (Note: the hidden reference to partitioner in serialization is a separate 
 issue.)
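The second alternative in the description (tokens carrying a reference to the partitioner that created them) can be sketched generically; the class and method names here are illustrative, not the patch's actual types:

```python
class Partitioner:
    """A partitioner hands out tokens and is remembered by them."""
    def __init__(self, name):
        self.name = name

    def token_for(self, key: bytes) -> "Token":
        # Toy hash for illustration; real partitioners use e.g. Murmur3 or MD5.
        return Token(sum(key) % 1000, self)

class Token:
    def __init__(self, value, partitioner):
        self.value = value
        # The token keeps an explicit reference to its creator, so callers
        # never need a hidden global like StorageService.getPartitioner().
        self.partitioner = partitioner

murmur_like = Partitioner("murmur-like")
t = murmur_like.token_for(b"row-key")
```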





[jira] [Updated] (CASSANDRA-6198) Distinguish streaming traffic at network level

2014-11-21 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-6198:
-
Attachment: 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt

 Distinguish streaming traffic at network level
 --

 Key: CASSANDRA-6198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6198
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Norman Maurer
Priority: Minor
 Attachments: 
 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt


 It would be nice to have some information in the TCP packet which network 
 teams can inspect to distinguish between streaming traffic and other organic 
 cassandra traffic. This is very useful for monitoring WAN traffic. 
 Here are some solutions:
 1) Use a different port for streaming. 
 2) Add some IP header. 
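Option 2 is what the attached patch's title suggests: setting IPTOS_THROUGHPUT (0x08, from RFC 1349) on streaming connections so network gear can classify them by the TOS/DSCP field. A hedged sketch of marking a socket this way, shown in Python for brevity (the Java equivalent would go through Socket.setTrafficClass):

```python
import socket

IPTOS_THROUGHPUT = 0x08  # RFC 1349 "maximize throughput" TOS value

def mark_streaming_socket(sock: socket.socket) -> int:
    # Network teams can then match on the IP TOS field to tell
    # streaming traffic apart from regular client traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tos = mark_streaming_socket(s)
s.close()
```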





[jira] [Updated] (CASSANDRA-6198) Distinguish streaming traffic at network level

2014-11-21 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-6198:
-
Attachment: (was: 
0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt)

 Distinguish streaming traffic at network level
 --

 Key: CASSANDRA-6198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6198
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Norman Maurer
Priority: Minor

 It would be nice to have some information in the TCP packet which network 
 teams can inspect to distinguish between streaming traffic and other organic 
 cassandra traffic. This is very useful for monitoring WAN traffic. 
 Here are some solutions:
 1) Use a different port for streaming. 
 2) Add some IP header. 





[jira] [Updated] (CASSANDRA-6198) Distinguish streaming traffic at network level

2014-11-21 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-6198:
-
Attachment: 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt

 Distinguish streaming traffic at network level
 --

 Key: CASSANDRA-6198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6198
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Norman Maurer
Priority: Minor
 Attachments: 
 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt


 It would be nice to have some information in the TCP packet which network 
 teams can inspect to distinguish between streaming traffic and other organic 
 cassandra traffic. This is very useful for monitoring WAN traffic. 
 Here are some solutions:
 1) Use a different port for streaming. 
 2) Add some IP header. 





[jira] [Commented] (CASSANDRA-6198) Distinguish streaming traffic at network level

2014-11-21 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220968#comment-14220968
 ] 

Norman Maurer commented on CASSANDRA-6198:
--

Please review...

 Distinguish streaming traffic at network level
 --

 Key: CASSANDRA-6198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6198
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Norman Maurer
Priority: Minor
 Attachments: 
 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt


 It would be nice to have some information in the TCP packet which network 
 teams can inspect to distinguish between streaming traffic and other organic 
 cassandra traffic. This is very useful for monitoring WAN traffic. 
 Here are some solutions:
 1) Use a different port for streaming. 
 2) Add some IP header. 





[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2014-11-21 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-8086:
-
Attachment: 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt

Please review... This is against 2.1

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, with a 
 large number (~40,000) of clients hitting it. A client normally connects to 4 
 Cassandra instances. Some event (we think a schema change on the server side) 
 triggered the clients to establish connections to all Cassandra instances in 
 the local DC. This brought the servers to their knees. The client connections 
 failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack from clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to add 
 that knob.
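A max-connections knob of the kind requested is essentially a counting limit applied at accept time; the sketch below is a generic illustration (not Cassandra's implementation) using a semaphore that refuses, rather than queues, connections over the limit:

```python
import threading

class ConnectionLimiter:
    """Reject new connections once `limit` are already in flight."""
    def __init__(self, limit: int):
        self._slots = threading.BoundedSemaphore(limit)

    def try_accept(self) -> bool:
        # Non-blocking: a full server refuses immediately instead of
        # queueing, which is what protects it from a reconnect storm.
        return self._slots.acquire(blocking=False)

    def release(self):
        # Called when a connection closes, freeing its slot.
        self._slots.release()

limiter = ConnectionLimiter(2)
results = [limiter.try_accept() for _ in range(3)]  # third attempt is refused
```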





[jira] [Updated] (CASSANDRA-6198) Distinguish streaming traffic at network level

2014-11-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6198:
-
Reviewer: Brandon Williams

 Distinguish streaming traffic at network level
 --

 Key: CASSANDRA-6198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6198
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: Norman Maurer
Priority: Minor
 Attachments: 
 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt


 It would be nice to have some information in the TCP packet which network 
 teams can inspect to distinguish between streaming traffic and other organic 
 cassandra traffic. This is very useful for monitoring WAN traffic. 
 Here are some solutions:
 1) Use a different port for streaming. 
 2) Add some IP header. 





[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2014-11-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8086:
-
Reviewer: Sylvain Lebresne  (was: Brandon Williams)

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, with a 
 large number (~40,000) of clients hitting it. A client normally connects to 4 
 Cassandra instances. Some event (we think a schema change on the server side) 
 triggered the clients to establish connections to all Cassandra instances in 
 the local DC. This brought the servers to their knees. The client connections 
 failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack from clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to add 
 that knob.





[jira] [Comment Edited] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220914#comment-14220914
 ] 

Alan Boudreault edited comment on CASSANDRA-8329 at 11/21/14 2:43 PM:
--

Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnaild!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.



was (Author: aboudreault):
Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

[^test_no_patch_2.0.jpg]

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.
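The attached patch's title ("get new sstable directory for every new file during compaction") suggests the fix amounts to rotating the target data directory per output file, instead of pinning one volume for a whole compaction. A simplified sketch of that placement policy (directory names are hypothetical):

```python
from itertools import cycle

class DirectoryRotator:
    """Hand out JBOD data directories round-robin, one per new sstable,
    so a single large compaction spreads its output across volumes."""
    def __init__(self, data_dirs):
        self._dirs = cycle(data_dirs)

    def next_dir(self) -> str:
        return next(self._dirs)

rotator = DirectoryRotator(["/data1", "/data2", "/data3"])
# Four output files from one compaction land on three volumes, wrapping around.
placement = [rotator.next_dir() for _ in range(4)]
```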





[jira] [Comment Edited] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220914#comment-14220914
 ] 

Alan Boudreault edited comment on CASSANDRA-8329 at 11/21/14 2:43 PM:
--

Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnail!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.



was (Author: aboudreault):
Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnaild!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.





[jira] [Comment Edited] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220914#comment-14220914
 ] 

Alan Boudreault edited comment on CASSANDRA-8329 at 11/21/14 2:46 PM:
--

Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

[^test_no_patch_2.0.jpg]

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.



was (Author: aboudreault):
Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.





[jira] [Comment Edited] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-21 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220914#comment-14220914
 ] 

Alan Boudreault edited comment on CASSANDRA-8329 at 11/21/14 2:46 PM:
--

Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.



was (Author: aboudreault):
Devs, here are my test results.

h4. Test

* 12 disks of 2 GB each
* Cassandra uses default values for concurrent_compactors and 
compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnail!

h5. Results - With Patch

Success. There is no more peak during the compaction.

[^test_with_patch_2.0.jpg]

Let me know if I can do anything else.


 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 
 0001-get-new-sstable-directory-for-every-new-file-during-.patch, 
 test_no_patch_2.0.jpg, test_with_patch_2.0.jpg


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.





[jira] [Created] (CASSANDRA-8357) ArrayOutOfBounds in cassandra-stress with inverted exponential distribution

2014-11-21 Thread JIRA
Jens Preußner created CASSANDRA-8357:


 Summary: ArrayOutOfBounds in cassandra-stress with inverted 
exponential distribution
 Key: CASSANDRA-8357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8357
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: 6-node cassandra cluster (2.1.1) on debian.
Reporter: Jens Preußner
 Fix For: 2.1.1


When using the CQLstress example from GitHub 
(https://github.com/apache/cassandra/blob/trunk/tools/cqlstress-example.yaml) 
with an inverted exponential distribution in the insert-partitions field, the 
generated threads fail with:
{noformat}
Exception in thread "Thread-20" java.lang.ArrayIndexOutOfBoundsException: 20
	at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:307)
{noformat}

See the gist https://gist.github.com/jenzopr/9edde53122554729c852 for the 
typetest.yaml I used.
The call was:
cassandra-stress user profile=typetest.yaml ops\(insert=1\) -node $NODES
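The ArrayIndexOutOfBoundsException (index 20 in a run with 20 consumer threads) is the classic symptom of a distribution sample landing just past the end of a bounded array. The defensive fix for any array lookup driven by a sampled distribution is to clamp the sample to the valid range, sketched here in Python (illustrative only, not the stress tool's code):

```python
import random

def clamped_index(sample: float, n: int) -> int:
    """Map a (possibly out-of-range) distribution sample onto a
    valid index in [0, n)."""
    return min(max(int(sample), 0), n - 1)

random.seed(42)
# An exponential-style sample is unbounded above, so it can exceed
# the array length; clamping keeps it usable as an index.
idx = clamped_index(random.expovariate(0.1), 20)
```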





[jira] [Updated] (CASSANDRA-8357) ArrayOutOfBounds in cassandra-stress with inverted exponential distribution

2014-11-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Preußner updated CASSANDRA-8357:
-
Fix Version/s: (was: 2.1.1)

 ArrayOutOfBounds in cassandra-stress with inverted exponential distribution
 ---

 Key: CASSANDRA-8357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8357
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: 6-node cassandra cluster (2.1.1) on debian.
Reporter: Jens Preußner

 When using the CQLstress example from GitHub 
 (https://github.com/apache/cassandra/blob/trunk/tools/cqlstress-example.yaml) 
 with an inverted exponential distribution in the insert-partitions field, the 
 generated threads fail with:
 Exception in thread "Thread-20" java.lang.ArrayIndexOutOfBoundsException: 20 
 at 
 org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:307)
 See the gist https://gist.github.com/jenzopr/9edde53122554729c852 for the 
 typetest.yaml I used.
 The call was:
 cassandra-stress user profile=typetest.yaml ops\(insert=1\) -node $NODES





[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221023#comment-14221023
 ] 

Joshua McKenzie commented on CASSANDRA-8192:


Can you reproduce the error on the 64-bit machine with the 3G heap without 
running from Upsource? That is, if you run it on that box using the packaged 
bin\cassandra.bat within the cassandra folder and, for example, JDK 7u71, do 
you see the same errors?

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs 
 during startup.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can still see as well CASSANDRA-8069 and 
 CASSANDRA-6283.





[2/2] cassandra git commit: Remove hidden references to partitioner in Tokens

2014-11-21 Thread jmckenzie
Remove hidden references to partitioner in Tokens

Patch by blambov; reviewed by jmckenzie for CASSANDRA-8244


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07893d70
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07893d70
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07893d70

Branch: refs/heads/trunk
Commit: 07893d704598f7cbbc316c9a65a8c415e5404dfa
Parents: 68fdb2d
Author: Branimir Lambov branimir.lam...@datastax.com
Authored: Fri Nov 21 07:58:28 2014 -0800
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Fri Nov 21 07:58:28 2014 -0800

--
 .../org/apache/cassandra/client/RingCache.java  |   2 +-
 .../cql3/statements/SelectStatement.java|   2 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   6 +-
 .../org/apache/cassandra/db/DataTracker.java|   2 +-
 .../org/apache/cassandra/db/DecoratedKey.java   |  15 +-
 .../cassandra/db/HintedHandOffManager.java  |   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |   6 +-
 .../compaction/AbstractCompactionStrategy.java  |   2 +-
 .../db/compaction/CompactionManager.java|   2 +-
 .../compaction/LeveledCompactionStrategy.java   |   2 +-
 .../AbstractSimplePerColumnSecondaryIndex.java  |   7 -
 .../cassandra/db/index/SecondaryIndex.java  |  12 +-
 .../db/index/composites/CompositesSearcher.java |   2 +-
 .../cassandra/db/index/keys/KeysSearcher.java   |   2 +-
 .../apache/cassandra/dht/AbstractBounds.java|   6 +-
 .../dht/AbstractByteOrderedPartitioner.java | 206 ---
 .../cassandra/dht/AbstractPartitioner.java  |  31 ---
 .../apache/cassandra/dht/BigIntegerToken.java   |  35 ---
 src/java/org/apache/cassandra/dht/Bounds.java   |  24 +-
 .../cassandra/dht/ByteOrderedPartitioner.java   | 257 ++-
 .../org/apache/cassandra/dht/BytesToken.java|  79 --
 .../apache/cassandra/dht/ExcludingBounds.java   |  21 +-
 .../org/apache/cassandra/dht/IPartitioner.java  |   9 -
 .../cassandra/dht/IncludingExcludingBounds.java |  22 +-
 .../apache/cassandra/dht/LocalPartitioner.java  |  68 -
 .../org/apache/cassandra/dht/LocalToken.java|  46 
 .../org/apache/cassandra/dht/LongToken.java |  62 -
 .../cassandra/dht/Murmur3Partitioner.java   |  66 -
 .../dht/OrderPreservingPartitioner.java |  31 ++-
 .../apache/cassandra/dht/RandomPartitioner.java |  41 ++-
 src/java/org/apache/cassandra/dht/Range.java|  45 ++--
 .../org/apache/cassandra/dht/RingPosition.java  |   4 +-
 .../org/apache/cassandra/dht/StringToken.java   |  29 ---
 src/java/org/apache/cassandra/dht/Token.java|  35 ++-
 .../hadoop/AbstractColumnFamilyInputFormat.java |  11 +-
 .../cassandra/io/sstable/CQLSSTableWriter.java  |   2 +-
 .../io/sstable/format/big/BigTableScanner.java  |  14 +-
 .../apache/cassandra/service/StorageProxy.java  |   2 +-
 .../cassandra/thrift/CassandraServer.java   |   8 +-
 .../cassandra/thrift/ThriftValidation.java  |   4 +-
 .../org/apache/cassandra/tools/BulkLoader.java  |   2 +-
 .../org/apache/cassandra/utils/FBUtilities.java |  30 ++-
 .../org/apache/cassandra/utils/MerkleTree.java  |   2 +-
 test/unit/org/apache/cassandra/Util.java|   1 +
 .../org/apache/cassandra/db/CleanupTest.java|   2 +-
 .../org/apache/cassandra/db/RowCacheTest.java   |   2 +-
 .../apache/cassandra/db/SystemKeyspaceTest.java |   2 +-
 .../db/compaction/AntiCompactionTest.java   |   8 +-
 .../db/compaction/CompactionsTest.java  |   2 +-
 .../dht/ByteOrderedPartitionerTest.java |   2 +-
 .../apache/cassandra/dht/KeyCollisionTest.java  |  55 ++--
 .../cassandra/dht/Murmur3PartitionerTest.java   |   2 +-
 .../dht/OrderPreservingPartitionerTest.java |   2 +-
 .../cassandra/dht/PartitionerTestCase.java  |   5 +
 .../cassandra/dht/RandomPartitionerTest.java|   2 +-
 .../org/apache/cassandra/dht/RangeTest.java | 168 ++--
 .../cassandra/io/sstable/IndexSummaryTest.java  |   2 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |   4 +-
 .../io/sstable/SSTableScannerTest.java  |   2 +-
 .../locator/NetworkTopologyStrategyTest.java|   2 +-
 .../locator/OldNetworkTopologyStrategyTest.java |   2 +-
 .../ReplicationStrategyEndpointCacheTest.java   |   2 +-
 .../cassandra/locator/SimpleStrategyTest.java   |   4 +-
 .../cassandra/repair/LocalSyncTaskTest.java |   2 +-
 .../cassandra/repair/RepairSessionTest.java |   4 +-
 .../repair/messages/RepairOptionTest.java   |   2 +-
 .../service/LeaveAndBootstrapTest.java  |  15 +-
 .../org/apache/cassandra/service/MoveTest.java  |   7 +-
 .../apache/cassandra/service/RemoveTest.java|   2 +-
 .../cassandra/service/SerializationsTest.java   |   2 +-
 .../service/StorageServiceServerTest.java   |   8 +-
 .../apache/cassandra/utils/MerkleTreeTest.java  |   3 +-
 72 files changed, 746 

[1/2] cassandra git commit: Remove hidden references to partitioner in Tokens

2014-11-21 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 68fdb2db2 -> 07893d704


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07893d70/src/java/org/apache/cassandra/hadoop/AbstractColumnFamilyInputFormat.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/AbstractColumnFamilyInputFormat.java 
b/src/java/org/apache/cassandra/hadoop/AbstractColumnFamilyInputFormat.java
index 8368519..f4ad40f 100644
--- a/src/java/org/apache/cassandra/hadoop/AbstractColumnFamilyInputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/AbstractColumnFamilyInputFormat.java
@@ -150,14 +150,12 @@ public abstract class AbstractColumnFamilyInputFormat<K, Y> extends InputFormat<K, Y>
                 if (jobKeyRange.end_token != null)
                     throw new IllegalArgumentException("only start_key supported");
                 jobRange = new Range<>(partitioner.getToken(jobKeyRange.start_key),
-                                       partitioner.getToken(jobKeyRange.end_key),
-                                       partitioner);
+                                       partitioner.getToken(jobKeyRange.end_key));
             }
             else if (jobKeyRange.start_token != null)
             {
                 jobRange = new Range<>(partitioner.getTokenFactory().fromString(jobKeyRange.start_token),
-                                       partitioner.getTokenFactory().fromString(jobKeyRange.end_token),
-                                       partitioner);
+                                       partitioner.getTokenFactory().fromString(jobKeyRange.end_token));
             }
             else
             {
@@ -175,8 +173,7 @@ public abstract class AbstractColumnFamilyInputFormat<K, Y> extends InputFormat<K, Y>
             else
             {
                 Range<Token> dhtRange = new Range<Token>(partitioner.getTokenFactory().fromString(range.start_token),
-                                                         partitioner.getTokenFactory().fromString(range.end_token),
-                                                         partitioner);
+                                                         partitioner.getTokenFactory().fromString(range.end_token));

                 if (dhtRange.intersects(jobRange))
                 {
@@ -252,7 +249,7 @@ public abstract class AbstractColumnFamilyInputFormat<K, Y> extends InputFormat<K, Y>
                 {
                     Token left = factory.fromString(subSplit.getStart_token());
                     Token right = factory.fromString(subSplit.getEnd_token());
-                    Range<Token> range = new Range<Token>(left, right, partitioner);
+                    Range<Token> range = new Range<Token>(left, right);
                     List<Range<Token>> ranges = range.isWrapAround() ? range.unwrap() : ImmutableList.of(range);
                     for (Range<Token> subrange : ranges)
                     {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/07893d70/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index d4b4eab..43cd2c0 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -272,7 +272,7 @@ public class CQLSSTableWriter implements Closeable
 public static class Builder
 {
 private File directory;
-private IPartitioner partitioner = new Murmur3Partitioner();
+private IPartitioner partitioner = Murmur3Partitioner.instance;
 
 protected SSTableFormat.Type formatType = null;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/07893d70/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java 
b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
index db55353..7e3c877 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
@@ -76,16 +76,16 @@ public class BigTableScanner implements ICompactionScanner
         this.rowIndexEntrySerializer = sstable.descriptor.version.getSSTableFormat().getIndexSerializer(sstable.metadata);

         List<AbstractBounds<RowPosition>> boundsList = new ArrayList<>(2);
-        if (dataRange.isWrapAround() && !dataRange.stopKey().isMinimum(sstable.partitioner))
+        if (dataRange.isWrapAround() && !dataRange.stopKey().isMinimum())
         {
             // split the wrapping range into two parts: 1) the part that starts at the beginning of the sstable, and
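The comment in the hunk above refers to splitting a wrapping data range into two non-wrapping parts. A stand-alone numeric sketch of that unwrap step, with a ring of longs standing in for tokens (Range.unwrap() in the codebase is the real implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class WrapAroundSplit
{
    // Model a range (start, stop] on a ring of longs. If stop < start the
    // range wraps past the ring's end; unwrap it into two non-wrapping
    // pieces, mirroring the two-element bounds list built by the scanner.
    public static List<long[]> unwrap(long start, long stop)
    {
        List<long[]> parts = new ArrayList<>();
        if (stop < start)
        {
            parts.add(new long[]{ Long.MIN_VALUE, stop });  // from the ring's start
            parts.add(new long[]{ start, Long.MAX_VALUE }); // to the ring's end
        }
        else
        {
            parts.add(new long[]{ start, stop });
        }
        return parts;
    }

    public static void main(String[] args)
    {
        for (long[] p : unwrap(100, -100))
            System.out.println(p[0] + " .. " + p[1]);
    }
}
```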
 

[jira] [Created] (CASSANDRA-8358) Bundled tools shouldn't be using Thrift API

2014-11-21 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-8358:


 Summary: Bundled tools shouldn't be using Thrift API
 Key: CASSANDRA-8358
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8358
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
 Fix For: 3.0


In 2.1, we switched cqlsh to the python-driver.
In 3.0, we got rid of cassandra-cli.

Yet there is still code that's using legacy Thrift API. We want to convert it 
all to use the java-driver instead.

1. BulkLoader uses Thrift to query the schema tables. It should be using 
java-driver metadata APIs directly instead.
2. o.a.c.hadoop.cql3.CqlRecordWriter is using Thrift
3. o.a.c.hadoop.ColumnFamilyRecordReader is using Thrift
4. o.a.c.hadoop.AbstractCassandraStorage is using Thrift
5. o.a.c.hadoop.CqlStorage is using Thrift

Some of the things listed above use Thrift to get the list of partition key 
columns or clustering columns. Those should be converted to use the Metadata 
API of the java-driver.

Somewhat related to that, we also have badly ported code from Thrift in 
o.a.c.hadoop.cql3.CqlRecordReader (see fetchKeys()) that manually fetches 
columns from schema tables instead of properly using the driver's Metadata API.

We need all of it fixed. One exception, for now, is 
o.a.c.hadoop.AbstractColumnFamilyInputFormat - it's using Thrift for its 
describe_splits_ex() call that cannot be currently replaced by any java-driver 
call (?).

Once this is done, we can stop starting Thrift RPC port by default in 
cassandra.yaml.
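A minimal sketch of the direction proposed above, using local stand-in interfaces rather than the real com.datastax.driver.core types. Only the method names getPartitionKey(), getClusteringColumns(), and getName() mirror the driver's Metadata API; the stub types and demo data are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class FetchKeysSketch
{
    // Local stand-ins for the java-driver's metadata types.
    public interface ColumnMetadata { String getName(); }

    public interface TableMetadata
    {
        List<ColumnMetadata> getPartitionKey();
        List<ColumnMetadata> getClusteringColumns();
    }

    // What fetchKeys() reduces to once it stops querying the schema tables
    // by hand: ask the driver's metadata for the key columns directly.
    public static List<String> partitionKeyNames(TableMetadata table)
    {
        List<String> names = new ArrayList<>();
        for (ColumnMetadata c : table.getPartitionKey())
            names.add(c.getName());
        return names;
    }

    // A fabricated two-column partition key, standing in for what
    // cluster.getMetadata().getKeyspace(ks).getTable(cf) would return.
    public static List<String> demo()
    {
        TableMetadata table = new TableMetadata()
        {
            public List<ColumnMetadata> getPartitionKey()
            {
                return List.of((ColumnMetadata) () -> "id", (ColumnMetadata) () -> "bucket");
            }
            public List<ColumnMetadata> getClusteringColumns()
            {
                return List.of((ColumnMetadata) () -> "ts");
            }
        };
        return partitionKeyNames(table);
    }

    public static void main(String[] args)
    {
        System.out.println(demo());
    }
}
```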



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8358) Bundled tools shouldn't be using Thrift API

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-8358:
--

Assignee: Philip Thompson

 Bundled tools shouldn't be using Thrift API
 ---

 Key: CASSANDRA-8358
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8358
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Philip Thompson
 Fix For: 3.0







[jira] [Updated] (CASSANDRA-8358) Bundled tools shouldn't be using Thrift API

2014-11-21 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-8358:
---
Description: 
In 2.1, we switched cqlsh to the python-driver.
In 3.0, we got rid of cassandra-cli.

Yet there is still code that's using legacy Thrift API. We want to convert it 
all to use the java-driver instead.

1. BulkLoader uses Thrift to query the schema tables. It should be using 
java-driver metadata APIs directly instead.
2. o.a.c.hadoop.cql3.CqlRecordWriter is using Thrift
3. o.a.c.hadoop.ColumnFamilyRecordReader is using Thrift
4. o.a.c.hadoop.AbstractCassandraStorage is using Thrift
5. o.a.c.hadoop.pig.CqlStorage is using Thrift

Some of the things listed above use Thrift to get the list of partition key 
columns or clustering columns. Those should be converted to use the Metadata 
API of the java-driver.

Somewhat related to that, we also have badly ported code from Thrift in 
o.a.c.hadoop.cql3.CqlRecordReader (see fetchKeys()) that manually fetches 
columns from schema tables instead of properly using the driver's Metadata API.

We need all of it fixed. One exception, for now, is 
o.a.c.hadoop.AbstractColumnFamilyInputFormat - it's using Thrift for its 
describe_splits_ex() call that cannot be currently replaced by any java-driver 
call (?).

Once this is done, we can stop starting Thrift RPC port by default in 
cassandra.yaml.

  was:
In 2.1, we switched cqlsh to the python-driver.
In 3.0, we got rid of cassandra-cli.

Yet there is still code that's using legacy Thrift API. We want to convert it 
all to use the java-driver instead.

1. BulkLoader uses Thrift to query the schema tables. It should be using 
java-driver metadata APIs directly instead.
2. o.a.c.hadoop.cql3.CqlRecordWriter is using Thrift
3. o.a.c.hadoop.ColumnFamilyRecordReader is using Thrift
4. o.a.c.hadoop.AbstractCassandraStorage is using Thrift
5. o.a.c.hadoop.CqlStorage is using Thrift

Some of the things listed above use Thrift to get the list of partition key 
columns or clustering columns. Those should be converted to use the Metadata 
API of the java-driver.

Somewhat related to that, we also have badly ported code from Thrift in 
o.a.c.hadoop.cql3.CqlRecordReader (see fetchKeys()) that manually fetches 
columns from schema tables instead of properly using the driver's Metadata API.

We need all of it fixed. One exception, for now, is 
o.a.c.hadoop.AbstractColumnFamilyInputFormat - it's using Thrift for its 
describe_splits_ex() call that cannot be currently replaced by any java-driver 
call (?).

Once this is done, we can stop starting Thrift RPC port by default in 
cassandra.yaml.


 Bundled tools shouldn't be using Thrift API
 ---

 Key: CASSANDRA-8358
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8358
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Philip Thompson
 Fix For: 3.0







[jira] [Commented] (CASSANDRA-8358) Bundled tools shouldn't be using Thrift API

2014-11-21 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221110#comment-14221110
 ] 

Jeremiah Jordan commented on CASSANDRA-8358:


o.a.c.h.ColumnFamily* stuff is replaced by o.a.c.h.cql3.Cql*.  We should either 
just drop those, or leave them alone so that people who still need them can 
turn Thrift on.

o.a.c.h.pig.CqlStorage and o.a.c.h.pig.CassandraStorage are replaced by 
o.a.c.h.pig.CqlNativeStorage.  Same thing, either drop or leave alone.

Maybe just mark all that stuff deprecated and leave it alone for now.

I think the main task here is to make sure o.a.c.h.cql3.Cql* and 
o.a.c.h.pig.CqlNativeStorage all use the metadata APIs and don't make any 
Thrift calls.

We need CASSANDRA-7688 or similar to be able to replace the describe_splits_ex 
call.  So we have to leave that in for now, but should be able to clean 
everything else up.

 Bundled tools shouldn't be using Thrift API
 ---

 Key: CASSANDRA-8358
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8358
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Philip Thompson
 Fix For: 3.0







[jira] [Created] (CASSANDRA-8359) Make DTCS consider removing SSTables much more frequently

2014-11-21 Thread JIRA
Björn Hegerfors created CASSANDRA-8359:
--

 Summary: Make DTCS consider removing SSTables much more frequently
 Key: CASSANDRA-8359
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8359
 Project: Cassandra
  Issue Type: Improvement
Reporter: Björn Hegerfors
Priority: Minor


When I run DTCS on a table where every value has a TTL (always the same TTL), 
SSTables are completely expired, but still stay on disk for much longer than 
they need to. I've applied CASSANDRA-8243, but it doesn't make an apparent 
difference (probably because the subject SSTables are purged via compaction 
anyway, if not by directly dropping them).

Disk size graphs show clearly that tombstones are only removed when the oldest 
SSTable participates in compaction. In the long run, size on disk continually 
grows bigger. This should not have to happen. It should easily be able to stay 
constant, thanks to DTCS separating the expired data from the rest.

I think checks for whether SSTables can be dropped should happen much more 
frequently. This is something that probably only needs to be tweaked for DTCS, 
but perhaps there's a more general place to put this. Anyway, my thinking is 
that DTCS should, on every call to getNextBackgroundTask, check which SSTables 
can be dropped. It would be something like a call to 
CompactionController.getFullyExpiredSSTables with all non-compacting SSTables 
sent in as compacting and all other SSTables sent in as overlapping. The 
returned SSTables, if any, are then added to whichever set of SSTables that 
DTCS decides to compact. Then before the compaction happens, Cassandra is going 
to make another call to CompactionController.getFullyExpiredSSTables, where it 
will see that it can just drop them.

This approach has a bit of redundancy in that it needs to call 
CompactionController.getFullyExpiredSSTables twice. To avoid that, the code 
path for deciding SSTables to drop would have to be changed.
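The check described above can be sketched stand-alone as follows. This is a simplified model of what CompactionController.getFullyExpiredSSTables decides; the field names are borrowed from SSTableReader, while the SSTable stand-in class and the reduced overlap rule are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class ExpiredSSTables
{
    // Minimal stand-in for the SSTableReader fields the real check reads.
    static class SSTable
    {
        final long minTimestamp, maxTimestamp; // oldest/newest cell timestamps
        final int maxLocalDeletionTime;        // latest tombstone/TTL expiry, seconds

        SSTable(long minTs, long maxTs, int maxDeletion)
        {
            minTimestamp = minTs;
            maxTimestamp = maxTs;
            maxLocalDeletionTime = maxDeletion;
        }
    }

    // Simplified model of CompactionController.getFullyExpiredSSTables():
    // a candidate can be dropped outright when everything in it has expired
    // (maxLocalDeletionTime < gcBefore) and it holds nothing newer than the
    // oldest data in any overlapping sstable, so dropping it cannot
    // resurrect shadowed data.
    public static List<SSTable> fullyExpired(List<SSTable> candidates,
                                             List<SSTable> overlapping,
                                             int gcBefore)
    {
        long oldestElsewhere = Long.MAX_VALUE;
        for (SSTable s : overlapping)
            oldestElsewhere = Math.min(oldestElsewhere, s.minTimestamp);

        List<SSTable> droppable = new ArrayList<>();
        for (SSTable s : candidates)
            if (s.maxLocalDeletionTime < gcBefore && s.maxTimestamp < oldestElsewhere)
                droppable.add(s);
        return droppable;
    }

    // Convenience demo: one fully TTL'd-out table next to still-live data.
    public static int demoDroppableCount()
    {
        SSTable expired = new SSTable(0, 100, 500);
        SSTable live    = new SSTable(900, 1000, Integer.MAX_VALUE);
        return fullyExpired(List.of(expired), List.of(live), 1000).size();
    }

    public static void main(String[] args)
    {
        System.out.println(demoDroppableCount());
    }
}
```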

(Side tracking a little here: I'm also thinking that tombstone compactions 
could be considered more often in DTCS. Maybe even some kind of multi-SSTable 
tombstone compaction involving the oldest couple of SSTables...)





[jira] [Assigned] (CASSANDRA-7767) Expose sizes of off-heap data structures via JMX and `nodetool info`

2014-11-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-7767:
-

Assignee: Benjamin Lerer

 Expose sizes of off-heap data structures via JMX and `nodetool info`
 

 Key: CASSANDRA-7767
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7767
 Project: Cassandra
  Issue Type: New Feature
Reporter: J.B. Langston
Assignee: Benjamin Lerer

 It would be very helpful for troubleshooting memory consumption to know the 
 individual sizes of off-heap data structures such as bloom filters, index 
 summaries, compression metadata, etc. Can we expose this over JMX? Also, 
 since `nodetool info` already shows size of heap, key cache, etc. it seems 
 like a natural place to show this.
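A minimal sketch of what exposing such sizes over JMX could look like, using only the JDK's javax.management API. The MBean interface, the attribute names, the fixed sizes, and the ObjectName "org.apache.cassandra.metrics:type=OffHeapMemory" are all invented for illustration; the ticket does not specify any of them:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class OffHeapJmx
{
    // Hypothetical management interface; attribute names are invented.
    public interface OffHeapMetricsMBean
    {
        long getBloomFilterOffHeapSize();
        long getIndexSummaryOffHeapSize();
        long getCompressionMetadataOffHeapSize();
    }

    // Fixed sizes stand in for the real per-structure accounting.
    public static class OffHeapMetrics implements OffHeapMetricsMBean
    {
        public long getBloomFilterOffHeapSize()         { return 1024; }
        public long getIndexSummaryOffHeapSize()        { return 2048; }
        public long getCompressionMetadataOffHeapSize() { return 4096; }
    }

    // Register on the platform MBeanServer and read one attribute back, the
    // way a JMX client behind `nodetool info` would.
    public static long demo()
    {
        try
        {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("org.apache.cassandra.metrics:type=OffHeapMemory"); // invented name
            server.registerMBean(new StandardMBean(new OffHeapMetrics(), OffHeapMetricsMBean.class), name);
            long size = (Long) server.getAttribute(name, "BloomFilterOffHeapSize");
            server.unregisterMBean(name);
            return size;
        }
        catch (Exception e)
        {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(demo());
    }
}
```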





[jira] [Updated] (CASSANDRA-7767) Expose sizes of off-heap data structures via JMX and `nodetool info`

2014-11-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-7767:
--
Fix Version/s: 2.0.12

 Expose sizes of off-heap data structures via JMX and `nodetool info`
 

 Key: CASSANDRA-7767
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7767
 Project: Cassandra
  Issue Type: New Feature
Reporter: J.B. Langston
Assignee: Benjamin Lerer
 Fix For: 2.0.12







[jira] [Created] (CASSANDRA-8360) In DTCS, always compact SSTables in the same time window, even if they are fewer than min_threshold

2014-11-21 Thread JIRA
Björn Hegerfors created CASSANDRA-8360:
--

 Summary: In DTCS, always compact SSTables in the same time window, 
even if they are fewer than min_threshold
 Key: CASSANDRA-8360
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8360
 Project: Cassandra
  Issue Type: Improvement
Reporter: Björn Hegerfors
Priority: Minor


DTCS uses min_threshold to decide how many time windows of the same size need 
to accumulate before merging into a larger window. The age of an SSTable 
is determined as its min timestamp, and it always falls into exactly one of the 
time windows. If multiple SSTables fall into the same window, DTCS considers 
compacting them, but if they are fewer than min_threshold, it decides not to do 
it.

When do more than 1 but fewer than min_threshold SSTables end up in the same 
time window (except for the current window), you might ask? In the current 
state, DTCS can spill some extra SSTables into bigger windows when the previous 
window wasn't fully compacted, which happens all the time when the latest 
window stops being the current one. Also, repairs and hints can put new 
SSTables in old windows.

I think, and [~jjordan] agreed in a comment on CASSANDRA-6602, that DTCS should 
ignore min_threshold and compact tables in the same windows regardless of how 
few they are. I guess max_threshold should still be respected.

[~jjordan] suggested that this should apply to all windows but the current 
window, where all the new SSTables end up. That could make sense. I'm not clear 
on whether compacting many SSTables at once is more cost efficient or not, when 
it comes to the very newest and smallest SSTables. Maybe compacting as soon as 
2 SSTables are seen is fine if the initial window size is small enough? I guess 
the opposite could be the case too; that the very newest SSTables should be 
compacted very many at a time?
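The proposed rule can be sketched stand-alone like this; windowStart/chooseBuckets and the flat list-of-timestamps representation are illustrative simplifications, not DTCS's actual bucketing machinery:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WindowBuckets
{
    // Floor a timestamp to the start of its fixed-size time window.
    public static long windowStart(long timestamp, long windowSize)
    {
        return (timestamp / windowSize) * windowSize;
    }

    // Proposed rule: outside the current window, two or more sstables in the
    // same window are always worth compacting, ignoring min_threshold;
    // max_threshold still caps the bucket size. Inputs are the sstables'
    // min timestamps, which is how DTCS dates a file.
    public static List<List<Long>> chooseBuckets(List<Long> minTimestamps, long windowSize,
                                                 long now, int maxThreshold)
    {
        Map<Long, List<Long>> byWindow = new TreeMap<>();
        for (long ts : minTimestamps)
            byWindow.computeIfAbsent(windowStart(ts, windowSize), w -> new ArrayList<>()).add(ts);

        long currentWindow = windowStart(now, windowSize);
        List<List<Long>> buckets = new ArrayList<>();
        for (Map.Entry<Long, List<Long>> e : byWindow.entrySet())
        {
            if (e.getKey() == currentWindow)
                continue; // the current window keeps its normal min_threshold behavior
            List<Long> group = e.getValue();
            if (group.size() >= 2)
                buckets.add(group.subList(0, Math.min(group.size(), maxThreshold)));
        }
        return buckets;
    }

    public static void main(String[] args)
    {
        // Windows of 10: [5, 7] share window 0 and get bucketed even though
        // min_threshold would normally be 4; [25] sits alone in the current window.
        System.out.println(chooseBuckets(List.of(5L, 7L, 25L), 10, 27, 32));
    }
}
```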





[jira] [Updated] (CASSANDRA-8150) Revaluate Default JVM tuning parameters

2014-11-21 Thread Matt Stump (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Stump updated CASSANDRA-8150:
--
Summary: Revaluate Default JVM tuning parameters  (was: Simplify and 
enlarge new heap calculation)

 Revaluate Default JVM tuning parameters
 ---

 Key: CASSANDRA-8150
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8150
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
Reporter: Matt Stump
Assignee: Brandon Williams
 Attachments: upload.png


 It's been found that the old Twitter recommendation of 100m per core up to 
 800m is harmful and should no longer be used.
 Instead the formula used should be 1/3 or 1/4 max heap with a max of 2G. 1/3 
 or 1/4 is debatable and I'm open to suggestions. If I were to hazard a guess 
 1/3 is probably better for releases greater than 2.1.
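The proposed formula, as a sketch: the 2G cap and the 1/3-vs-1/4 divisor are taken from the comment above, and nothing here reflects what cassandra-env.sh actually ships.

```java
public class HeapCalc
{
    static final long GB = 1L << 30;

    // Proposed replacement: size = min(maxHeap / divisor, 2G), where the
    // divisor (3 or 4) is still being debated in the ticket.
    public static long suggestedSize(long maxHeapBytes, int divisor)
    {
        return Math.min(maxHeapBytes / divisor, 2 * GB);
    }

    public static void main(String[] args)
    {
        System.out.println(suggestedSize(8 * GB, 4) / GB); // an 8G heap hits the 2G cap
    }
}
```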





[jira] [Assigned] (CASSANDRA-8061) tmplink files are not removed

2014-11-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-8061:
--

Assignee: Joshua McKenzie  (was: Marcus Eriksson)

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Joshua McKenzie
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8248-thread_dump.txt


 After installing 2.1.0, I'm experiencing a bunch of tmplink files that are 
 filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803 
 and that is very similar, and I confirm it happens both on 2.1.0 and on the 
 latest commit of the cassandra-2.1 branch 
 (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685).
 Even starting with a clean keyspace, after a few hours I get:
 {noformat}
 $ sudo find /raid0 | grep tmplink | xargs du -hs
 2.7G  
 /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
 13M   
 /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
 1.8G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
 12M   
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
 5.2M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
 822M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
 7.3M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
 1.2G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
 6.7M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
 1.1G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
 11M   
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
 1.7G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
 812K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
 122M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-208-Data.db
 744K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-739-Index.db
 660K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Index.db
 796K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Index.db
 137M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Data.db
 161M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Data.db
 139M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Data.db
 940K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Index.db
 936K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Index.db
 161M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Data.db
 672K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-197-Index.db
 113M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Data.db
 116M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-197-Data.db
 712K  
 

[jira] [Created] (CASSANDRA-8361) Make DTCS split SSTables to perfectly fit time windows

2014-11-21 Thread JIRA
Björn Hegerfors created CASSANDRA-8361:
--

 Summary: Make DTCS split SSTables to perfectly fit time windows
 Key: CASSANDRA-8361
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8361
 Project: Cassandra
  Issue Type: Improvement
Reporter: Björn Hegerfors
Priority: Minor


The time windows that DTCS uses are what the strategy tries to align SSTables 
to, in order to get the right structure, for best performance. I added the 
ticket CASSANDRA-8360, taking SSTables one step closer to aligning with these 
windows in a 1:1 manner.

The idea in this ticket is to perfectly align SSTables with the DTCS time 
windows, by splitting SSTables that cross window borders. This brings benefits 
mostly in consistency and predictability: for every value old enough to have 
stabilized, it becomes very well defined which SSTable stores it.

Read queries can be aligned with windows in order to guarantee a single disk 
seek (although then the client needs to know the right window placements). 
Basically, SSTables can be made to align perfectly on day borders, for example. 
Right now, there would be an SSTable that almost represents a day, but not 
perfectly. So some data is still in another SSTable. 

It could also be a useful property for tombstone expiration and repairs.

Practically all splits would happen only in the latest time windows with the 
newest and smallest SSTables. After those are split, DTCS would never compact 
SSTables across window borders. I have a hard time seeing when this could cause 
an expensive operation except for when switching from another compaction 
strategy (or even from current DTCS), and after a major compaction. In fact 
major compaction for DTCS should put data perfectly in windows rather than 
everything in one SSTable.
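The split placement described above can be sketched as computing the window borders that fall inside an sstable's timestamp span. This is a stand-alone illustration; DTCS has no such method today:

```java
import java.util.ArrayList;
import java.util.List;

public class WindowSplitPoints
{
    // Window borders strictly inside (minTimestamp, maxTimestamp]: the cut
    // points at which an sstable would be split so every output file fits
    // entirely inside one DTCS time window.
    public static List<Long> splitPoints(long minTimestamp, long maxTimestamp, long windowSize)
    {
        List<Long> cuts = new ArrayList<>();
        long border = (minTimestamp / windowSize + 1) * windowSize;
        for (; border <= maxTimestamp; border += windowSize)
            cuts.add(border);
        return cuts;
    }

    public static void main(String[] args)
    {
        // A file spanning timestamps 5..25 with windows of size 10 would be
        // cut at 10 and 20, yielding three window-aligned files.
        System.out.println(splitPoints(5, 25, 10));
    }
}
```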





[jira] [Commented] (CASSANDRA-8349) ALTER KEYSPACE causes tables not to be found

2014-11-21 Thread Joseph Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221171#comment-14221171
 ] 

Joseph Chu commented on CASSANDRA-8349:
---

Yes, just restarting cqlsh works as well. I've also tried DevCenter, which does 
not show this issue. It seems to be specific to cqlsh in 2.1.2. I'll edit the 
description to reflect the changes.

 ALTER KEYSPACE causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor

 Running Cassandra 2.1.2 on a single node.
 Reproduction steps in cqlsh:
 CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE a.a (a INT PRIMARY KEY);
 INSERT INTO a.a (a) VALUES (1);
 SELECT * FROM a.a;
 ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 2};
 SELECT * FROM a.a;
 DESCRIBE KEYSPACE a
 Errors:
 Column family 'a' not found
 Workaround(?):
 Restart the instance





[jira] [Updated] (CASSANDRA-8349) Using cqlsh to alter keyspaces causes tables not to be found

2014-11-21 Thread Joseph Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Chu updated CASSANDRA-8349:
--
Description: 
Running cqlsh using Cassandra 2.1.2 on a single node.

Reproduction steps in cqlsh:

CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
CREATE TABLE a.a (a INT PRIMARY KEY);
INSERT INTO a.a (a) VALUES (1);
SELECT * FROM a.a;
ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};
SELECT * FROM a.a;
DESCRIBE KEYSPACE a

Errors:
Column family 'a' not found

Workaround:
Restart cqlsh

  was:
Running Cassandra 2.1.2 on a single node.

Reproduction steps in cqlsh:

CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
CREATE TABLE a.a (a INT PRIMARY KEY);
INSERT INTO a.a (a) VALUES (1);
SELECT * FROM a.a;
ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};
SELECT * FROM a.a;
DESCRIBE KEYSPACE a

Errors:
Column family 'a' not found

Workaround(?):
Restart the instance

Summary: Using cqlsh to alter keyspaces causes tables not to be found  
(was: ALTER KEYSPACE causes tables not to be found)

 Using cqlsh to alter keyspaces causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor

 Running cqlsh using Cassandra 2.1.2 on a single node.
 Reproduction steps in cqlsh:
 CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE a.a (a INT PRIMARY KEY);
 INSERT INTO a.a (a) VALUES (1);
 SELECT * FROM a.a;
 ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 2};
 SELECT * FROM a.a;
 DESCRIBE KEYSPACE a
 Errors:
 Column family 'a' not found
 Workaround:
 Restart cqlsh





[jira] [Updated] (CASSANDRA-8349) Using cqlsh to alter keyspaces causes tables not to be found

2014-11-21 Thread Joseph Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Chu updated CASSANDRA-8349:
--
Description: 
Running cqlsh using Cassandra 2.1.2 on a single node.

Reproduction steps in cqlsh:

CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
CREATE TABLE a.a (a INT PRIMARY KEY);

INSERT INTO a.a (a) VALUES (1);
SELECT * FROM a.a;

ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};

SELECT * FROM a.a;
DESCRIBE KEYSPACE a

Errors:
Column family 'a' not found

Workaround:
Restart cqlsh

  was:
Running cqlsh using Cassandra 2.1.2 on a single node.

Reproduction steps in cqlsh:

CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
CREATE TABLE a.a (a INT PRIMARY KEY);
INSERT INTO a.a (a) VALUES (1);
SELECT * FROM a.a;
ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 2};
SELECT * FROM a.a;
DESCRIBE KEYSPACE a

Errors:
Column family 'a' not found

Workaround:
Restart cqlsh


 Using cqlsh to alter keyspaces causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor

 Running cqlsh using Cassandra 2.1.2 on a single node.
 Reproduction steps in cqlsh:
 CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE a.a (a INT PRIMARY KEY);
 INSERT INTO a.a (a) VALUES (1);
 SELECT * FROM a.a;
 ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 2};
 SELECT * FROM a.a;
 DESCRIBE KEYSPACE a
 Errors:
 Column family 'a' not found
 Workaround:
 Restart cqlsh





[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: 8280-2.0-v3.txt

Ah yes, I'd overlooked the length + EOC markers. v3 for 2.0 is attached.

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0-v2.txt, 8280-2.0-v3.txt, 8280-2.0.txt, 
 8280-2.1-v2.txt, 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just fine. 
 Cassandra will then crash on the next commit and never recover. I rated it 
 as Critical because it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
 
     def __init__(self):
         ips = ["127.0.0.1"]
         ap = PlainTextAuthProvider(username="cs", password="cs")
         reconnection_policy = ConstantReconnectionPolicy(20.0,
             max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
             reconnection_policy=reconnection_policy)
         self.session = cluster.connect("cs")
 
     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)
 
     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = "X" * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?)"
         args = ("foo", k1, long_string)
 
         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
             "{'class': 'SimpleStrategy', 'replication_factor': 1}")
 
 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
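The "AssertionError: 65536" in the log below comes from a 2-byte length prefix: writeWithShortLength encodes a value's length as an unsigned 16-bit short, so anything longer than 65535 bytes cannot be represented and trips the assertion. A minimal, hypothetical sketch of that framing (class and method here are made up, not Cassandra's actual code):

```java
import java.nio.ByteBuffer;

// Hypothetical illustration of the 64K limit behind "AssertionError: 65536":
// a length stored as an unsigned 16-bit short can be at most 65535.
class ShortLengthSketch {
    static final int MAX_UNSIGNED_SHORT = 0xFFFF; // 65535

    static ByteBuffer writeWithShortLength(byte[] value) {
        if (value.length > MAX_UNSIGNED_SHORT)
            throw new AssertionError(value.length); // mirrors the reported crash
        ByteBuffer out = ByteBuffer.allocate(2 + value.length);
        out.putShort((short) value.length); // low 16 bits, read back as unsigned
        out.put(value);
        out.flip();
        return out;
    }
}
```

A 65536-byte sentinel value therefore passes the CQL layer (the column itself allows it) but cannot be framed when the index memtable is flushed, which is why the crash appears on the next flush rather than at insert time.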
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1053)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 

[jira] [Updated] (CASSANDRA-7538) Truncate of a CF should also delete Paxos CF

2014-11-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-7538:
---
Attachment: (was: 7538.txt)

 Truncate of a CF should also delete Paxos CF
 

 Key: CASSANDRA-7538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7538
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Assignee: Sam Tunnicliffe
Priority: Minor

 We don't delete data from Paxos CF during truncate. This will cause data to 
 come back in the next CAS round for incomplete commits. 
 Also I am not sure whether we already do this but should we also not truncate 
 hints for that CF. 





[jira] [Updated] (CASSANDRA-7538) Truncate of a CF should also delete Paxos CF

2014-11-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-7538:
---
Attachment: 7538.txt

Updated patch to ensure we update persisted paxos state whether we actually 
apply or drop the commit 

 Truncate of a CF should also delete Paxos CF
 

 Key: CASSANDRA-7538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7538
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Assignee: Sam Tunnicliffe
Priority: Minor
 Attachments: 7538.txt


 We don't delete data from Paxos CF during truncate. This will cause data to 
 come back in the next CAS round for incomplete commits. 
 Also I am not sure whether we already do this but should we also not truncate 
 hints for that CF. 





[jira] [Created] (CASSANDRA-8362) Reduce memory usage of RefCountedMemory

2014-11-21 Thread Vijay (JIRA)
Vijay created CASSANDRA-8362:


 Summary: Reduce memory usage of RefCountedMemory
 Key: CASSANDRA-8362
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8362
 Project: Cassandra
  Issue Type: Bug
Reporter: Vijay
Assignee: Vijay
Priority: Minor


We can store the reference count in the first 4 bytes of the Unsafe memory and 
use CAS[1] for reference counting of the memory.

This change will remove the object overhead plus an additional 4 bytes from the 
java heap. Calling methods can hold the reference as a long.

[1] 
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/concurrent/atomic/AtomicInteger.java#AtomicInteger.incrementAndGet%28%29
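The mechanism being proposed can be sketched on-heap: the CAS loop below is the retry shape AtomicInteger.incrementAndGet uses internally, and the ticket's idea is to run the same loop directly against the first 4 bytes of the off-heap allocation via Unsafe, saving the separate counter object. A hypothetical sketch, not the actual patch (class and method names are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of CAS-based reference counting. The real proposal
// would CAS on the first 4 bytes of the Unsafe allocation itself; here an
// AtomicInteger stands in for those 4 bytes.
class RefCountedSketch {
    private final AtomicInteger refs = new AtomicInteger(1); // creator holds one ref
    private boolean freed = false;

    // Take a reference; returns false if the memory was already released.
    boolean tryRef() {
        for (;;) {
            int n = refs.get();
            if (n <= 0)
                return false;             // raced with the final unref
            if (refs.compareAndSet(n, n + 1))
                return true;
        }
    }

    // Drop a reference; the count reaching zero frees the memory.
    void unref() {
        if (refs.decrementAndGet() == 0)
            freed = true;                 // stand-in for Unsafe.freeMemory()
    }

    boolean isFreed() { return freed; }
}
```

Note the tryRef loop refuses to resurrect a count that already reached zero, which is the subtlety a bare incrementAndGet would miss once the count lives in memory that may be freed concurrently.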





[jira] [Commented] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-21 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221226#comment-14221226
 ] 

Kishan Karunaratne commented on CASSANDRA-8285:
---

Currently running Ruby duration tests against C* 2.0 head (2.0.12) in both 
duration and endurance fashion. This is another 6-day trial, and is about 40% 
complete (57.6h). So far no errors.

 OOME in Cassandra 2.0.11
 

 Key: CASSANDRA-8285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
 Cassandra 2.0.11 + ruby-driver 1.0-beta
Reporter: Pierre Laporte
Assignee: Aleksey Yeschenko
 Attachments: OOME_node_system.log, gc.log.gz, 
 heap-usage-after-gc-zoom.png, heap-usage-after-gc.png


 We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
 with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
 2.0.8-snapshot.
 Attached are :
 | OOME_node_system.log | The system.log of one Cassandra node that crashed |
 | gc.log.gz | The GC log on the same node |
 | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
 |
 | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
 Workload :
 Our test executes 5 CQL statements (select, insert, select, delete, select) 
 for a given unique id, over 3 days, using multiple threads.  There is no 
 change in the workload during the test.
 Symptoms :
 In the attached log, it seems something starts in Cassandra between 
 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
 fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
 in the logs.
 I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
 error does not occur.  It seems specific to 2.0.11.





[jira] [Commented] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221294#comment-14221294
 ] 

Aleksey Yeschenko commented on CASSANDRA-8280:
--

Looks to me like you don't need the partition key length check in 
ModificationStatement - the subsequent call to 
ThriftValidation.validateKey(cfm, key) will validate the length.

Also, addUpdateForKey() should be overridden in DeleteStatement, too.

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0-v2.txt, 8280-2.0-v3.txt, 8280-2.0.txt, 
 8280-2.1-v2.txt, 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just fine. 
 Cassandra will then crash on the next commit and never recover. I rated it 
 as Critical because it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID
  
 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY 
 (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel); 
  
 class CassandraDemo(object):
 
     def __init__(self):
         ips = ["127.0.0.1"]
         ap = PlainTextAuthProvider(username="cs", password="cs")
         reconnection_policy = ConstantReconnectionPolicy(20.0,
             max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
             reconnection_policy=reconnection_policy)
         self.session = cluster.connect("cs")
 
     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)
 
     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = "X" * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?)"
         args = ("foo", k1, long_string)
 
         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
             "{'class': 'SimpleStrategy', 'replication_factor': 1}")
 
 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at 
 org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 

[jira] [Created] (CASSANDRA-8363) BUG

2014-11-21 Thread ZOILA (JIRA)
ZOILA created CASSANDRA-8363:


 Summary: BUG
 Key: CASSANDRA-8363
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8363
 Project: Cassandra
  Issue Type: Test
  Components: Core
 Environment: LINUX
Reporter: ZOILA
 Fix For: 2.0.12








[jira] [Created] (CASSANDRA-8364) ABC

2014-11-21 Thread ZOILA (JIRA)
ZOILA created CASSANDRA-8364:


 Summary: ABC
 Key: CASSANDRA-8364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8364
 Project: Cassandra
  Issue Type: Sub-task
Reporter: ZOILA
Priority: Critical








cassandra git commit: Ninja: fix check for duplicate cols in INSERT

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 084d93daf -> 025b4060c


Ninja: fix check for duplicate cols in INSERT

This was caused by the changes for CASSANDRA-8178


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/025b4060
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/025b4060
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/025b4060

Branch: refs/heads/cassandra-2.0
Commit: 025b4060cb3088f8815909e1710801f757e7a497
Parents: 084d93d
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:21:36 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:21:36 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/025b4060/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 022af26..e2da251 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -138,13 +138,17 @@ public class UpdateStatement extends ModificationStatement
 
         for (int i = 0; i < columnNames.size(); i++)
         {
-            CFDefinition.Name name = cfDef.get(columnNames.get(i).prepare(cfDef.cfm));
+            ColumnIdentifier id = columnNames.get(i).prepare(cfDef.cfm);
+            CFDefinition.Name name = cfDef.get(id);
             if (name == null)
-                throw new InvalidRequestException(String.format("Unknown identifier %s", columnNames.get(i)));
+                throw new InvalidRequestException(String.format("Unknown identifier %s", id));
 
             for (int j = 0; j < i; j++)
-                if (name.name.equals(columnNames.get(j)))
-                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", name));
+            {
+                ColumnIdentifier otherId = columnNames.get(j).prepare(cfDef.cfm);
+                if (id.equals(otherId))
+                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", id));
+            }
 
             Term.Raw value = columnValues.get(i);
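Seen in isolation, the bug was a like-for-unlike comparison: a prepared name was compared against the raw, unprepared entries of columnNames, so duplicates never matched. A simplified, hypothetical stand-in for the corrected check, where prepare() merely models identifier normalization and is not Cassandra's actual type:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the duplicate-column check: prepare each raw
// name once, then compare prepared identifiers with prepared identifiers.
class DuplicateColumnCheck {
    // Models ColumnIdentifier preparation (e.g. case-insensitive folding).
    static String prepare(String raw) { return raw.toLowerCase(); }

    static boolean hasDuplicate(List<String> columnNames) {
        for (int i = 0; i < columnNames.size(); i++) {
            String id = prepare(columnNames.get(i));
            for (int j = 0; j < i; j++)
                if (id.equals(prepare(columnNames.get(j))))
                    return true; // would raise "Multiple definitions found for column"
        }
        return false;
    }
}
```

With the old comparison, two spellings that prepare to the same identifier slipped past the check; comparing prepared forms on both sides closes that gap.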
 



[jira] [Commented] (CASSANDRA-8363) BUG

2014-11-21 Thread ZOILA (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14221304#comment-14221304
 ] 

ZOILA commented on CASSANDRA-8363:
--

AS

 BUG
 ---

 Key: CASSANDRA-8363
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8363
 Project: Cassandra
  Issue Type: Test
  Components: Core
 Environment: LINUX
Reporter: ZOILA
 Fix For: 2.0.12








[1/2] cassandra git commit: Ninja: fix check for duplicate cols in INSERT

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 705e5e47d -> 6826888be


Ninja: fix check for duplicate cols in INSERT

This was caused by the changes for CASSANDRA-8178


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/025b4060
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/025b4060
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/025b4060

Branch: refs/heads/cassandra-2.1
Commit: 025b4060cb3088f8815909e1710801f757e7a497
Parents: 084d93d
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:21:36 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:21:36 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/025b4060/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 022af26..e2da251 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -138,13 +138,17 @@ public class UpdateStatement extends ModificationStatement
 
         for (int i = 0; i < columnNames.size(); i++)
         {
-            CFDefinition.Name name = cfDef.get(columnNames.get(i).prepare(cfDef.cfm));
+            ColumnIdentifier id = columnNames.get(i).prepare(cfDef.cfm);
+            CFDefinition.Name name = cfDef.get(id);
             if (name == null)
-                throw new InvalidRequestException(String.format("Unknown identifier %s", columnNames.get(i)));
+                throw new InvalidRequestException(String.format("Unknown identifier %s", id));
 
             for (int j = 0; j < i; j++)
-                if (name.name.equals(columnNames.get(j)))
-                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", name));
+            {
+                ColumnIdentifier otherId = columnNames.get(j).prepare(cfDef.cfm);
+                if (id.equals(otherId))
+                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", id));
+            }
 
             Term.Raw value = columnValues.get(i);
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-21 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6826888b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6826888b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6826888b

Branch: refs/heads/cassandra-2.1
Commit: 6826888be8604b3336e994c18f4c0e51f393d071
Parents: 705e5e4 025b406
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:25:41 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:25:41 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6826888b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index b11173a,e2da251..2c87173
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@@ -136,13 -138,17 +136,17 @@@ public class UpdateStatement extends Mo
  
          for (int i = 0; i < columnNames.size(); i++)
          {
-             ColumnDefinition def = cfm.getColumnDefinition(columnNames.get(i).prepare(cfm));
 -            ColumnIdentifier id = columnNames.get(i).prepare(cfDef.cfm);
 -            CFDefinition.Name name = cfDef.get(id);
 -            if (name == null)
++            ColumnIdentifier id = columnNames.get(i).prepare(cfm);
++            ColumnDefinition def = cfm.getColumnDefinition(id);
 +            if (def == null)
-                 throw new InvalidRequestException(String.format("Unknown identifier %s", columnNames.get(i)));
+                 throw new InvalidRequestException(String.format("Unknown identifier %s", id));
  
              for (int j = 0; j < i; j++)
-                 if (def.name.equals(columnNames.get(j)))
-                     throw new InvalidRequestException(String.format("Multiple definitions found for column %s", def.name));
+             {
 -                ColumnIdentifier otherId = columnNames.get(j).prepare(cfDef.cfm);
++                ColumnIdentifier otherId = columnNames.get(j).prepare(cfm);
+                 if (id.equals(otherId))
+                     throw new InvalidRequestException(String.format("Multiple definitions found for column %s", id));
+             }
  
              Term.Raw value = columnValues.get(i);
  



[jira] [Updated] (CASSANDRA-8363) BUG

2014-11-21 Thread ZOILA (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZOILA updated CASSANDRA-8363:
-
Issue Type: Task  (was: Test)

 BUG
 ---

 Key: CASSANDRA-8363
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8363
 Project: Cassandra
  Issue Type: Task
  Components: Core
 Environment: LINUX
Reporter: ZOILA
 Fix For: 2.0.12








[1/3] cassandra git commit: Ninja: fix check for duplicate cols in INSERT

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 07893d704 -> c7a932c57


Ninja: fix check for duplicate cols in INSERT

This was caused by the changes for CASSANDRA-8178


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/025b4060
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/025b4060
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/025b4060

Branch: refs/heads/trunk
Commit: 025b4060cb3088f8815909e1710801f757e7a497
Parents: 084d93d
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:21:36 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:21:36 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/025b4060/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index 022af26..e2da251 100644
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@ -138,13 +138,17 @@ public class UpdateStatement extends ModificationStatement
 
         for (int i = 0; i < columnNames.size(); i++)
         {
-            CFDefinition.Name name = cfDef.get(columnNames.get(i).prepare(cfDef.cfm));
+            ColumnIdentifier id = columnNames.get(i).prepare(cfDef.cfm);
+            CFDefinition.Name name = cfDef.get(id);
             if (name == null)
-                throw new InvalidRequestException(String.format("Unknown identifier %s", columnNames.get(i)));
+                throw new InvalidRequestException(String.format("Unknown identifier %s", id));
 
             for (int j = 0; j < i; j++)
-                if (name.name.equals(columnNames.get(j)))
-                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", name));
+            {
+                ColumnIdentifier otherId = columnNames.get(j).prepare(cfDef.cfm);
+                if (id.equals(otherId))
+                    throw new InvalidRequestException(String.format("Multiple definitions found for column %s", id));
+            }
 
             Term.Raw value = columnValues.get(i);
 



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-21 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7a932c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7a932c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7a932c5

Branch: refs/heads/trunk
Commit: c7a932c57add10c7c455fb2b87d86f74b4e0cd3d
Parents: 07893d7 6826888
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:26:27 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:26:27 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7a932c5/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-21 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6826888b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6826888b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6826888b

Branch: refs/heads/trunk
Commit: 6826888be8604b3336e994c18f4c0e51f393d071
Parents: 705e5e4 025b406
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 13:25:41 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 13:25:41 2014 -0600

--
 .../cassandra/cql3/statements/UpdateStatement.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6826888b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
index b11173a,e2da251..2c87173
--- a/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
@@@ -136,13 -138,17 +136,17 @@@ public class UpdateStatement extends Mo
  
          for (int i = 0; i < columnNames.size(); i++)
          {
-             ColumnDefinition def = cfm.getColumnDefinition(columnNames.get(i).prepare(cfm));
 -            ColumnIdentifier id = columnNames.get(i).prepare(cfDef.cfm);
 -            CFDefinition.Name name = cfDef.get(id);
 -            if (name == null)
++            ColumnIdentifier id = columnNames.get(i).prepare(cfm);
++            ColumnDefinition def = cfm.getColumnDefinition(id);
 +            if (def == null)
-                 throw new InvalidRequestException(String.format("Unknown identifier %s", columnNames.get(i)));
+                 throw new InvalidRequestException(String.format("Unknown identifier %s", id));
  
              for (int j = 0; j < i; j++)
-                 if (def.name.equals(columnNames.get(j)))
-                     throw new InvalidRequestException(String.format("Multiple definitions found for column %s", def.name));
+             {
 -                ColumnIdentifier otherId = columnNames.get(j).prepare(cfDef.cfm);
++                ColumnIdentifier otherId = columnNames.get(j).prepare(cfm);
+                 if (id.equals(otherId))
+                     throw new InvalidRequestException(String.format("Multiple definitions found for column %s", id));
+             }
  
              Term.Raw value = columnValues.get(i);
  



[jira] [Commented] (CASSANDRA-8351) Running COPY FROM in cqlsh aborts with errors or segmentation fault

2014-11-21 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221327#comment-14221327
 ] 

Mikhail Stepura commented on CASSANDRA-8351:


[~joechu] which OS are you running?

 Running COPY FROM in cqlsh aborts with errors or segmentation fault
 ---

 Key: CASSANDRA-8351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8351
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor
 Attachments: stress.cql, stress.csv


 Running Cassandra 2.1.2 binary tarball on a single instance.
 Put together a script to try to reproduce this using data generated by 
 cassandra-stress.
 Reproduction steps: Download files and run cqlsh -f stress.cql
 This may need to run a couple of times before errors are encountered. I've 
 seen this work best when running after a fresh install.
 Errors seen:
 1.Segmentation fault (core dumped)
 2.stress.cql:24:line contains NULL byte
stress.cql:24:Aborting import at record #0. Previously-inserted values 
 still present.
71 rows imported in 0.100 seconds.
 3.   *** glibc detected *** python: corrupted double-linked list: 
 0x01121ad0 ***
 === Backtrace: =
 /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f80fe0cdb96]
 /lib/x86_64-linux-gnu/libc.so.6(+0x7fead)[0x7f80fe0ceead]
 python[0x42615d]
 python[0x501dc8]
 python[0x4ff715]
 python[0x425d02]
 python(PyEval_EvalCodeEx+0x1c4)[0x575db4]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577ab0]
 python(PyObject_Call+0x36)[0x4d91b6]
 python[0x4c91fa]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_CallObjectWithKeywords+0x36)[0x4d97c6]
 python[0x4f7f58]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f80ff369e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f80fe1433fd]
 === Memory map: 
 0040-00672000 r-xp  08:01 1447344
 /usr/bin/python2.7
 00871000-00872000 r--p 00271000 08:01 1447344
 /usr/bin/python2.7
 00872000-008db000 rw-p 00272000 08:01 1447344
 /usr/bin/python2.7
 008db000-008ed000 rw-p  00:00 0 
 0090e000-0126 rw-p  00:00 0  
 [heap]
 7f80ec00-7f80ec0aa000 rw-p  00:00 0 
 7f80ec0aa000-7f80f000 ---p  00:00 0 
 7f80f000-7f80f0021000 rw-p  00:00 0 
 7f80f0021000-7f80f400 ---p  00:00 0 
 7f80f400-7f80f4021000 rw-p  00:00 0 
 7f80f4021000-7f80f800 ---p  00:00 0 
 7f80fa713000-7f80fa714000 ---p  00:00 0 
 7f80fa714000-7f80faf14000 rw-p  00:00 0  
 [stack:7493]
 7f80faf14000-7f80faf15000 ---p  00:00 0 
 7f80faf15000-7f80fb715000 rw-p  00:00 0  
 [stack:7492]
 7f80fb715000-7f80fb716000 ---p  00:00 0 
 7f80fb716000-7f80fbf16000 rw-p  00:00 0  
 [stack:7491]
 7f80fbf16000-7f80fbf21000 r-xp  08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fbf21000-7f80fc12 ---p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc12-7f80fc121000 r--p a000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc121000-7f80fc122000 rw-p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc122000-7f80fc133000 r-xp  08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc133000-7f80fc332000 ---p 00011000 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc332000-7f80fc333000 r--p 0001 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc333000-7f80fc335000 rw-p 00011000 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 

[jira] [Reopened] (CASSANDRA-8364) ABC

2014-11-21 Thread ZOILA (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZOILA reopened CASSANDRA-8364:
--

 ABC
 ---

 Key: CASSANDRA-8364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8364
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: ZOILA
Priority: Critical
 Fix For: 2.0.12






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8364) ABC

2014-11-21 Thread ZOILA (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZOILA updated CASSANDRA-8364:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: CASSANDRA-8363)

 ABC
 ---

 Key: CASSANDRA-8364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8364
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: ZOILA
Priority: Critical
 Fix For: 2.0.12






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221331#comment-14221331
 ] 

Tyler Hobbs commented on CASSANDRA-7563:


When playing around with this, I discovered that you can trigger an assertion 
error when creating a function with a UDT without an explicit keyspace:

{noformat}
cqlsh> use ks1;
cqlsh:ks1> create type mytype (a int);
cqlsh:ks1> create function bar (a mytype) RETURNS mytype LANGUAGE java AS $$return a;$$;
ErrorMessage code= [Server error] message=java.lang.AssertionError
{noformat}

{noformat}
java.lang.AssertionError: null
at org.apache.cassandra.config.Schema.getKSMetaData(Schema.java:222) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawUT.prepare(CQL3Type.java:510) 
~[main/:na]
at 
org.apache.cassandra.cql3.statements.CreateFunctionStatement.announceMigration(CreateFunctionStatement.java:115)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:80)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:248) 
~[main/:na]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_25]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
{noformat}

A quick investigation shows that the keyspace being passed to {{prepare()}} is 
null.
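
The assertion fires because `Schema.getKSMetaData` is handed a null keyspace: the UDT name in the function signature is unqualified, and `prepare()` receives no current keyspace. A sketch of the kind of guard that would surface this as a request error instead of an AssertionError (the method and parameter names here are hypothetical, not Cassandra's actual API):

```java
public class TypeKeyspaceResolver {
    // Hypothetical helper: pick the keyspace for an unqualified user type,
    // failing loudly instead of letting a null reach schema lookups.
    static String resolve(String explicitKeyspace, String sessionKeyspace) {
        if (explicitKeyspace != null)
            return explicitKeyspace;
        if (sessionKeyspace != null)
            return sessionKeyspace;
        throw new IllegalArgumentException(
                "User type must be qualified with a keyspace, or a keyspace must be set with USE");
    }
}
```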

Other than that, the post-test cleanup seems to have problems dropping some of 
the functions.  I believe it's due to a signature mismatch.  This doesn't show 
up as an error, since {{DROP IF EXISTS}} is used, but it causes {{DROP TYPE}} 
statements to fail later.  I'm not sure if this is due to the way the tests are 
working, or if it's an actual problem with dropping functions.  (I haven't 
narrowed it down to a particular set of functions, yet.)

 UserType, TupleType and collections in UDFs
 ---

 Key: CASSANDRA-7563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt, 
 7563v4.txt, 7563v5.txt, 7563v6.txt


 * is Java Driver as a dependency required ?
 * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
 * CQL {{DROP TYPE}} must check UDFs
 * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Deleted] (CASSANDRA-8364) ABC

2014-11-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams deleted CASSANDRA-8364:



 ABC
 ---

 Key: CASSANDRA-8364
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8364
 Project: Cassandra
  Issue Type: Bug
Reporter: ZOILA
Priority: Critical





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Deleted] (CASSANDRA-8363) BUG

2014-11-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams deleted CASSANDRA-8363:



 BUG
 ---

 Key: CASSANDRA-8363
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8363
 Project: Cassandra
  Issue Type: Task
 Environment: LINUX
Reporter: ZOILA
  Labels: ABC





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8353) Prepared statement doesn't revalidate after table schema changes

2014-11-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221404#comment-14221404
 ] 

Tyler Hobbs commented on CASSANDRA-8353:


We have CASSANDRA-7910 and CASSANDRA-7875 already, so this can probably be 
resolved as a duplicate.

 Prepared statement doesn't revalidate after table schema changes
 

 Key: CASSANDRA-8353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8353
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Michał Jaszczyk

 Given this simple table:
 {code}
 CREATE TABLE test1 (
   key TEXT,
   value TEXT,
   PRIMARY KEY (key)
 );
 {code}
 I prepare the following statement:
 {code}
 SELECT * FROM test1;
 {code}
 I run queries based on the statement, which return the expected results.
 Then I update the schema definition like this:
 {code}
 ALTER TABLE test1 ADD value2 TEXT;
 {code}
 I populate the value2 values and use the same statement again. The results 
 returned by the same query don't include value2. I'm sure it is not cached in 
 the driver/application, because I was starting a new process after changing 
 the schema.
 This looks like a bug to me. Please correct me if it works like this on purpose.
 I'm using the Ruby CQL driver, but I believe it is not related.
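
The behaviour described above is what a prepare-time snapshot of table metadata produces: the column set for `SELECT *` is expanded once, when the statement is prepared and cached, so a later ALTER TABLE is invisible until the cached entry is invalidated or re-prepared. A toy model of such a cache (not Cassandra's actual implementation) makes the staleness concrete:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PreparedStatementCacheDemo {
    // Live table schema: column names in declaration order.
    final List<String> tableColumns = new ArrayList<>(List.of("key", "value"));
    // Prepared-statement cache: query string -> column list captured at prepare time.
    final Map<String, List<String>> prepared = new HashMap<>();

    List<String> prepare(String query) {
        // The column set for SELECT * is expanded ONCE, here -- which is why a
        // later ALTER TABLE does not show up in results for the cached statement.
        return prepared.computeIfAbsent(query, q -> List.copyOf(tableColumns));
    }
}
```

After preparing `SELECT * FROM test1;`, adding `value2` to `tableColumns` and preparing the same string again still returns the stale two-column list, matching the report.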



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8351) Running COPY FROM in cqlsh aborts with errors or segmentation fault

2014-11-21 Thread Joseph Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221408#comment-14221408
 ] 

Joseph Chu commented on CASSANDRA-8351:
---

[~mishail] I encountered this with Ubuntu Linux as well as Mac OSX.

 Running COPY FROM in cqlsh aborts with errors or segmentation fault
 ---

 Key: CASSANDRA-8351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8351
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor
 Attachments: stress.cql, stress.csv


 Running Cassandra 2.1.2 binary tarball on a single instance.
 Put together a script to try to reproduce this using data generated by 
 cassandra-stress.
 Reproduction steps: Download files and run cqlsh -f stress.cql
 This may need to run a couple of times before errors are encountered. I've 
 seen this work best when running after a fresh install.
 Errors seen:
 1.Segmentation fault (core dumped)
 2.stress.cql:24:line contains NULL byte
stress.cql:24:Aborting import at record #0. Previously-inserted values 
 still present.
71 rows imported in 0.100 seconds.
 3.   *** glibc detected *** python: corrupted double-linked list: 
 0x01121ad0 ***
 === Backtrace: =
 /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f80fe0cdb96]
 /lib/x86_64-linux-gnu/libc.so.6(+0x7fead)[0x7f80fe0ceead]
 python[0x42615d]
 python[0x501dc8]
 python[0x4ff715]
 python[0x425d02]
 python(PyEval_EvalCodeEx+0x1c4)[0x575db4]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577ab0]
 python(PyObject_Call+0x36)[0x4d91b6]
 python[0x4c91fa]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_CallObjectWithKeywords+0x36)[0x4d97c6]
 python[0x4f7f58]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f80ff369e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f80fe1433fd]
 === Memory map: 
 0040-00672000 r-xp  08:01 1447344
 /usr/bin/python2.7
 00871000-00872000 r--p 00271000 08:01 1447344
 /usr/bin/python2.7
 00872000-008db000 rw-p 00272000 08:01 1447344
 /usr/bin/python2.7
 008db000-008ed000 rw-p  00:00 0 
 0090e000-0126 rw-p  00:00 0  
 [heap]
 7f80ec00-7f80ec0aa000 rw-p  00:00 0 
 7f80ec0aa000-7f80f000 ---p  00:00 0 
 7f80f000-7f80f0021000 rw-p  00:00 0 
 7f80f0021000-7f80f400 ---p  00:00 0 
 7f80f400-7f80f4021000 rw-p  00:00 0 
 7f80f4021000-7f80f800 ---p  00:00 0 
 7f80fa713000-7f80fa714000 ---p  00:00 0 
 7f80fa714000-7f80faf14000 rw-p  00:00 0  
 [stack:7493]
 7f80faf14000-7f80faf15000 ---p  00:00 0 
 7f80faf15000-7f80fb715000 rw-p  00:00 0  
 [stack:7492]
 7f80fb715000-7f80fb716000 ---p  00:00 0 
 7f80fb716000-7f80fbf16000 rw-p  00:00 0  
 [stack:7491]
 7f80fbf16000-7f80fbf21000 r-xp  08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fbf21000-7f80fc12 ---p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc12-7f80fc121000 r--p a000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc121000-7f80fc122000 rw-p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc122000-7f80fc133000 r-xp  08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc133000-7f80fc332000 ---p 00011000 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc332000-7f80fc333000 r--p 0001 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc333000-7f80fc335000 rw-p 00011000 08:01 1585974
 

[jira] [Resolved] (CASSANDRA-8353) Prepared statement doesn't revalidate after table schema changes

2014-11-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8353.
--
Resolution: Duplicate

 Prepared statement doesn't revalidate after table schema changes
 

 Key: CASSANDRA-8353
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8353
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Michał Jaszczyk

 Given this simple table:
 {code}
 CREATE TABLE test1 (
   key TEXT,
   value TEXT,
   PRIMARY KEY (key)
 );
 {code}
 I prepare the following statement:
 {code}
 SELECT * FROM test1;
 {code}
 I run queries based on the statement, which return the expected results.
 Then I update the schema definition like this:
 {code}
 ALTER TABLE test1 ADD value2 TEXT;
 {code}
 I populate the value2 values and use the same statement again. The results 
 returned by the same query don't include value2. I'm sure it is not cached in 
 the driver/application, because I was starting a new process after changing 
 the schema.
 This looks like a bug to me. Please correct me if it works like this on purpose.
 I'm using the Ruby CQL driver, but I believe it is not related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix size calculations for prepared statements

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 6826888be -> 1bb2dd906


Fix size calculations for prepared statements

Patch by Benjamin Lerer; reviewed by Dave Brosius for CASSANDRA-8231


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb2dd90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb2dd90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb2dd90

Branch: refs/heads/cassandra-2.1
Commit: 1bb2dd906c1da04be602aa1cec988c4e15bf1ffc
Parents: 6826888
Author: blerer b_le...@hotmail.com
Authored: Fri Nov 21 14:54:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 14:54:47 2014 -0600

--
 CHANGES.txt |   1 +
 bin/cassandra.bat   |   2 +-
 bin/cassandra.in.sh |   2 +-
 build.xml   |  10 +-
 conf/cassandra-env.ps1  |   2 +-
 conf/cassandra-env.sh   |   2 +-
 debian/cassandra.in.sh  |   2 +-
 lib/jamm-0.2.8.jar  | Bin 13684 -> 0 bytes
 lib/licenses/jamm-0.2.8.txt | 202 ---
 .../org/apache/cassandra/config/CFMetaData.java |   2 +
 .../cql3/MeasurableForPreparedCache.java|  26 ---
 .../apache/cassandra/cql3/QueryProcessor.java   |  33 ++-
 .../cassandra/cql3/functions/Function.java  |   2 +
 .../cql3/statements/BatchStatement.java |  14 +-
 .../cql3/statements/ModificationStatement.java  |  14 +-
 .../cql3/statements/SelectStatement.java|  18 +-
 .../cassandra/db/marshal/AbstractType.java  |   2 +
 17 files changed, 42 insertions(+), 292 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e008ab9..96da1bd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Fix high size calculations for prepared statements (CASSANDRA-8231)
  * Centralize shared executors (CASSANDRA-8055)
  * Fix filtering for CONTAINS (KEY) relations on frozen collection
clustering columns when the query is restricted to a single

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/bin/cassandra.bat
--
diff --git a/bin/cassandra.bat b/bin/cassandra.bat
index 5169c44..99b291a 100644
--- a/bin/cassandra.bat
+++ b/bin/cassandra.bat
@@ -54,7 +54,7 @@ if NOT DEFINED JAVA_HOME goto :err
 REM 
-
 REM JVM Opts we'll use in legacy run or installation
 set JAVA_OPTS=-ea^
- -javaagent:%CASSANDRA_HOME%\lib\jamm-0.2.8.jar^
+ -javaagent:%CASSANDRA_HOME%\lib\jamm-0.3.0.jar^
  -Xms2G^
  -Xmx2G^
  -XX:+HeapDumpOnOutOfMemoryError^

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/bin/cassandra.in.sh
--
diff --git a/bin/cassandra.in.sh b/bin/cassandra.in.sh
index 5b4ee0f..b6a53f3 100644
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@ -48,5 +48,5 @@ done
 if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \< "1.6.0" ] \
   || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
 then
-JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.8.jar"
+JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.3.0.jar"
 fi

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/build.xml
--
diff --git a/build.xml b/build.xml
index 43fa531..2e5d0ac 100644
--- a/build.xml
+++ b/build.xml
@@ -369,7 +369,7 @@
   </dependency>
   <dependency groupId="com.googlecode.json-simple" artifactId="json-simple" version="1.1"/>
   <dependency groupId="com.boundary" artifactId="high-scale-lib" version="1.0.6"/>
-  <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.2.8"/>
+  <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.3.0"/>
   <dependency groupId="com.thinkaurelius.thrift" artifactId="thrift-server" version="0.3.7">
     <exclusion groupId="org.slf4j" artifactId="slf4j-log4j12"/>
   </dependency>
@@ -688,7 +688,7 @@
   <pathelement location="${test.conf}"/>
 </classpath>
 <jvmarg value="-Dstorage-config=${test.conf}"/>
-<jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.8.jar" />
+<jvmarg value="-javaagent:${basedir}/lib/jamm-0.3.0.jar" />
 <jvmarg value="-ea"/>
   </java>
 </target>
@@ -1107,7 +1107,7 @@
 <formatter type="brief" usefile="false"/>
 <jvmarg value="-Dstorage-config=${test.conf}"/>
 jvmarg 

[jira] [Updated] (CASSANDRA-8231) Wrong size of cached prepared statements

2014-11-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8231:
--
Attachment: CASSANDRA-8231-V2-trunk.txt

New patch for trunk.

 Wrong size of cached prepared statements
 

 Key: CASSANDRA-8231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8231
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: 8231-notes.txt, CASSANDRA-8231-V2-trunk.txt, 
 CASSANDRA-8231-V2.txt, CASSANDRA-8231.txt, Unsafes.java


 Cassandra counts the memory footprint of prepared statements for caching 
 purposes. It seems that there is a problem with some statements, e.g. 
 SelectStatement: even a simple select is counted as a 100KB object, while 
 updates, deletes etc. come to a few hundred or thousand bytes. The result is 
 that the cache - QueryProcessor.preparedStatements - holds just a fraction of 
 the statements.
 I dug a little into the code, and the problem seems to be in jamm, in the class 
 MemoryMeter: if an instance contains a reference to a class, it counts the size 
 of the whole class too. SelectStatement references an EnumSet through 
 ResultSet.Metadata, and EnumSet holds a reference to the Enum class...
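
 The over-counting described above can be modelled with a toy deep-size walk: if the traversal follows a reference to a Class (or Enum) object and charges its whole metadata to the instance being measured, every statement holding, say, an EnumSet appears enormous. All sizes below are made up for illustration; this is not jamm's actual code (jamm 0.3.0, which the CASSANDRA-8231 patch pulls in, stops at such references):

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

public class DeepSizeDemo {
    // A node in a toy object graph: a flat per-instance size plus references.
    record Node(long flatSize, List<Object> refs) {}

    static long deepSize(Object root, boolean followClassRefs) {
        long total = 0;
        Deque<Object> stack = new ArrayDeque<>();
        Set<Object> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        stack.push(root);
        while (!stack.isEmpty()) {
            Object o = stack.pop();
            if (!seen.add(o))
                continue;
            if (o instanceof Class<?>) {
                // The buggy behaviour: charge the shared class metadata to the
                // instance being measured. The fixed behaviour skips this edge.
                if (followClassRefs)
                    total += 100_000; // stand-in for "size of the whole class"
                continue;
            }
            Node n = (Node) o;
            total += n.flatSize();
            n.refs().forEach(stack::push);
        }
        return total;
    }
}
```

 A 64-byte "statement" referencing a 48-byte "EnumSet" node that in turn references an Enum class measures 112 bytes with the class edge skipped, but over 100KB when it is followed - the same inflation reported for SelectStatement.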



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8281) CQLSSTableWriter close does not work

2014-11-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221465#comment-14221465
 ] 

Benjamin Lerer commented on CASSANDRA-8281:
---

[~yukim] can you review?

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Attachments: CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit; the same 
 code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed and blocks the program from 
 exiting.
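
 A JVM that "cannot exit" after close() returns usually means some non-daemon thread is still alive. Whether that is what changed between 2.0.10 and 2.1.1 is an assumption here, but the symptom itself is easy to reproduce:

```java
public class NonDaemonExitDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(2_000); // simulates a background thread never shut down
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // Threads are non-daemon by default: the JVM stays up until they finish,
        // even though main() has already returned.
        worker.setDaemon(false);
        worker.start();
        System.out.println("main() is done, but the process lingers until the worker exits");
    }
}
```

 Marking such a thread as a daemon, or joining it inside close(), lets the process exit promptly.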



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix size calculations for prepared statements

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk c7a932c57 -> 699a69de8


Fix size calculations for prepared statements

Patch by Benjamin Lerer; reviewed by Dave Brosius for CASSANDRA-8231


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb2dd90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb2dd90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb2dd90

Branch: refs/heads/trunk
Commit: 1bb2dd906c1da04be602aa1cec988c4e15bf1ffc
Parents: 6826888
Author: blerer b_le...@hotmail.com
Authored: Fri Nov 21 14:54:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 14:54:47 2014 -0600

--
 CHANGES.txt |   1 +
 bin/cassandra.bat   |   2 +-
 bin/cassandra.in.sh |   2 +-
 build.xml   |  10 +-
 conf/cassandra-env.ps1  |   2 +-
 conf/cassandra-env.sh   |   2 +-
 debian/cassandra.in.sh  |   2 +-
 lib/jamm-0.2.8.jar  | Bin 13684 -> 0 bytes
 lib/licenses/jamm-0.2.8.txt | 202 ---
 .../org/apache/cassandra/config/CFMetaData.java |   2 +
 .../cql3/MeasurableForPreparedCache.java|  26 ---
 .../apache/cassandra/cql3/QueryProcessor.java   |  33 ++-
 .../cassandra/cql3/functions/Function.java  |   2 +
 .../cql3/statements/BatchStatement.java |  14 +-
 .../cql3/statements/ModificationStatement.java  |  14 +-
 .../cql3/statements/SelectStatement.java|  18 +-
 .../cassandra/db/marshal/AbstractType.java  |   2 +
 17 files changed, 42 insertions(+), 292 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e008ab9..96da1bd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Fix high size calculations for prepared statements (CASSANDRA-8231)
  * Centralize shared executors (CASSANDRA-8055)
  * Fix filtering for CONTAINS (KEY) relations on frozen collection
clustering columns when the query is restricted to a single

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/bin/cassandra.bat
--
diff --git a/bin/cassandra.bat b/bin/cassandra.bat
index 5169c44..99b291a 100644
--- a/bin/cassandra.bat
+++ b/bin/cassandra.bat
@@ -54,7 +54,7 @@ if NOT DEFINED JAVA_HOME goto :err
 REM 
-
 REM JVM Opts we'll use in legacy run or installation
 set JAVA_OPTS=-ea^
- -javaagent:%CASSANDRA_HOME%\lib\jamm-0.2.8.jar^
+ -javaagent:%CASSANDRA_HOME%\lib\jamm-0.3.0.jar^
  -Xms2G^
  -Xmx2G^
  -XX:+HeapDumpOnOutOfMemoryError^

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/bin/cassandra.in.sh
--
diff --git a/bin/cassandra.in.sh b/bin/cassandra.in.sh
index 5b4ee0f..b6a53f3 100644
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@ -48,5 +48,5 @@ done
 if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \< "1.6.0" ] \
   || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
 then
-JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.8.jar"
+JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.3.0.jar"
 fi

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2dd90/build.xml
--
diff --git a/build.xml b/build.xml
index 43fa531..2e5d0ac 100644
--- a/build.xml
+++ b/build.xml
@@ -369,7 +369,7 @@
   </dependency>
   <dependency groupId="com.googlecode.json-simple" artifactId="json-simple" version="1.1"/>
   <dependency groupId="com.boundary" artifactId="high-scale-lib" version="1.0.6"/>
-  <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.2.8"/>
+  <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.3.0"/>
   <dependency groupId="com.thinkaurelius.thrift" artifactId="thrift-server" version="0.3.7">
     <exclusion groupId="org.slf4j" artifactId="slf4j-log4j12"/>
   </dependency>
@@ -688,7 +688,7 @@
   <pathelement location="${test.conf}"/>
 </classpath>
 <jvmarg value="-Dstorage-config=${test.conf}"/>
-<jvmarg value="-javaagent:${basedir}/lib/jamm-0.2.8.jar" />
+<jvmarg value="-javaagent:${basedir}/lib/jamm-0.3.0.jar" />
 <jvmarg value="-ea"/>
   </java>
 </target>
@@ -1107,7 +1107,7 @@
 <formatter type="brief" usefile="false"/>
 <jvmarg value="-Dstorage-config=${test.conf}"/>
 jvmarg 

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-21 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/699a69de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/699a69de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/699a69de

Branch: refs/heads/trunk
Commit: 699a69de8dc7744500f76d5cc4a64f3742001cd7
Parents: c7a932c 1bb2dd9
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 15:40:53 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 15:40:53 2014 -0600

--
 bin/cassandra.bat   |   2 +-
 bin/cassandra.in.sh |   2 +-
 build.xml   |  10 +-
 conf/cassandra-env.ps1  |   2 +-
 conf/cassandra-env.sh   |   2 +-
 debian/cassandra.in.sh  |   2 +-
 lib/jamm-0.2.8.jar  | Bin 13684 -> 0 bytes
 lib/jamm-0.3.0.jar  | Bin 0 -> 21149 bytes
 lib/licenses/jamm-0.2.8.txt | 202 ---
 lib/licenses/jamm-0.3.0.txt | 202 +++
 .../org/apache/cassandra/config/CFMetaData.java |   5 +
 .../cql3/MeasurableForPreparedCache.java|  26 ---
 .../apache/cassandra/cql3/QueryProcessor.java   |  27 ++-
 .../cassandra/cql3/functions/Function.java  |   4 +
 .../cql3/statements/BatchStatement.java |  16 +-
 .../cql3/statements/ModificationStatement.java  |  13 +-
 .../cql3/statements/SelectStatement.java|  18 +-
 .../cassandra/db/marshal/AbstractType.java  |   2 +
 18 files changed, 247 insertions(+), 288 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/699a69de/bin/cassandra.bat
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/699a69de/bin/cassandra.in.sh
--
diff --cc bin/cassandra.in.sh
index b4ed9b6,b6a53f3..6b0581d
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@@ -48,8 -48,5 +48,8 @@@ don
  if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \< "1.6.0" ] \
|| [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
  then
- JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.8.jar"
+ JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.3.0.jar"
  fi
 +
 +# Added sigar-bin to the java.library.path CASSANDRA-7838
 +JAVA_OPTS=$JAVA_OPTS:-Djava.library.path=$CASSANDRA_HOME/lib/sigar-bin

http://git-wip-us.apache.org/repos/asf/cassandra/blob/699a69de/build.xml
--
diff --cc build.xml
index c7aa83e,2e5d0ac..e5c5c83
--- a/build.xml
+++ b/build.xml
@@@ -334,11 -364,14 +334,11 @@@
  <dependency groupId="ch.qos.logback" artifactId="logback-classic" version="1.1.2"/>
  <dependency groupId="org.codehaus.jackson" artifactId="jackson-core-asl" version="1.9.2"/>
  <dependency groupId="org.codehaus.jackson" artifactId="jackson-mapper-asl" version="1.9.2"/>
 -<dependency groupId="jline" artifactId="jline" version="1.0">
 -  <exclusion groupId="junit" artifactId="junit"/>
 -</dependency>
  <dependency groupId="com.googlecode.json-simple" artifactId="json-simple" version="1.1"/>
  <dependency groupId="com.boundary" artifactId="high-scale-lib" version="1.0.6"/>
- <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.2.8"/>
+ <dependency groupId="com.github.jbellis" artifactId="jamm" version="0.3.0"/>
  <dependency groupId="com.thinkaurelius.thrift" artifactId="thrift-server" version="0.3.7">
 -  <exclusion groupId="org.slf4j" artifactId="slf4j-log4j12"/>
 +  <exclusion groupId="org.slf4j" artifactId="slf4j-log4j12"/>
  </dependency>
  <dependency groupId="org.yaml" artifactId="snakeyaml" version="1.11"/>
  <dependency groupId="org.apache.thrift" artifactId="libthrift" version="0.9.1"/>
@@@ -1113,7 -1107,7 +1113,7 @@@
  <formatter type="brief" usefile="false"/>
  <jvmarg value="-Dstorage-config=${test.conf}"/>
  <jvmarg value="-Djava.awt.headless=true"/>
- <jvmarg line="-javaagent:${basedir}/lib/jamm-0.2.8.jar ${additionalagent}" />
 -<jvmarg value="-javaagent:${basedir}/lib/jamm-0.3.0.jar" />
++<jvmarg value="-javaagent:${basedir}/lib/jamm-0.3.0.jar ${additionalagent}" />
  <jvmarg value="-ea"/>
  <jvmarg value="-Xss256k"/>
  <jvmarg value="-Dcassandra.memtable_row_overhead_computation_step=100"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/699a69de/conf/cassandra-env.ps1
--


[1/2] cassandra git commit: Add missing jamm jar from CASSANDRA-8231 commit

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 699a69de8 -> be0b451b7


Add missing jamm jar from CASSANDRA-8231 commit


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f02d1945
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f02d1945
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f02d1945

Branch: refs/heads/trunk
Commit: f02d19451a65e243819e56556291588c0531c62f
Parents: 1bb2dd9
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 15:42:49 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 15:42:49 2014 -0600

--
 lib/jamm-0.3.0.jar  | Bin 0 -> 21149 bytes
 lib/licenses/jamm-0.3.0.txt | 202 +++
 2 files changed, 202 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02d1945/lib/jamm-0.3.0.jar
--
diff --git a/lib/jamm-0.3.0.jar b/lib/jamm-0.3.0.jar
new file mode 100644
index 0000000..782f00c
Binary files /dev/null and b/lib/jamm-0.3.0.jar differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02d1945/lib/licenses/jamm-0.3.0.txt
--
diff --git a/lib/licenses/jamm-0.3.0.txt b/lib/licenses/jamm-0.3.0.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/lib/licenses/jamm-0.3.0.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor 

cassandra git commit: Add missing jamm jar from CASSANDRA-8231 commit

2014-11-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 1bb2dd906 -> f02d19451


Add missing jamm jar from CASSANDRA-8231 commit


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f02d1945
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f02d1945
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f02d1945

Branch: refs/heads/cassandra-2.1
Commit: f02d19451a65e243819e56556291588c0531c62f
Parents: 1bb2dd9
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 15:42:49 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 15:42:49 2014 -0600

--
 lib/jamm-0.3.0.jar  | Bin 0 -> 21149 bytes
 lib/licenses/jamm-0.3.0.txt | 202 +++
 2 files changed, 202 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02d1945/lib/jamm-0.3.0.jar
--
diff --git a/lib/jamm-0.3.0.jar b/lib/jamm-0.3.0.jar
new file mode 100644
index 0000000..782f00c
Binary files /dev/null and b/lib/jamm-0.3.0.jar differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02d1945/lib/licenses/jamm-0.3.0.txt
--
diff --git a/lib/licenses/jamm-0.3.0.txt b/lib/licenses/jamm-0.3.0.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/lib/licenses/jamm-0.3.0.txt
@@ -0,0 +1,202 @@

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-21 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be0b451b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be0b451b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be0b451b

Branch: refs/heads/trunk
Commit: be0b451b7cbe4c75ac5319325908842037e76e94
Parents: 699a69d f02d194
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Nov 21 15:43:21 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Nov 21 15:43:21 2014 -0600

--

--




[jira] [Updated] (CASSANDRA-8281) CQLSSTableWriter close does not work

2014-11-21 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-8281:
--
Reviewer: Yuki Morishita

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Attachments: CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit; the 
 same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed, and it blocks the program 
 from exiting.
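For readers unfamiliar with the failure mode: a process only exits once all of its non-daemon threads have finished, so a writer whose background flush thread is never signalled to stop will hang at shutdown. The sketch below is a hypothetical toy (SketchWriter is invented for illustration, not Cassandra's CQLSSTableWriter) showing the signal-and-join pattern a correct close() needs:

```python
import queue
import threading

class SketchWriter:
    """Toy stand-in for a buffered writer with a background flush
    thread (hypothetical; NOT Cassandra's CQLSSTableWriter). The
    flush thread is non-daemon, so unless close() signals it to stop
    and joins it, the process cannot exit."""

    _SENTINEL = object()

    def __init__(self):
        self._q = queue.Queue()
        self.flushed = []
        self._t = threading.Thread(target=self._drain)  # non-daemon
        self._t.start()

    def _drain(self):
        # Consume rows until the close() sentinel arrives.
        while True:
            item = self._q.get()
            if item is self._SENTINEL:
                return
            self.flushed.append(item)

    def add_row(self, row):
        self._q.put(row)

    def close(self):
        # Without this sentinel + join, _drain blocks forever on an
        # empty queue and the non-daemon thread keeps the process alive.
        self._q.put(self._SENTINEL)
        self._t.join()
```

A close() that flushes buffers but forgets to stop such a thread reproduces exactly this kind of hang.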



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8356:
---
Assignee: Aleksey Yeschenko

 Slice query on a super column family with counters doesn't get all the data
 ---

 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12


 We've finally been able to upgrade our cluster to 2.0.11, after 
 CASSANDRA-7188 was fixed.
 But now slice queries on a super column family with counters don't return 
 all the expected data. Because of all the trouble we had, we first thought 
 we had lost data, but there is a way to actually get the data, so nothing 
 is lost; it is just that Cassandra seems to incorrectly skip it.
 See the following CQL log:
 {noformat}
 cqlsh:Theme desc table theme_view;
 CREATE TABLE theme_view (
   key bigint,
   column1 varint,
   column2 text,
   value counter,
   PRIMARY KEY ((key), column1, column2)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=1.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:Theme select * from theme_view where key = 99421 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2011-03-26 | 5
  99421 | -12 | 2011-03-27 | 2
  99421 | -12 | 2011-03-28 |40
  99421 | -12 | 2011-03-29 |14
  99421 | -12 | 2011-03-30 |17
  99421 | -12 | 2011-03-31 | 5
  99421 | -12 | 2011-04-01 |37
  99421 | -12 | 2011-04-02 | 7
  99421 | -12 | 2011-04-03 | 4
 (10 rows)
 cqlsh:Theme select * from theme_view where key = 99421 and column1 = -12 
 limit 10;
  key   | column1 | column2| value
 ---+-++---
  99421 | -12 | 2011-03-25 |59
  99421 | -12 | 2014-05-06 |15
  99421 | -12 | 2014-06-06 | 7
  99421 | -12 | 2014-06-10 |22
  99421 | -12 | 2014-06-11 |34
  99421 | -12 | 2014-06-12 |35
  99421 | -12 | 2014-06-13 |26
  99421 | -12 | 2014-06-14 |16
  99421 | -12 | 2014-06-15 |24
  99421 | -12 | 2014-06-16 |25
 (10 rows)
 {noformat}
 As you can see, the second query should return data from 2012, but it does 
 not. Via Thrift, we have the exact same bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8356:
---
Fix Version/s: 2.0.12

 Slice query on a super column family with counters doesn't get all the data
 ---

 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8355) NPE when passing wrong argument in ALTER TABLE statement

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8355:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

 NPE when passing wrong argument in ALTER TABLE statement
 

 Key: CASSANDRA-8355
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8355
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2
Reporter: Pierre Laporte
Priority: Minor
 Fix For: 2.1.3


 When I tried to change the caching strategy of a table, I provided a wrong 
 argument {{'rows_per_partition' : ALL}} with an unquoted ALL. Cassandra 
 returned a SyntaxError, which is good, but it seems it was caused by a 
 NullPointerException.
 *Howto*
 {code}
 CREATE TABLE foo (k int primary key);
 ALTER TABLE foo WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};
 {code}
 *Output*
 {code}
 ErrorMessage code=2000 [Syntax error in CQL query] message=Failed parsing 
 statement: [ALTER TABLE foo WITH caching = {'keys' : 'all', 
 'rows_per_partition' : ALL};] reason: NullPointerException null
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8351) Running COPY FROM in cqlsh aborts with errors or segmentation fault

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8351:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3
   Labels: cqlsh  (was: )

 Running COPY FROM in cqlsh aborts with errors or segmentation fault
 ---

 Key: CASSANDRA-8351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8351
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor
  Labels: cqlsh
 Fix For: 2.1.3

 Attachments: stress.cql, stress.csv


 Running Cassandra 2.1.2 binary tarball on a single instance.
 Put together a script to try to reproduce this using data generated by 
 cassandra-stress.
 Reproduction steps: Download files and run cqlsh -f stress.cql
 This may need to run a couple of times before errors are encountered. I've 
 seen this work best when running after a fresh install.
 Errors seen:
 1. Segmentation fault (core dumped)
 2. stress.cql:24: line contains NULL byte
    stress.cql:24: Aborting import at record #0. Previously-inserted values 
    still present.
    71 rows imported in 0.100 seconds.
 3. *** glibc detected *** python: corrupted double-linked list: 
    0x01121ad0 ***
 === Backtrace: =
 /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7f80fe0cdb96]
 /lib/x86_64-linux-gnu/libc.so.6(+0x7fead)[0x7f80fe0ceead]
 python[0x42615d]
 python[0x501dc8]
 python[0x4ff715]
 python[0x425d02]
 python(PyEval_EvalCodeEx+0x1c4)[0x575db4]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python(PyEval_EvalFrameEx+0x7b8)[0x54c028]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577be2]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_EvalFrameEx+0x2035)[0x54d8a5]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalFrameEx+0xa02)[0x54c272]
 python(PyEval_EvalCodeEx+0x1a2)[0x575d92]
 python[0x577ab0]
 python(PyObject_Call+0x36)[0x4d91b6]
 python[0x4c91fa]
 python(PyObject_Call+0x36)[0x4d91b6]
 python(PyEval_CallObjectWithKeywords+0x36)[0x4d97c6]
 python[0x4f7f58]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f80ff369e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f80fe1433fd]
 === Memory map: 
 0040-00672000 r-xp  08:01 1447344
 /usr/bin/python2.7
 00871000-00872000 r--p 00271000 08:01 1447344
 /usr/bin/python2.7
 00872000-008db000 rw-p 00272000 08:01 1447344
 /usr/bin/python2.7
 008db000-008ed000 rw-p  00:00 0 
 0090e000-0126 rw-p  00:00 0  
 [heap]
 7f80ec00-7f80ec0aa000 rw-p  00:00 0 
 7f80ec0aa000-7f80f000 ---p  00:00 0 
 7f80f000-7f80f0021000 rw-p  00:00 0 
 7f80f0021000-7f80f400 ---p  00:00 0 
 7f80f400-7f80f4021000 rw-p  00:00 0 
 7f80f4021000-7f80f800 ---p  00:00 0 
 7f80fa713000-7f80fa714000 ---p  00:00 0 
 7f80fa714000-7f80faf14000 rw-p  00:00 0  
 [stack:7493]
 7f80faf14000-7f80faf15000 ---p  00:00 0 
 7f80faf15000-7f80fb715000 rw-p  00:00 0  
 [stack:7492]
 7f80fb715000-7f80fb716000 ---p  00:00 0 
 7f80fb716000-7f80fbf16000 rw-p  00:00 0  
 [stack:7491]
 7f80fbf16000-7f80fbf21000 r-xp  08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fbf21000-7f80fc12 ---p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc12-7f80fc121000 r--p a000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc121000-7f80fc122000 rw-p b000 08:01 1456254
 /usr/lib/python2.7/lib-dynload/_json.so
 7f80fc122000-7f80fc133000 r-xp  08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc133000-7f80fc332000 ---p 00011000 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc332000-7f80fc333000 r--p 0001 08:01 1585974
 /usr/local/lib/python2.7/dist-packages/blist/_blist.so
 7f80fc333000-7f80fc335000 rw-p 00011000 08:01 1585974
 

[jira] [Resolved] (CASSANDRA-8349) Using cqlsh to alter keyspaces causes tables not to be found

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8349.

Resolution: Duplicate

 Using cqlsh to alter keyspaces causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor
  Labels: cqlsh

 Running cqlsh using Cassandra 2.1.2 on a single node.
 Reproduction steps in cqlsh:
 CREATE KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1};
 CREATE TABLE a.a (a INT PRIMARY KEY);
 INSERT INTO a.a (a) VALUES (1);
 SELECT * FROM a.a;
 ALTER KEYSPACE a WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 2};
 SELECT * FROM a.a;
 DESCRIBE KEYSPACE a
 Errors:
 Column family 'a' not found
 Workaround:
 Restart cqlsh
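The symptom (tables unresolvable until cqlsh restarts) is consistent with client-side schema metadata being evicted on ALTER KEYSPACE and never repopulated. A hypothetical sketch of the refresh-in-place behaviour that would avoid it (the SchemaCache class and its methods are invented for illustration, not cqlsh's actual code):

```python
class SchemaCache:
    """Toy model of a client-side schema cache (hypothetical; not
    cqlsh's actual data structures). On ALTER KEYSPACE it updates
    keyspace-level info in place instead of evicting entries, so
    table lookups keep working without a client restart."""

    def __init__(self):
        self.tables = {}  # (keyspace, table) -> metadata dict

    def add_table(self, ks, table, meta):
        self.tables[(ks, table)] = meta

    def on_keyspace_altered(self, ks, new_replication):
        # Refresh in place: evicting the whole keyspace here without
        # re-reading the schema would reproduce the reported error.
        for (k, _t), meta in self.tables.items():
            if k == ks:
                meta["replication"] = new_replication

    def lookup(self, ks, table):
        return self.tables[(ks, table)]
```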



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-11-21 Thread Pierre Laporte (JIRA)
Pierre Laporte created CASSANDRA-8365:
-

 Summary: CamelCase name is used as index name instead of lowercase
 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Priority: Minor


In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
name is used as the index name, even though it is unquoted. Trying to quote 
the index name results in a syntax error.

However, when I try to delete the index, I have to quote the index name, 
otherwise I get an invalid-query error telling me that the index (lowercase) 
does not exist.

This seems inconsistent. Shouldn't the index name be lowercased before the 
index is created?

Here is the code to reproduce the issue:

{code}
cqlsh:schemabuilderit CREATE TABLE IndexTest (a int primary key, b int);
cqlsh:schemabuilderit CREATE INDEX FooBar on indextest (b);
cqlsh:schemabuilderit DESCRIBE TABLE indextest ;

CREATE TABLE schemabuilderit.indextest (
a int PRIMARY KEY,
b int
) ;
CREATE INDEX FooBar ON schemabuilderit.indextest (b);

cqlsh:schemabuilderit DROP INDEX FooBar;
code=2200 [Invalid query] message=Index 'foobar' could not be found in any of 
the tables of keyspace 'schemabuilderit'
{code}
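For reference, CQL treats unquoted identifiers as case-insensitive and folds them to lowercase, while double-quoted identifiers keep their exact case; that is why the DROP INDEX lookup is for 'foobar'. A minimal sketch of that folding rule (simplified; it ignores "" escapes and reserved words):

```python
def normalize_identifier(ident: str) -> str:
    """Simplified sketch of CQL identifier folding: unquoted
    identifiers are case-insensitive and fold to lowercase, while
    double-quoted identifiers keep their exact case. (Inner ""
    escapes and reserved words are ignored in this sketch.)"""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        return ident[1:-1]  # quoted: preserve case
    return ident.lower()    # unquoted: fold to lowercase
```

Under this rule, the unquoted FooBar in the CREATE INDEX statement would fold to foobar, matching what DROP INDEX later looks for.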




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8349) Using cqlsh to alter keyspaces causes tables not to be found

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8349:
---
Reproduced In: 2.1.2
   Labels: cqlsh  (was: )

 Using cqlsh to alter keyspaces causes tables not to be found
 

 Key: CASSANDRA-8349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8349
 Project: Cassandra
  Issue Type: Bug
Reporter: Joseph Chu
Priority: Minor
  Labels: cqlsh




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8347) 2.1.1: org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException after accidental computer crash

2014-11-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8347:
---
Description: 
{code}9:08:56.972 [SSTableBatchOpen:1] ERROR o.a.c.service.CassandraDaemon - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
 at 
org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:129)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_65]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_65]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_65]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Caused by: java.io.EOFException: null
 at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_65]
 at java.io.DataInputStream.readUTF(DataInputStream.java:589) ~[na:1.7.0_65]
 at java.io.DataInputStream.readUTF(DataInputStream.java:564) ~[na:1.7.0_65]
 at 
org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:104)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 ... 13 common frames omitted{code}

  was:
9:08:56.972 [SSTableBatchOpen:1] ERROR o.a.c.service.CassandraDaemon - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
 at 
org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:129)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
~[cassandra-all-2.1.1.jar:2.1.1]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_65]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_65]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_65]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Caused by: java.io.EOFException: null
 at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.7.0_65]
 at java.io.DataInputStream.readUTF(DataInputStream.java:589) ~[na:1.7.0_65]
 at java.io.DataInputStream.readUTF(DataInputStream.java:564) ~[na:1.7.0_65]
 at 
org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:104)
 ~[cassandra-all-2.1.1.jar:2.1.1]
 ... 13 common frames omitted


 2.1.1: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 java.io.EOFException after accidental computer crash
 

 Key: CASSANDRA-8347
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8347
 Project: 

[jira] [Resolved] (CASSANDRA-7453) Geo-replication in Cassandra

2014-11-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-7453.
--
Resolution: Not a Problem

This contradicts Cassandra design principles on a fundamental level, and cannot 
be implemented in a proper way, so closing it until something changes.

 Geo-replication in Cassandra
 

 Key: CASSANDRA-7453
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7453
 Project: Cassandra
  Issue Type: Wish
Reporter: Sergio Esteves
Priority: Minor

 Currently, a Cassandra cluster spanned across different datacenters 
 replicates all data to all datacenters when an update is performed. This is a 
 problem for the scalability of Cassandra as the number of datacenters 
 increases.
 It would be desirable to have some way to make Cassandra aware of the 
 location of data requests so that it could place replicas close to users and 
 avoid replicating to remote datacenters that are far away.
 To this end, we thought of implementing a new replication strategy and some 
 possible solutions to achieve our goals are:
 1) Using a byte from every row key to identify the location of the primary 
 datacenter where data should be stored (i.e., where it is likely to be 
 accessed).
 2) Using an additional CF for every row to specify the origin of the data.
 3) Replicating only to the 2 closest datacenters from the user (for 
 reliability reasons) upon a write update. For reads, a user would try to 
 fetch data from the 2 closest datacenters; if data is not available it would 
 try the other remaining datacenters. If data fails to be retrieved too many 
 times, it means that the client has moved to another part of the planet, and 
 thus data should be migrated accordingly. We could have some problems here, 
 like having the same rows, but with different CFs in different DCs (i.e., if 
 users perform updates to the same rows from different remote places).
 What would be the best way to do this?
 Thanks.
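 For what it's worth, approach (1) above can be sketched roughly as follows. 
 This is an illustration only, not Cassandra code: the datacenter names, the 
 key layout, and the helper functions are all invented for the example.

```python
# Illustrative sketch of approach (1)/(3): one byte of the row key selects
# the "primary" datacenter, and writes replicate only to the N closest DCs.
# All names here are hypothetical, not part of any Cassandra API.

DATACENTERS = ["dc-eu", "dc-us", "dc-asia"]

def primary_dc(row_key: bytes) -> str:
    """Pick the primary datacenter from the first byte of the row key."""
    return DATACENTERS[row_key[0] % len(DATACENTERS)]

def replica_dcs(row_key: bytes, rf_dcs: int = 2) -> list:
    """Approach (3): the primary DC plus the next-closest one(s)."""
    start = row_key[0] % len(DATACENTERS)
    return [DATACENTERS[(start + i) % len(DATACENTERS)] for i in range(rf_dcs)]

key = bytes([1]) + b"user-42"
print(primary_dc(key))   # dc-us
print(replica_dcs(key))  # ['dc-us', 'dc-asia']
```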



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221557#comment-14221557
 ] 

Aleksey Yeschenko commented on CASSANDRA-8356:
--

Looks like another (hopefully the last one this time) gift from CASSANDRA-3237.

Can you try to reproduce with regular, non-counter columns, but otherwise same 
schema/data? I don't think this is going to be counters specific.

Thanks. 
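A hypothetical non-counter variant of the reporter's schema for such a repro 
might look like the following (the table name is invented; everything else 
mirrors the schema quoted below):

{noformat}
-- Same key/clustering layout, plain bigint value instead of a counter.
CREATE TABLE theme_view_plain (
  key bigint,
  column1 varint,
  column2 text,
  value bigint,
  PRIMARY KEY ((key), column1, column2)
) WITH COMPACT STORAGE;

SELECT * FROM theme_view_plain WHERE key = 99421 AND column1 = -12 LIMIT 10;
{noformat}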

 Slice query on a super column family with counters doesn't get all the data
 ---

 Key: CASSANDRA-8356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
 Project: Cassandra
  Issue Type: Bug
Reporter: Nicolas Lalevée
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12


 We've finally been able to upgrade our cluster to 2.0.11, after 
 CASSANDRA-7188 was fixed.
 But now slice queries on a super column family with counters don't return 
 all the expected data. Given all the trouble we had, we first thought we had 
 lost data, but there is a way to actually get the data, so nothing is lost; 
 it's just that Cassandra seems to incorrectly skip it.
 See the following CQL log:
 {noformat}
 cqlsh:Theme desc table theme_view;
 CREATE TABLE theme_view (
   key bigint,
   column1 varint,
   column2 text,
   value counter,
   PRIMARY KEY ((key), column1, column2)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=1.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:Theme select * from theme_view where key = 99421 limit 10;
  key   | column1 | column2    | value
 -------+---------+------------+-------
  99421 |     -12 | 2011-03-25 |    59
  99421 |     -12 | 2011-03-26 |     5
  99421 |     -12 | 2011-03-27 |     2
  99421 |     -12 | 2011-03-28 |    40
  99421 |     -12 | 2011-03-29 |    14
  99421 |     -12 | 2011-03-30 |    17
  99421 |     -12 | 2011-03-31 |     5
  99421 |     -12 | 2011-04-01 |    37
  99421 |     -12 | 2011-04-02 |     7
  99421 |     -12 | 2011-04-03 |     4
 (10 rows)
 cqlsh:Theme select * from theme_view where key = 99421 and column1 = -12 
 limit 10;
  key   | column1 | column2    | value
 -------+---------+------------+-------
  99421 |     -12 | 2011-03-25 |    59
  99421 |     -12 | 2014-05-06 |    15
  99421 |     -12 | 2014-06-06 |     7
  99421 |     -12 | 2014-06-10 |    22
  99421 |     -12 | 2014-06-11 |    34
  99421 |     -12 | 2014-06-12 |    35
  99421 |     -12 | 2014-06-13 |    26
  99421 |     -12 | 2014-06-14 |    16
  99421 |     -12 | 2014-06-15 |    24
  99421 |     -12 | 2014-06-16 |    25
 (10 rows)
 {noformat}
 As you can see, the second query should return data from 2012, but it does 
 not. Via Thrift, we see the exact same bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2014-11-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221646#comment-14221646
 ] 

Tyler Hobbs commented on CASSANDRA-6717:


{quote}
Actually, I think we should store the clustering order in 
`system_schema.tables` instead. That way we wouldn't need the `is_reversed` 
boolean or the `component_index`.

Would've allowed us to get rid of `component_index` too, if not for the 
composite partition key columns.
{quote}

I'm sure I'm missing something, but why can't we handle partition key columns 
the same way as clustering columns?

 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0


 There are a few problems/improvements that can be addressed in the way we 
 store schema:
 # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
 redundant (or almost: we'd need to store whether the table is COMPACT or not 
 too, which we don't currently, but that is easy and probably a good idea 
 anyway); it can be entirely reconstructed from the infos in schema_columns 
 (the same is true of key_validator and subcomparator, and replacing 
 default_validator by a COMPACT_VALUE column in all cases is relatively 
 simple). And storing the comparator as an opaque string broke concurrent 
 updates of sub-parts of said comparator (typically concurrent collection 
 additions, or altering 2 separate clustering columns), so it's really worth 
 removing it.
 # CASSANDRA-4603: it's time to get rid of those ugly JSON maps. I'll note 
 that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
 think we should fix it once and for all nonetheless (see below).
 # For CASSANDRA-6382 and to allow indexing both map keys and values at the 
 same time, we'd need to be able to have more than one index definition for a 
 given column.
 # There are a few mismatches in table options between the ones stored in the 
 schema and the ones used when declaring/altering a table which would be nice 
 to fix. The compaction, compression and replication maps are ones already 
 mentioned in CASSANDRA-4603, but also, for some reason, 
 'dclocal_read_repair_chance' in CQL is called just 'local_read_repair_chance' 
 in the schema table, and 'min/max_compaction_threshold' are column family 
 options in the schema but just compaction options in CQL (which makes more 
 sense).
 None of those issues are major, and we could probably deal with them 
 independently, but it might be simpler to just fix them all in one shot, so I 
 wanted to sum them all up here. In particular, the fact that 
 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
 but it may limit future stuff too), which suggests we should migrate it to a 
 new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
 to rename schema_columnfamilies to schema_tables for the years to come, since 
 that's the preferred vernacular for CQL.
 Overall, what I would suggest is to move all schema tables to a new keyspace, 
 named 'schema' for instance (or 'system_schema', but I prefer the shorter 
 version), and fix all the issues above at once. Since we currently don't 
 exchange schema between nodes of different versions, all we'd need to do is a 
 one-shot startup migration, and overall, I think it could be simpler for 
 clients to deal with one clear migration than to have to handle minor 
 individual changes all over the place. I also think it's somewhat cleaner 
 conceptually to have schema tables in their own keyspace, since they are 
 replicated through a different mechanism than other system tables.
 If we do that, we could, for instance, migrate to the following schema tables 
 (details up for discussion of course):
 {noformat}
 CREATE TYPE user_type (
   name text,
   column_names list<text>,
   column_types list<text>
 )
 CREATE TABLE keyspaces (
   name text PRIMARY KEY,
   durable_writes boolean,
   replication map<string, string>,
   user_types map<string, user_type>
 )
 CREATE TYPE trigger_definition (
   name text,
   options map<text, text>
 )
 CREATE TABLE tables (
   keyspace text,
   name text,
   id uuid,
   table_type text, // COMPACT, CQL or SUPER
   dropped_columns map<text, bigint>,
   triggers map<text, trigger_definition>,
   // options
   comment text,
   compaction map<text, text>,
   compression map<text, text>,
   read_repair_chance double,
   dclocal_read_repair_chance double,
   gc_grace_seconds int,
   caching text,
   rows_per_partition_to_cache text,
   default_time_to_live int,
   min_index_interval int,
   max_index_interval int,
   speculative_retry text,
   populate_io_cache_on_flush 
