[jira] [Created] (CASSANDRA-4006) Add truncate_timeout configuration setting

2012-03-06 Thread Aaron Morton (Created) (JIRA)
Add truncate_timeout configuration setting
--

 Key: CASSANDRA-4006
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4006
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.8
Reporter: Aaron Morton
Priority: Minor


http://www.mail-archive.com/user@cassandra.apache.org/msg20877.html

From TruncateResponseHandler:

{code:java}
long timeout = DatabaseDescriptor.getRpcTimeout() - (System.currentTimeMillis() - startTime);
boolean success;
try
{
    success = condition.await(timeout, TimeUnit.MILLISECONDS); // TODO truncate needs a much longer timeout
}
{code}
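A fix could thread a dedicated setting through the same remaining-time arithmetic the handler already uses. A minimal sketch, assuming a hypothetical truncate_timeout_in_ms setting (the getter name and value below are made up; 1.0.x has no such setting):

```java
public class TruncateTimeoutSketch {
    // Hypothetical value of a truncate_timeout_in_ms setting; truncate must
    // flush and snapshot on every replica, so it needs far longer than rpc_timeout.
    static long getTruncateTimeoutMillis() {
        return 60_000L;
    }

    // Same arithmetic as TruncateResponseHandler, but against the longer budget:
    // subtract time already spent so the await() covers only the remainder.
    static long remainingMillis(long startTime, long now) {
        return getTruncateTimeoutMillis() - (now - startTime);
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        System.out.println(remainingMillis(start, start + 10_000));
    }
}
```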



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-3985) Always ensure enough space for Compaction

2012-03-01 Thread Aaron Morton (Created) (JIRA)
Always ensure enough space for Compaction
-

 Key: CASSANDRA-3985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3985
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor


From http://www.mail-archive.com/user@cassandra.apache.org/msg20757.html

CompactionTask.execute() checks for a valid compactionFileLocation only when partialCompactionsAcceptable(). upgradesstables results in a CompactionTask with userdefined set, so the valid-location check is not performed.

The result is an NPE; partial stack trace:

{code:java}
$ nodetool -h localhost upgradesstables
Error occured while upgrading the sstables for keyspace MyKeySpace
java.util.concurrent.ExecutionException: java.lang.NullPointerException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:203)
    at org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:219)
    at org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:995)
    at org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1648)
snip
Caused by: java.lang.NullPointerException
    at java.io.File.<init>(File.java:222)
    at org.apache.cassandra.db.ColumnFamilyStore.getTempSSTablePath(ColumnFamilyStore.java:641)
    at org.apache.cassandra.db.ColumnFamilyStore.getTempSSTablePath(ColumnFamilyStore.java:652)
    at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1888)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:151)
    at org.apache.cassandra.db.compaction.CompactionManager$4.perform(CompactionManager.java:229)
    at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:182)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
{code}

(night time here, will fix tomorrow, anyone else feel free to fix it.)
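The shape of the missing check can be sketched like this (the names and the free-space model are illustrative toy code, not the actual ColumnFamilyStore API): validate the chosen location before constructing a path, even when userdefined is set:

```java
import java.io.IOException;

public class CompactionLocationCheck {
    // Toy stand-in for picking a data directory with enough free space;
    // returns null when no directory fits, as the real lookup can.
    static String locationFor(long neededBytes, String[] dirs, long[] freeBytes) {
        for (int i = 0; i < dirs.length; i++)
            if (freeBytes[i] >= neededBytes)
                return dirs[i];
        return null;
    }

    // The guard that user-defined compactions (e.g. upgradesstables) currently
    // skip: fail loudly instead of passing null into new File(...).
    static String requireLocation(long neededBytes, String[] dirs, long[] freeBytes) throws IOException {
        String location = locationFor(neededBytes, dirs, freeBytes);
        if (location == null)
            throw new IOException("insufficient disk space to compact " + neededBytes + " bytes");
        return location;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(requireLocation(10, new String[]{"/d1", "/d2"}, new long[]{5, 20}));
    }
}
```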





[jira] [Created] (CASSANDRA-3692) Always use microsecond timestamps in the System Table

2012-01-03 Thread Aaron Morton (Created) (JIRA)
Always use microsecond timestamps in the System Table
-

 Key: CASSANDRA-3692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.6
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor


Code in o.a.c.db.SystemTable uses a mix of milliseconds, microseconds, and 0 for column timestamps; it should use microseconds consistently.
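For reference, the conventional microsecond column timestamp is just the millisecond clock scaled by 1000. A sketch (the class and helper name are made up):

```java
public class MicrosecondClock {
    // Microsecond-precision column timestamp derived from the millisecond clock.
    // Mixing this with raw millisecond values (or 0) makes later writes lose to
    // earlier ones in timestamp comparison, hence one unit throughout.
    static long fromMillis(long millis) {
        return millis * 1000;
    }

    public static void main(String[] args) {
        System.out.println(fromMillis(System.currentTimeMillis()));
    }
}
```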





[jira] [Created] (CASSANDRA-3654) Warn when the stored Gossip Generation is from the future

2011-12-20 Thread Aaron Morton (Created) (JIRA)
Warn when the stored Gossip Generation is from the future
-

 Key: CASSANDRA-3654
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3654
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.6
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Trivial


I had a case where the server was first started with the system clock set far in the future, so the gossip generation was initialized with a very high value (background: http://thelastpickle.com/2011/12/15/Anatomy-of-a-Cassandra-Partition/).

There were some other issues at play, but a log message warning of the unusually high generation would have helped.
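The check could be as simple as comparing the stored generation (seconds since the epoch at first startup) with the current clock. A sketch with invented names:

```java
public class GossipGenerationCheck {
    // Gossip generation is seeded from the wall clock in seconds, so a stored
    // generation greater than "now" means the node previously ran with a clock
    // set in the future; worth a WARN at startup.
    static boolean isFromFuture(int storedGeneration, long nowSeconds) {
        return storedGeneration > nowSeconds;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis() / 1000;
        if (isFromFuture((int) Math.min(Integer.MAX_VALUE, now + 86_400), now))
            System.out.println("WARN: stored gossip generation is ahead of the system clock");
    }
}
```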





[jira] [Created] (CASSANDRA-3548) NPE in AntiEntropyService$RepairSession.completed()

2011-12-01 Thread Aaron Morton (Created) (JIRA)
NPE in AntiEntropyService$RepairSession.completed()
---

 Key: CASSANDRA-3548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3548
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.1
 Environment: FreeBSD 8.2, JVM vendor/version: OpenJDK 64-Bit Server VM/1.6.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor


This may be related to CASSANDRA-3519 (the cluster it was observed on is still 1.0.1); however, I think there is still a race condition.

Observed on a 2 DC cluster, during a repair that spanned the DCs.
{noformat}
INFO [AntiEntropyStage:1] 2011-11-28 06:22:56,225 StreamingRepairTask.java (line 136) [streaming task #69187510-1989-11e1--5ff37d368cb6] Forwarding streaming repair of 8602 ranges to /10.6.130.70 (to be streamed with /10.37.114.10)
...
 INFO [AntiEntropyStage:66] 2011-11-29 11:20:57,109 StreamingRepairTask.java (line 253) [streaming task #69187510-1989-11e1--5ff37d368cb6] task succeeded
ERROR [AntiEntropyStage:66] 2011-11-29 11:20:57,109 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[AntiEntropyStage:66,5,main]
java.lang.NullPointerException
    at org.apache.cassandra.service.AntiEntropyService$RepairSession.completed(AntiEntropyService.java:712)
    at org.apache.cassandra.service.AntiEntropyService$RepairSession$Differencer$1.run(AntiEntropyService.java:912)
    at org.apache.cassandra.streaming.StreamingRepairTask$2.run(StreamingRepairTask.java:186)
    at org.apache.cassandra.streaming.StreamingRepairTask$StreamingRepairResponse.doVerb(StreamingRepairTask.java:255)
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
{noformat}

One of the nodes involved in the repair session failed, e.g. (not sure if this is from the same repair session as the streaming task above, but it illustrates the issue):

{noformat}
ERROR [AntiEntropySessions:1] 2011-11-28 19:39:52,507 AntiEntropyService.java (line 688) [repair #2bf19860-197f-11e1--5ff37d368cb6] session completed with the following error
java.io.IOException: Endpoint /10.29.60.10 died
    at org.apache.cassandra.service.AntiEntropyService$RepairSession.failedNode(AntiEntropyService.java:725)
    at org.apache.cassandra.service.AntiEntropyService$RepairSession.convict(AntiEntropyService.java:762)
    at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:192)
    at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:559)
    at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:62)
    at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:167)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
ERROR [GossipTasks:1] 2011-11-28 19:39:52,507 StreamOutSession.java (line 232) StreamOutSession /10.29.60.10 failed because {} died or was restarted/removed
ERROR [GossipTasks:1] 2011-11-28 19:39:52,571 Gossiper.java (line 172) Gossip error
java.util.ConcurrentModificationException
    at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:782)
    at java.util.ArrayList$Itr.next(ArrayList.java:754)
    at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:190)
    at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:559)
    at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:62)
    at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:167)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    ...
{noformat}

[jira] [Created] (CASSANDRA-3519) ConcurrentModificationException in FailureDetector

2011-11-21 Thread Aaron Morton (Created) (JIRA)
ConcurrentModificationException in FailureDetector
--

 Key: CASSANDRA-3519
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3519
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.1
 Environment: FreeBSD 8.2
$ java -version
java version "1.6.0_07"
Diablo Java(TM) SE Runtime Environment (build 1.6.0_07-b02)
Diablo Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)


Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor


Noticed in a 2 DC cluster; the error was on a node in DC 2 streaming to a node in DC 1.

{code:java}
 INFO [GossipTasks:1] 2011-11-20 18:36:05,153 Gossiper.java (line 759) InetAddress /10.6.130.70 is now dead.
ERROR [GossipTasks:1] 2011-11-20 18:36:25,252 StreamOutSession.java (line 232) StreamOutSession /10.6.130.70 failed because {} died or was restarted/removed
ERROR [AntiEntropySessions:21] 2011-11-20 18:36:25,252 AntiEntropyService.java (line 688) [repair #7fb5b1b0-11f1-11e1--baed0a2090fe] session completed with the following error
java.io.IOException: Endpoint /10.6.130.70 died
    at org.apache.cassandra.service.AntiEntropyService$RepairSession.failedNode(AntiEntropyService.java:725)
    at org.apache.cassandra.service.AntiEntropyService$RepairSession.convict(AntiEntropyService.java:762)
    at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:192)
    at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:559)
    at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:62)
    at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:167)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:619)
ERROR [GossipTasks:1] 2011-11-20 18:36:25,256 Gossiper.java (line 172) Gossip error
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
    at java.util.AbstractList$Itr.next(AbstractList.java:343)
    at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:190)
    at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:559)
    at org.apache.cassandra.gms.Gossiper.access$700(Gossiper.java:62)
    at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:167)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:619)
ERROR [AntiEntropySessions:21] 2011-11-20 18:36:25,256 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[AntiEntropySessions:21,5,RMI Runtime]
java.lang.RuntimeException: java.io.IOException: Endpoint /10.6.130.70 died
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Endpoint /10.6.130.70 died
    ...
{code}

[jira] [Created] (CASSANDRA-3510) Incorrect query results due to invalid SSTable.maxTimestamp

2011-11-20 Thread Aaron Morton (Created) (JIRA)
Incorrect query results due to invalid SSTable.maxTimestamp
---

 Key: CASSANDRA-3510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3510
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.3
Reporter: Aaron Morton
Priority: Critical



Related to CASSANDRA-3446.

(Sorry this is so long; it took me a bit to work through it all, and there is a lot of new code :) )
 
h1. Summary

SSTable.maxTimestamp for files created before 1.0 defaults to Long.MIN_VALUE, 
and this means the wrong data is returned from queries. 
 
h1. Details

Noticed on a cluster that was upgraded from 0.8.X to 1.X; it then had trouble similar to CASSANDRA-3446. It was rolled back to 0.8 and then migrated to 1.0.3.

4 Node cluster, all files upgraded to hb format. 

In a super CF there are situations where a get for a sub column returns a different value than a get for the whole column, e.g.:

{noformat}
[default@XXX] get Users[ascii('username')]['meta']['password'];
=> (column=password, value=3130323130343130, timestamp=1307352647576000)

[default@XX] get Users[ascii('username')]['meta'];
(snip)
=> (column=password, value=3034323131303034, timestamp=1319563673493000)
{noformat}

The correct value is the second one. 

I added logging after line 109 in o.a.c.db.CollationController.collectTimeOrderedData() to log the sstable name and the file's max timestamp; this is what I got:

{code:java}
for (SSTableReader sstable : view.sstables)
{
    long currentMaxTs = sstable.getMaxTimestamp();
    logger.debug(String.format("Got sstable %s and max TS %d", sstable, currentMaxTs));
    reduceNameFilter(reducedFilter, container, currentMaxTs);
{code}

{noformat}
DEBUG 14:08:46,012 Got sstable SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12348-Data.db') and max TS 1321824847534000
DEBUG 14:08:47,231 Got sstable SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12346-Data.db') and max TS 1321813380793000
DEBUG 14:08:49,879 Got sstable SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12330-Data.db') and max TS -9223372036854775808
DEBUG 14:08:49,880 Got sstable SSTableReader(path='/var/lib/cassandra/data/X/Users-hb-12325-Data.db') and max TS -9223372036854775808
{noformat}

The key I was reading is present in files 12330 and 12325; the first contains the *old / wrong* value with timestamp 1307352647576000 above, and the second contains the *new / correct* value with timestamp 1319563673493000.

When CollationController.collectTimeOrderedData() processes the 12325 file (after processing the 12330 file) while looping over the sstables, the call to reduceNameFilter() removes the column from the filter, because the column read from the 12330 file has a timestamp of 1307352647576000 and the 12325 file incorrectly has a max timestamp of -9223372036854775808.

SSTableMetadata is reading the max timestamp from the stats file, but it is Long.MIN_VALUE. I think this happens because scrub creates the SSTableWriter using cfs.createCompactionWriter(), which sets the maxTimestamp in the metadata collector according to the maxTimestamp in the metadata for the file(s) that will be scrubbed / compacted. But for pre-1.0 format files the default in SSTableMetadata is Long.MIN_VALUE (see SSTableMetadata.deserialize() and the ctor). So scrubbing a pre-1.0 file will write stats files that have maxTimestamp as Long.MIN_VALUE.

During scrubbing the SSTableWriter does not update the maxTimestamp, because append(AbstractCompactedRow) is called, which expects that cfs.createCompactionWriter() was able to set the correct maxTimestamp on the metadata. Compaction also uses append(AbstractCompactedRow), so it may create an SSTable with an incorrect maxTimestamp if one of the input files started life as a pre-1.0 file and has a bad maxTimestamp.

It looks like the only time the maxTimestamp is calculated is when the SSTable 
is originally written. So the error from the old files will be carried along. 

e.g. if files a, b, and c have maxTimestamps 10, 100, and Long.MIN_VALUE, compaction will write an SSTable with maxTimestamp 100. However, file c may actually contain columns with a timestamp > 100, which will be in the compacted file.
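The carried-forward error in that example comes down to taking the max over the inputs' recorded values. A toy sketch (not the actual compaction metadata collector):

```java
public class MaxTimestampMerge {
    // The output sstable's maxTimestamp is the max of the inputs' recorded
    // values, so a Long.MIN_VALUE sentinel from a pre-1.0 file silently
    // understates the true maximum of the data actually written.
    static long mergedMaxTimestamp(long... recorded) {
        long max = Long.MIN_VALUE;
        for (long ts : recorded)
            max = Math.max(max, ts);
        return max;
    }

    public static void main(String[] args) {
        // Files a, b, c from the example above: c's real contents may exceed
        // 100, but the merged metadata claims 100.
        System.out.println(mergedMaxTimestamp(10, 100, Long.MIN_VALUE));
    }
}
```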

h1. Reproduce

1. Start a clean 0.8.7

2. Add a schema (details of the schema do not matter):
{noformat}
[default@unknown] create keyspace dev;   
5f834620-140b-11e1--242d50cf1fdf
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown] 
[default@unknown] use dev;
Authenticated to keyspace: dev
[default@dev] 
[default@dev] create column family super_dev with column_type = 'Super' 
... and key_validation_class = 'AsciiType' and comparator = 'AsciiType' and 
... subcomparator = 'AsciiType' and default_validation_class = 'AsciiType';
60490720-140b-11e1--242d50cf1fdf
Waiting for 
{noformat}

[jira] [Created] (CASSANDRA-3391) CFM.toAvro() incorrectly serialises key_validation_class defn

2011-10-20 Thread Aaron Morton (Created) (JIRA)
CFM.toAvro() incorrectly serialises key_validation_class defn
-

 Key: CASSANDRA-3391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Aaron Morton
Assignee: Aaron Morton
Priority: Minor


see http://www.mail-archive.com/user@cassandra.apache.org/msg18132.html

Repro with:

{code}
create keyspace Stats with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options={replication_factor:1};

use Stats;

create column family Sample_Stats with default_validation_class=CounterColumnType
and key_validation_class='CompositeType(UTF8Type,UTF8Type)'
and comparator='CompositeType(UTF8Type, UTF8Type)'
and replicate_on_write=true;

[default@Stats] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions: 
	1d39bbf0-fb60-11e0--242d50cf1ffd: [127.0.0.1]
{code}

Stop and restart the node

{code:java}
ERROR 10:12:22,729 Exception encountered during startup
java.lang.RuntimeException: Could not inflate CFMetaData for {keyspace: Stats, name: Sample_Stats, column_type: Standard, comparator_type: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type), subcomparator_type: null, comment: , row_cache_size: 0.0, key_cache_size: 20.0, read_repair_chance: 1.0, replicate_on_write: true, gc_grace_seconds: 864000, default_validation_class: org.apache.cassandra.db.marshal.CounterColumnType, key_validation_class: org.apache.cassandra.db.marshal.CompositeType, min_compaction_threshold: 4, max_compaction_threshold: 32, row_cache_save_period_in_seconds: 0, key_cache_save_period_in_seconds: 14400, row_cache_keys_to_save: 2147483647, merge_shards_chance: 0.1, id: 1000, column_metadata: [], row_cache_provider: org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider, key_alias: null, compaction_strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, compaction_strategy_options: {}, compression_options: {}}
    at org.apache.cassandra.config.CFMetaData.fromAvro(CFMetaData.java:362)
    at org.apache.cassandra.config.KSMetaData.fromAvro(KSMetaData.java:193)
    at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:99)
    at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:502)
    at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:161)
    at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:337)
    at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:106)
Caused by: org.apache.cassandra.config.ConfigurationException: Invalid definition for comparator org.apache.cassandra.db.marshal.CompositeType.
    at org.apache.cassandra.db.marshal.TypeParser.getRawAbstractType(TypeParser.java:319)
    at org.apache.cassandra.db.marshal.TypeParser.getAbstractType(TypeParser.java:247)
    at org.apache.cassandra.db.marshal.TypeParser.parse(TypeParser.java:83)
    at org.apache.cassandra.db.marshal.TypeParser.parse(TypeParser.java:92)
    at org.apache.cassandra.config.CFMetaData.fromAvro(CFMetaData.java:358)
    ... 6 more
Caused by: org.apache.cassandra.config.ConfigurationException: Nonsensical empty parameter list for CompositeType
    at org.apache.cassandra.db.marshal.CompositeType.getInstance(CompositeType.java:67)
    at org.apache.cassandra.db.marshal.CompositeType.getInstance(CompositeType.java:61)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.cassandra.db.marshal.TypeParser.getRawAbstractType(TypeParser.java:307)
    ... 10 more
{code}
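The failing round trip comes down to serialising a parameterised type by class name alone, which drops the parameter list the TypeParser needs on reload. A toy illustration (not the actual CFMetaData/Avro code):

```java
public class TypeNameRoundTrip {
    // A parameterised key validator like the one in the repro above.
    static final String FULL = "org.apache.cassandra.db.marshal.CompositeType(UTF8Type,UTF8Type)";

    // The buggy serialisation keeps only the class name, losing the parameters;
    // on restart the parser then sees an empty parameter list and rejects it.
    static String classNameOnly(String typeDefinition) {
        int paren = typeDefinition.indexOf('(');
        return paren < 0 ? typeDefinition : typeDefinition.substring(0, paren);
    }

    public static void main(String[] args) {
        System.out.println(classNameOnly(FULL));
    }
}
```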

Will post the patch in a minute. 
