[jira] [Updated] (CASSANDRA-6495) LOCAL_SERIAL use QUORUM consistency level to validate expected columns

2014-01-14 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-6495:
-

Attachment: trunk_6495.diff

 LOCAL_SERIAL use QUORUM consistency level to validate expected columns
 ---

 Key: CASSANDRA-6495
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6495
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: trunk_6495.diff


 If CAS is done at the LOCAL_SERIAL consistency level, only nodes from the 
 local data center should be involved. 
 Here we are using QUORUM to validate the expected columns, which requires 
 nodes from more than one DC. 
 We should use LOCAL_QUORUM here when CAS is done at LOCAL_SERIAL. 
 Also, if we have 2 DCs with DC1:3,DC2:3, a single DC being down will cause 
 CAS to fail even for LOCAL_SERIAL. 
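The fix described above amounts to deriving the validation read level from the serial level of the CAS operation. A minimal, hedged sketch of that rule (illustrative names only, not Cassandra's actual code):

```java
// Hedged sketch of the proposed fix: pick the consistency level used to
// validate the expected columns from the serial level of the CAS operation.
public class CasConsistency
{
    enum ConsistencyLevel { SERIAL, LOCAL_SERIAL, QUORUM, LOCAL_QUORUM }

    // LOCAL_SERIAL CAS should stay within the local DC, so validate with
    // LOCAL_QUORUM; plain SERIAL keeps the cross-DC QUORUM read.
    static ConsistencyLevel readLevelFor(ConsistencyLevel serialCl)
    {
        return serialCl == ConsistencyLevel.LOCAL_SERIAL
             ? ConsistencyLevel.LOCAL_QUORUM
             : ConsistencyLevel.QUORUM;
    }
}
```

With this rule, losing one of two DCs (DC1:3,DC2:3) no longer blocks LOCAL_SERIAL CAS in the surviving DC, since LOCAL_QUORUM needs only local replicas.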



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5357) Query cache / partition head cache

2014-01-14 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870521#comment-13870521
 ] 

Sylvain Lebresne commented on CASSANDRA-5357:
-

bq. If you have a single row per partition, how much of the table you cache is 
purely a function of cache size.

If that was related to my remark above, I don't think I understood that 
sentence, sorry.

 Query cache / partition head cache
 --

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1

 Attachments: 0001-Cache-a-configurable-amount-of-columns-v2.patch, 
 0001-Cache-a-configurable-amount-of-columns.patch


 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6498) Null pointer exception in custom secondary indexes

2014-01-14 Thread Miguel Angel Fernandez Diaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miguel Angel Fernandez Diaz updated CASSANDRA-6498:
---

Attachment: CASSANDRA-6498.patch

In order to avoid this null pointer exception, we shouldn't assume that 
highestSelectivityIndex (which is a SecondaryIndex) has an IndexCfs, because 
that depends on the implementation type.

Therefore, a clean way to solve this issue would be to include an abstract 
method in the SecondaryIndex class, add an implementation of the method where 
we know there is an IndexCfs, and otherwise delegate the implementation of 
this method to those who are creating a custom 2i.

I am submitting a patch that implements this solution.
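The shape of the problem can be sketched as a null-safe estimate; this is an illustration of the idea only (Supplier stands in for SecondaryIndex#getIndexCfs()#getMeanColumns(), which the documentation says may be null):

```java
import java.util.function.Supplier;

public class IndexEstimate
{
    // Null-safe wrapper around the mean-columns estimate: custom 2i
    // implementations may return null where CF-backed ones return a value,
    // so fall back to a default estimate instead of throwing an NPE.
    static long estimateResultRows(Supplier<Long> indexCfsMeanColumns, long fallback)
    {
        Long mean = indexCfsMeanColumns.get();
        return mean != null ? mean : fallback;
    }
}
```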

 Null pointer exception in custom secondary indexes
 --

 Key: CASSANDRA-6498
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6498
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrés de la Peña
  Labels: 2i, secondaryIndex, secondary_index
 Attachments: CASSANDRA-6498.patch


 StorageProxy#estimateResultRowsPerRange raises a null pointer exception when 
 using a custom 2i implementation that does not use a column family as 
 underlying storage:
 {code}
 resultRowsPerRange = highestSelectivityIndex.getIndexCfs().getMeanColumns();
 {code}
 According to the documentation, the method SecondaryIndex#getIndexCfs should 
 return null when no column family is used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (CASSANDRA-6579) LIMIT 1 fails while doing a select with index field in where clause

2014-01-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6579.
-

   Resolution: Duplicate
Fix Version/s: (was: 2.0.4)
   2.0.5

Thanks for the report. Resolving as a duplicate, however, since the patch from 
CASSANDRA-6555 fixes this issue too. I did push the repro code from the 
description as a dtest, though, as more tests never hurt. 

 LIMIT 1 fails while doing a select with index field in where clause
 ---

 Key: CASSANDRA-6579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6579
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: UBUNTU 12.04, single node
Reporter: David Morales
Assignee: Sylvain Lebresne
 Fix For: 2.0.5


 create table test(field1 text, field2 timeuuid, field3 boolean, primary 
 key(field1, field2));
 create index test_index on test(field3);
 insert into test(field1, field2, field3) values ('hola', now(), false);
 insert into test(field1, field2, field3) values ('hola', now(), false);
 Now doing a select:
 select count(*) from test where field3 = false limit 1;
 will result in this exception:
 java.lang.IllegalArgumentException: fromIndex(0) > toIndex(-1)
 at java.util.ArrayList.subListRangeCheck(ArrayList.java:924)
 at java.util.ArrayList.subList(ArrayList.java:914)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.discardLast(AbstractQueryPager.java:243)
 at 
 org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:86)
 at 
 org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:36)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:202)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:169)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:58)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:212)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1958)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
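The top of the stack trace can be reproduced in isolation: AbstractQueryPager.discardLast effectively calls subList(0, size - 1), which on an empty page becomes subList(0, -1). A minimal demonstration (not Cassandra code):

```java
import java.util.ArrayList;
import java.util.List;

public class SubListBug
{
    // On an empty result page, subList(0, size - 1) is subList(0, -1), and
    // ArrayList's range check throws
    // IllegalArgumentException("fromIndex(0) > toIndex(-1)").
    static boolean discardLastThrowsOnEmpty()
    {
        List<Integer> rows = new ArrayList<>();
        try
        {
            rows.subList(0, rows.size() - 1);
            return false;
        }
        catch (IllegalArgumentException e)
        {
            return true;
        }
    }
}
```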



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files keept open / can't be deleted after compaction.

2014-01-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870606#comment-13870606
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

Hello,

this problem occurred when the node was not shut down properly. As far as I 
know, the issue is known as CASSANDRA-6531. Here is the stack trace:
{panel:title=system.log}
ERROR [ReadStage:2385] 2014-01-14 10:57:11,875 CassandraDaemon.java (line 187) 
Exception in thread Thread[ReadStage:2385,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException: bufferSize must 
be positive
at 
org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:49)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalArgumentException: bufferSize must be positive
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:75)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.init(CompressedRandomAccessReader.java:76)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:57)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1516)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1335)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:245)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:105)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1710)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1698)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
... 4 more
ERROR [Finalizer] 2014-01-14 10:57:12,005 RandomAccessReader.java (line 398) 
LEAK finalizer had to clean up 
java.lang.Exception: RAR for 
D:\Programme\cassandra\data\nieste\niesteinverters\nieste-niesteinverters-jb-2669-Data.db
 allocated
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:66)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.init(CompressedRandomAccessReader.java:76)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:57)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 

[jira] [Updated] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6271:


Attachment: tmp.patch

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: oprate.svg, tmp.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.
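The overlay-vs-rewrite decision described above can be sketched as a simple predicate; the 10% thresholds come from the description, and all names here are illustrative:

```java
public class OverlayHeuristic
{
    // Hedged sketch of the proposed heuristic: overlay small change sets on
    // top of the existing sorted array, rewrite the array for larger ones.
    static boolean shouldOverlay(int changedKeys, int totalKeys,
                                 long changedBytes, long keyArrayBytes)
    {
        return changedKeys < 0.10 * totalKeys
            && changedBytes < 0.10 * keyArrayBytes;
    }
}
```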



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Reopened] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reopened CASSANDRA-6271:
-

Reproduced In: 2.1
Since Version: 2.1

RecoveryManagerTest is failing due to a typo in 
AtomicBTreeColumns.ColumnUpdater. Attaching trivial fix.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: oprate.svg, tmp.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (CASSANDRA-4511) Secondary index support for CQL3 collections

2014-01-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-4511.
-

Resolution: Fixed

This is a regression from CASSANDRA-6271, going to follow up there but it's not 
particularly specific to collection indexing so re-closing this.

 Secondary index support for CQL3 collections 
 -

 Key: CASSANDRA-4511
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4511
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 4511.txt


 We should allow secondary ("2ndary") indexing on collections. A typical use 
 case would be to add a 'tag set<String>' to, say, a user profile and to 
 query users based on what tag they have.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6271:


Attachment: 0001-Always-call-ReplaceFunction.txt

Another problem is that the ColumnUpdater is not called for the first insert in 
a partition. This breaks some dtests (and is the reason for the last comments 
on CASSANDRA-4511).

Attaching a simple patch that fixes that. I'll note that this patch also 
includes the tmp.patch fix, as it happens.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 0001-Always-call-ReplaceFunction.txt, oprate.svg, 
 tmp.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870696#comment-13870696
 ] 

Benedict commented on CASSANDRA-6271:
-

LGTM.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 0001-Always-call-ReplaceFunction.txt, oprate.svg, 
 tmp.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870700#comment-13870700
 ] 

Sylvain Lebresne commented on CASSANDRA-6271:
-

As a side note, it would have been nice to preserve the comment on top of:
{noformat}
if (reconciled == update)
    indexer.update(replaced, reconciled);
else
    indexer.update(update, reconciled);
{noformat}
It's pretty hard to understand what this code is about otherwise (especially 
since the code made no sense in trunk; it's a relic of old times that needs 
to be removed; I will open a separate ticket to remove it in 2.0 while we're 
at it).
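The intent of the quoted snippet can be restated simply: whichever of the two input cells lost reconciliation is the one reported to the indexer as superseded. A hedged restatement with Strings standing in for Cells (not Cassandra's actual API):

```java
public class IndexerUpdateChoice
{
    // If reconciliation kept the incoming update, the previously stored cell
    // is superseded; otherwise the incoming update is the one discarded.
    static String supersededCell(String replaced, String update, String reconciled)
    {
        return reconciled.equals(update) ? replaced : update;
    }
}
```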

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 0001-Always-call-ReplaceFunction.txt, oprate.svg, 
 tmp.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6580) Deadcode in AtomicSortedColumns

2014-01-14 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-6580:
---

 Summary: Deadcode in AtomicSortedColumns
 Key: CASSANDRA-6580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6580
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial


In AtomicSortedColumns we have this:
{noformat}
// for memtable updates we only care about oldcolumn, reconciledcolumn,
// but when compacting we need to make sure we update indexes no matter
// the order we merge
if (reconciledColumn == column)
    indexer.update(oldColumn, reconciledColumn);
else
    indexer.update(column, reconciledColumn);
{noformat}
This makes no sense anymore, however, since AtomicSortedColumns is no longer 
used during compaction (and index removal is dealt with by the CompactedRow 
implementations).

Attaching a trivial patch against 2.0. This affects 1.2 too, and maybe earlier 
(haven't checked), but it's harmless anyway and we probably won't have many 
more releases of pre-2.0 versions. Still, the code is a tad confusing, so it's 
maybe worth cleaning it up in 2.0.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6580) Deadcode in AtomicSortedColumns

2014-01-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6580:


Attachment: 6580.txt

 Deadcode in AtomicSortedColumns
 ---

 Key: CASSANDRA-6580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6580
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Attachments: 6580.txt


 In AtomicSortedColumns we have this:
 {noformat}
 // for memtable updates we only care about oldcolumn, reconciledcolumn,
 // but when compacting we need to make sure we update indexes no matter
 // the order we merge
 if (reconciledColumn == column)
     indexer.update(oldColumn, reconciledColumn);
 else
     indexer.update(column, reconciledColumn);
 {noformat}
 This makes no sense anymore, however, since AtomicSortedColumns is no longer 
 used during compaction (and index removal is dealt with by the 
 CompactedRow implementations).
 Attaching a trivial patch against 2.0. This affects 1.2 too, and maybe 
 earlier (haven't checked), but it's harmless anyway and we probably won't 
 have many more releases of pre-2.0 versions. Still, the code is a tad 
 confusing, so it's maybe worth cleaning it up in 2.0.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6271:


Attachment: tmp2.patch

Added a unit test, which also caught that we had missed the apply(null, insert) 
case in NodeBuilder, so fixed that.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 0001-Always-call-ReplaceFunction.txt, oprate.svg, 
 tmp.patch, tmp2.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: Fixup for 6271

2014-01-14 Thread slebresne
Updated Branches:
  refs/heads/trunk 5edf94842 -> 2fd2d8978


Fixup for 6271


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fd2d897
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fd2d897
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fd2d897

Branch: refs/heads/trunk
Commit: 2fd2d89782b7d6fa36c0cb9c710711af39da35ed
Parents: 5edf948
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 14 14:50:36 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 14 14:50:36 2014 +0100

--
 .../apache/cassandra/db/AtomicBTreeColumns.java | 27 ++--
 .../org/apache/cassandra/utils/btree/BTree.java |  7 -
 .../cassandra/utils/btree/NodeBuilder.java  |  2 +-
 .../cassandra/utils/btree/ReplaceFunction.java  |  7 -
 4 files changed, 26 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fd2d897/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 6fe8758..c6067fb 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -183,24 +183,23 @@ public class AtomicBTreeColumns extends ColumnFamily
 this.indexer = indexer;
 }
 
+public Cell apply(Cell inserted)
+{
+indexer.insert(inserted);
+delta += inserted.dataSize();
+return transform.apply(inserted);
+}
+
 public Cell apply(Cell replaced, Cell update)
 {
-if (replaced == null)
-{
-indexer.insert(update);
-delta += update.dataSize();
-}
+Cell reconciled = update.reconcile(replaced, allocator);
+if (reconciled == update)
+indexer.update(replaced, reconciled);
 else
-{
-Cell reconciled = update.reconcile(replaced, allocator);
-if (reconciled == update)
-indexer.update(replaced, reconciled);
-else
-indexer.update(update, reconciled);
-delta += reconciled.dataSize() - replaced.dataSize();
-}
+indexer.update(update, reconciled);
+delta += reconciled.dataSize() - replaced.dataSize();
 
-return transform.apply(update);
+return transform.apply(reconciled);
 }
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fd2d897/src/java/org/apache/cassandra/utils/btree/BTree.java
--
diff --git a/src/java/org/apache/cassandra/utils/btree/BTree.java 
b/src/java/org/apache/cassandra/utils/btree/BTree.java
index 44f75b1..1721fb0 100644
--- a/src/java/org/apache/cassandra/utils/btree/BTree.java
+++ b/src/java/org/apache/cassandra/utils/btree/BTree.java
@@ -5,6 +5,7 @@ import java.util.Collection;
 import java.util.Comparator;
 
 import com.google.common.base.Function;
+import com.google.common.collect.Collections2;
 
 public class BTree
 {
@@ -128,7 +129,11 @@ public class BTree
   Function<?, Boolean> terminateEarly)
 {
 if (btree.length == 0)
+{
+if (replaceF != null)
+updateWith = Collections2.transform(updateWith, replaceF);
 return build(updateWith, comparator, updateWithIsSorted);
+}
 
 if (!updateWithIsSorted)
 updateWith = sorted(updateWith, comparator, updateWith.size());
@@ -168,7 +173,7 @@ public class BTree
 else if (replaceF != null)
 {
 // new element but still need to apply replaceF to handle 
indexing and size-tracking
-v = replaceF.apply(null, v);
+v = replaceF.apply(v);
 }
 
 merged[mergedCount++] = v;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fd2d897/src/java/org/apache/cassandra/utils/btree/NodeBuilder.java
--
diff --git a/src/java/org/apache/cassandra/utils/btree/NodeBuilder.java 
b/src/java/org/apache/cassandra/utils/btree/NodeBuilder.java
index 5dbe5df..e526394 100644
--- a/src/java/org/apache/cassandra/utils/btree/NodeBuilder.java
+++ b/src/java/org/apache/cassandra/utils/btree/NodeBuilder.java
@@ -231,7 +231,7 @@ final class NodeBuilder
 {
 ensureRoom(buildKeyPosition + 1);
 if (replaceF != null)
-key = replaceF.apply(null, 

[jira] [Updated] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-14 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6271:


Attachment: tmp3.patch

Fixed patch generation and added extra test line to catch the original bug case.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 0001-Always-call-ReplaceFunction.txt, oprate.svg, 
 tmp.patch, tmp2.patch, tmp3.patch


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might 
 be to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
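The copy-on-write sorted-array merge proposed in CASSANDRA-6271 above can be sketched in a few lines. This is a minimal illustration, not Cassandra's implementation: it merges a sorted batch of updates into a new array in one pass, so readers always see an immutable snapshot; class and method names are made up for the example.

```java
import java.util.Arrays;

public class SortedArrayMerge {
    // Merge two sorted int arrays into a fresh array, letting the
    // update's value win on duplicate keys (the "overlay" idea above).
    static int[] merge(int[] current, int[] updates) {
        int[] out = new int[current.length + updates.length];
        int i = 0, j = 0, n = 0;
        while (i < current.length && j < updates.length) {
            if (current[i] < updates[j]) out[n++] = current[i++];
            else if (current[i] > updates[j]) out[n++] = updates[j++];
            else { out[n++] = updates[j++]; i++; } // duplicate: update wins
        }
        while (i < current.length) out[n++] = current[i++];
        while (j < updates.length) out[n++] = updates[j++];
        return Arrays.copyOf(out, n); // trim to the merged length
    }

    public static void main(String[] args) {
        int[] merged = merge(new int[]{1, 3, 5}, new int[]{2, 3, 6});
        System.out.println(Arrays.toString(merged)); // [1, 2, 3, 5, 6]
    }
}
```

Because the merged array is never mutated in place, a reader holding a reference to the old array is unaffected by concurrent updates, which is what makes this shape GC- and concurrency-friendly.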


[jira] [Commented] (CASSANDRA-6580) Deadcode in AtomicSortedColumns

2014-01-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870753#comment-13870753
 ] 

Jonathan Ellis commented on CASSANDRA-6580:
---

+1

 Deadcode in AtomicSortedColumns
 ---

 Key: CASSANDRA-6580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6580
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Attachments: 6580.txt


 In AtomicSortedColumns we have this:
 {noformat}  
 // for memtable updates we only care about oldcolumn, reconciledcolumn, but 
 when compacting
 // we need to make sure we update indexes no matter the order we merge
 if (reconciledColumn == column)
 indexer.update(oldColumn, reconciledColumn);
 else
 indexer.update(column, reconciledColumn);
 {noformat}
 This makes no sense anymore however since AtomicSortedColumns is not used 
 anymore during compaction (and index removal is dealt with by the 
 CompactedRow implementations).
 Attaching trivial patch against 2.0. This affects 1.2 too and maybe earlier 
 (haven't checked), but it's harmless anyway and we probably won't have many 
 releases of pre-2.0 versions anymore. Still, the code is a tad confusing, so 
 it's maybe worth cleaning it up in 2.0.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
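The dead-code removal in CASSANDRA-6580 above reduces the memtable update path to a single unconditional indexer call. A standalone sketch of that path, using illustrative stand-in types (not Cassandra's real Cell/Indexer classes):

```java
public class ReconcileSketch {
    // Stand-in cell: reconcile keeps the value with the newer timestamp.
    record Cell(String name, String value, long timestamp) {
        Cell reconcile(Cell other) {
            return timestamp >= other.timestamp ? this : other;
        }
    }

    interface Indexer { void update(Cell old, Cell reconciled); }

    // Memtable-only path after the patch: always report (old, reconciled),
    // since the compaction-time ordering concern no longer applies here.
    static Cell applyUpdate(Cell existing, Cell update, Indexer indexer) {
        Cell reconciled = update.reconcile(existing);
        indexer.update(existing, reconciled);
        return reconciled;
    }

    public static void main(String[] args) {
        Cell old = new Cell("c", "a", 1);
        Cell upd = new Cell("c", "b", 2);
        Cell result = applyUpdate(old, upd, (o, r) -> {});
        System.out.println(result.value()); // b
    }
}
```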


git commit: Remove dead code

2014-01-14 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 8b8c159f4 -> 0e55e9ff6


Remove dead code

patch by slebresne; reviewed by jbellis for CASSANDRA-6580


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e55e9ff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e55e9ff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e55e9ff

Branch: refs/heads/cassandra-2.0
Commit: 0e55e9ff6695504c4698115d8856620b752cf713
Parents: 8b8c159
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 14 15:34:53 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 14 15:35:34 2014 +0100

--
 src/java/org/apache/cassandra/db/AtomicSortedColumns.java | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e55e9ff/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index b44d8bf..1c0bf1b 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -330,12 +330,7 @@ public class AtomicSortedColumns extends ColumnFamily
 Column reconciledColumn = column.reconcile(oldColumn, 
allocator);
 if (map.replace(name, oldColumn, reconciledColumn))
 {
-// for memtable updates we only care about oldcolumn, 
reconciledcolumn, but when compacting
-// we need to make sure we update indexes no matter the 
order we merge
-if (reconciledColumn == column)
-indexer.update(oldColumn, reconciledColumn);
-else
-indexer.update(column, reconciledColumn);
+indexer.update(oldColumn, reconciledColumn);
 return reconciledColumn.dataSize() - oldColumn.dataSize();
 }
 // We failed to replace column due to a concurrent update or a 
concurrent removal. Keep trying.



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread slebresne
Merge branch 'cassandra-2.0' into trunk

Conflicts:
src/java/org/apache/cassandra/db/AtomicSortedColumns.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d9691e82
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d9691e82
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d9691e82

Branch: refs/heads/trunk
Commit: d9691e823930982d120c2a237c7087245b003f0d
Parents: 2fd2d89 0e55e9f
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 14 15:37:40 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 14 15:37:40 2014 +0100

--

--




[1/2] git commit: Remove dead code

2014-01-14 Thread slebresne
Updated Branches:
  refs/heads/trunk 2fd2d8978 -> d9691e823


Remove dead code

patch by slebresne; reviewed by jbellis for CASSANDRA-6580


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e55e9ff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e55e9ff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e55e9ff

Branch: refs/heads/trunk
Commit: 0e55e9ff6695504c4698115d8856620b752cf713
Parents: 8b8c159
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 14 15:34:53 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 14 15:35:34 2014 +0100

--
 src/java/org/apache/cassandra/db/AtomicSortedColumns.java | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e55e9ff/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index b44d8bf..1c0bf1b 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -330,12 +330,7 @@ public class AtomicSortedColumns extends ColumnFamily
 Column reconciledColumn = column.reconcile(oldColumn, 
allocator);
 if (map.replace(name, oldColumn, reconciledColumn))
 {
-// for memtable updates we only care about oldcolumn, 
reconciledcolumn, but when compacting
-// we need to make sure we update indexes no matter the 
order we merge
-if (reconciledColumn == column)
-indexer.update(oldColumn, reconciledColumn);
-else
-indexer.update(column, reconciledColumn);
+indexer.update(oldColumn, reconciledColumn);
 return reconciledColumn.dataSize() - oldColumn.dataSize();
 }
 // We failed to replace column due to a concurrent update or a 
concurrent removal. Keep trying.



[jira] [Resolved] (CASSANDRA-6580) Deadcode in AtomicSortedColumns

2014-01-14 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6580.
-

   Resolution: Fixed
Fix Version/s: 2.0.5
 Reviewer: Jonathan Ellis

Committed, thanks

 Deadcode in AtomicSortedColumns
 ---

 Key: CASSANDRA-6580
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6580
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 2.0.5

 Attachments: 6580.txt


 In AtomicSortedColumns we have this:
 {noformat}  
 // for memtable updates we only care about oldcolumn, reconciledcolumn, but 
 when compacting
 // we need to make sure we update indexes no matter the order we merge
 if (reconciledColumn == column)
 indexer.update(oldColumn, reconciledColumn);
 else
 indexer.update(column, reconciledColumn);
 {noformat}
 This makes no sense anymore however since AtomicSortedColumns is not used 
 anymore during compaction (and index removal is dealt with by the 
 CompactedRow implementations).
 Attaching trivial patch against 2.0. This affects 1.2 too and maybe earlier 
 (haven't checked), but it's harmless anyway and we probably won't have many 
 releases of pre-2.0 versions anymore. Still, the code is a tad confusing, so 
 it's maybe worth cleaning it up in 2.0.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: Fix previous merge

2014-01-14 Thread slebresne
Updated Branches:
  refs/heads/trunk d9691e823 -> f6f50ddff


Fix previous merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6f50ddf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6f50ddf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6f50ddf

Branch: refs/heads/trunk
Commit: f6f50ddffe0821617fe29482f9ec918608560381
Parents: d9691e8
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 14 15:43:57 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 14 15:43:57 2014 +0100

--
 src/java/org/apache/cassandra/db/AtomicBTreeColumns.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6f50ddf/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index c6067fb..c475a0e 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -193,10 +193,7 @@ public class AtomicBTreeColumns extends ColumnFamily
 public Cell apply(Cell replaced, Cell update)
 {
 Cell reconciled = update.reconcile(replaced, allocator);
-if (reconciled == update)
-indexer.update(replaced, reconciled);
-else
-indexer.update(update, reconciled);
+indexer.update(replaced, reconciled);
 delta += reconciled.dataSize() - replaced.dataSize();
 
 return transform.apply(reconciled);



[jira] [Created] (CASSANDRA-6581) Experiment faster file transfer with UDT

2014-01-14 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-6581:
-

 Summary: Experiment faster file transfer with UDT
 Key: CASSANDRA-6581
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6581
 Project: Cassandra
  Issue Type: New Feature
Reporter: Yuki Morishita
Priority: Minor
 Fix For: 3.0


UDT is a UDP-based data transfer protocol expected to be faster than TCP.
There is a Java library that wraps the C++ UDT lib at 
https://github.com/barchart/barchart-udt.

Experiment to see if we can achieve faster data transfer using the above library.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-01-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870952#comment-13870952
 ] 

Andreas Schnitzerling commented on CASSANDRA-6283:
--

Yesterday I updated one node with 2.0.4-rel incl. finalizer-patch (see results 
above). nodetool repair -par caused the node to repair endlessly, collecting 
about 65K files in the data folder. I updated now to pre-2.0.5 from today (commit 
f6f50ddffe0821617fe29482f9ec918608560381). After starting, a lot of LEAK 
messages and File-Not-Found messages appeared in system.log. But the number of files is going down.
{panel:title=system.log (pre-2.0.5)}
ERROR [SSTableBatchOpen:1] 2014-01-14 18:18:42,753 CassandraDaemon.java:139 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:109)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.buildSummary(SSTableReader.java:595)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:575) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:527) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:328) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:230) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:364) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
~[na:1.7.0_25]
at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_25]
Caused by: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_25]
at java.io.RandomAccessFile.init(Unknown Source) ~[na:1.7.0_25]
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:63)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
... 13 common frames omitted
...
ERROR [Finalizer] 2014-01-14 18:27:45,076 RandomAccessReader.java:401 - LEAK 
finalizer had to clean up 
java.lang.Exception: RAR for 
D:\Programme\cassandra\data\system\compactions_in_progress\system-compactions_in_progress-ka-5012-Statistics.db
 allocated
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:65)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:88)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:98)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:167)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:125)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 

[jira] [Updated] (CASSANDRA-6498) Null pointer exception in custom secondary indexes

2014-01-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6498:
--

Reviewer: Sam Tunnicliffe

WDYT [~beobal]?

 Null pointer exception in custom secondary indexes
 --

 Key: CASSANDRA-6498
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6498
 Project: Cassandra
  Issue Type: Bug
Reporter: Andrés de la Peña
  Labels: 2i, secondaryIndex, secondary_index
 Attachments: CASSANDRA-6498.patch


 StorageProxy#estimateResultRowsPerRange raises a null pointer exception when 
 using a custom 2i implementation that does not use a column family as underlying 
 storage:
 {code}
 resultRowsPerRange = highestSelectivityIndex.getIndexCfs().getMeanColumns();
 {code}
 According to the documentation, the method SecondaryIndex#getIndexCfs should 
 return null when no column family is used.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
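The fix CASSANDRA-6498 asks for amounts to a null check: per the quoted documentation, SecondaryIndex#getIndexCfs may return null for custom indexes with no backing column family, so the caller must guard before dereferencing. A hedged sketch with illustrative stand-in interfaces (not Cassandra's real classes, and the fallback value is an assumption for the example):

```java
public class IndexSelectivitySketch {
    interface IndexCfs { int getMeanColumns(); }
    // Per the 2i contract described above, this may return null when the
    // index has no underlying column family.
    interface SecondaryIndex { IndexCfs getIndexCfs(); }

    static int estimateResultRowsPerRange(SecondaryIndex highestSelectivityIndex,
                                          int fallback) {
        IndexCfs cfs = highestSelectivityIndex.getIndexCfs();
        // Guard against custom 2i implementations without a backing CF.
        return cfs == null ? fallback : cfs.getMeanColumns();
    }

    public static void main(String[] args) {
        SecondaryIndex customIndexWithoutCf = () -> null;
        System.out.println(estimateResultRowsPerRange(customIndexWithoutCf, 1)); // 1
    }
}
```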


[jira] [Commented] (CASSANDRA-5357) Query cache / partition head cache

2014-01-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870962#comment-13870962
 ] 

Jonathan Ellis commented on CASSANDRA-5357:
---

We're talking about static CFs aka partition key == primary key, right?

Then there is one row per partition, so there is no need for a special rows 
per partition = all setting.  The case you describe of wanting to cache a 
full table is not dependent on rows per partition but on cache size = number 
of partitions cached.

 Query cache / partition head cache
 --

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1

 Attachments: 0001-Cache-a-configurable-amount-of-columns-v2.patch, 
 0001-Cache-a-configurable-amount-of-columns.patch


 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[1/3] git commit: Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6495

2014-01-14 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 0e55e9ff6 -> 97c6bbe60
  refs/heads/trunk f6f50ddff -> 4910ce802


Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL
patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6495


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97c6bbe6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97c6bbe6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97c6bbe6

Branch: refs/heads/cassandra-2.0
Commit: 97c6bbe6093de16f3370a511023b861a381da7fe
Parents: 0e55e9f
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jan 14 11:51:36 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jan 14 11:51:48 2014 -0600

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/97c6bbe6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1bf5615..ef2df51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.5
+ * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
  * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
  * Delete unfinished compaction incrementally (CASSANDRA-6086)
  * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/97c6bbe6/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index be91d0d..c8ee297 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -243,7 +243,7 @@ public class StorageProxy implements StorageProxyMBean
 assert !expected.isEmpty();
             readCommand = new SliceByNamesReadCommand(keyspaceName, key, cfName, timestamp, new NamesQueryFilter(ImmutableSortedSet.copyOf(metadata.comparator, expected.getColumnNames())));
 }
-        List<Row> rows = read(Arrays.asList(readCommand), ConsistencyLevel.QUORUM);
+        List<Row> rows = read(Arrays.asList(readCommand), consistencyForPaxos == ConsistencyLevel.LOCAL_SERIAL ? ConsistencyLevel.LOCAL_QUORUM : ConsistencyLevel.QUORUM);
 ColumnFamily current = rows.get(0).cf;
 if (!casApplies(expected, current))
 {
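
The one-line change in the diff above is just a consistency-level selection for the pre-condition read. Sketched standalone (the enum below mirrors Cassandra's ConsistencyLevel names but is a self-contained stand-in): when the Paxos round runs at LOCAL_SERIAL, the read stays in the local DC with LOCAL_QUORUM instead of crossing DCs with QUORUM.

```java
public class CasReadConsistency {
    enum ConsistencyLevel { SERIAL, LOCAL_SERIAL, QUORUM, LOCAL_QUORUM }

    // Pick the consistency level for the CAS pre-condition read so that
    // LOCAL_SERIAL operations never require replicas outside the local DC.
    static ConsistencyLevel readLevelFor(ConsistencyLevel consistencyForPaxos) {
        return consistencyForPaxos == ConsistencyLevel.LOCAL_SERIAL
                ? ConsistencyLevel.LOCAL_QUORUM
                : ConsistencyLevel.QUORUM;
    }

    public static void main(String[] args) {
        System.out.println(readLevelFor(ConsistencyLevel.LOCAL_SERIAL)); // LOCAL_QUORUM
        System.out.println(readLevelFor(ConsistencyLevel.SERIAL));       // QUORUM
    }
}
```

This is what makes the DC1:3,DC2:3 scenario from the ticket work: with one DC down, a LOCAL_QUORUM read in the surviving DC can still succeed where a cross-DC QUORUM could not.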



[2/3] git commit: Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6495

2014-01-14 Thread jbellis
Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL
patch by Sankalp Kohli; reviewed by jbellis for CASSANDRA-6495


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97c6bbe6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97c6bbe6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97c6bbe6

Branch: refs/heads/trunk
Commit: 97c6bbe6093de16f3370a511023b861a381da7fe
Parents: 0e55e9f
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jan 14 11:51:36 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jan 14 11:51:48 2014 -0600

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/97c6bbe6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1bf5615..ef2df51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.5
+ * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
  * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
  * Delete unfinished compaction incrementally (CASSANDRA-6086)
  * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/97c6bbe6/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index be91d0d..c8ee297 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -243,7 +243,7 @@ public class StorageProxy implements StorageProxyMBean
 assert !expected.isEmpty();
             readCommand = new SliceByNamesReadCommand(keyspaceName, key, cfName, timestamp, new NamesQueryFilter(ImmutableSortedSet.copyOf(metadata.comparator, expected.getColumnNames())));
 }
-        List<Row> rows = read(Arrays.asList(readCommand), ConsistencyLevel.QUORUM);
+        List<Row> rows = read(Arrays.asList(readCommand), consistencyForPaxos == ConsistencyLevel.LOCAL_SERIAL ? ConsistencyLevel.LOCAL_QUORUM : ConsistencyLevel.QUORUM);
 ColumnFamily current = rows.get(0).cf;
 if (!casApplies(expected, current))
 {



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4910ce80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4910ce80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4910ce80

Branch: refs/heads/trunk
Commit: 4910ce8020d9c2e9747a9f5f5dedd3b8c998bd6e
Parents: f6f50dd 97c6bbe
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jan 14 11:51:54 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jan 14 11:51:54 2014 -0600

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4910ce80/CHANGES.txt
--
diff --cc CHANGES.txt
index 6e47cff,ef2df51..4069a77
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,31 -1,5 +1,32 @@@
 +2.1
 + * Introduce AtomicBTreeColumns (CASSANDRA-6271)
 + * Multithreaded commitlog (CASSANDRA-3578)
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 + * Secondary index support for collections (CASSANDRA-4511, 6383)
 + * SSTable metadata(Stats.db) format change (CASSANDRA-6356)
 + * Push composites support in the storage engine
 +   (CASSANDRA-5417, CASSANDRA-6520)
 + * Add snapshot space used to cfstats (CASSANDRA-6231)
 + * Add cardinality estimator for key count estimation (CASSANDRA-5906)
 + * CF id is changed to be non-deterministic. Data dir/key cache are created
 +   uniquely for CF id (CASSANDRA-5202)
 +
 +
  2.0.5
+  * Use LOCAL_QUORUM for LWT operations at LOCAL_SERIAL (CASSANDRA-6495)
   * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
   * Delete unfinished compaction incrementally (CASSANDRA-6086)
   * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4910ce80/src/java/org/apache/cassandra/service/StorageProxy.java
--



[jira] [Comment Edited] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-01-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870952#comment-13870952
 ] 

Andreas Schnitzerling edited comment on CASSANDRA-6283 at 1/14/14 5:52 PM:
---

Yesterday I updated one node with 2.0.4-rel incl. finalizer-patch (see results 
above). nodetool repair -par caused the node to repair endlessly, collecting 
about 65K files in the data folder. I updated now to pre-2.0.5 from today (commit 
f6f50ddffe0821617fe29482f9ec918608560381). After starting, a lot of LEAK 
messages and File-Not-Found messages appeared in system.log. But the number of files is going down.
{panel:title=system.log (pre-2.0.5)}
ERROR [SSTableBatchOpen:1] 2014-01-14 18:18:42,753 CassandraDaemon.java:139 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:109)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.buildSummary(SSTableReader.java:595)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:575) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:527) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:328) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:230) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:364) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
~[na:1.7.0_25]
at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_25]
Caused by: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_25]
at java.io.RandomAccessFile.init(Unknown Source) ~[na:1.7.0_25]
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:63)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
... 13 common frames omitted
...
ERROR [Finalizer] 2014-01-14 18:27:45,076 RandomAccessReader.java:401 - LEAK 
finalizer had to clean up 
java.lang.Exception: RAR for 
D:\Programme\cassandra\data\system\compactions_in_progress\system-compactions_in_progress-ka-5012-Statistics.db
 allocated
at 
org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:65)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:88)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:98)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:167)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:125)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 

[jira] [Comment Edited] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-01-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870952#comment-13870952
 ] 

Andreas Schnitzerling edited comment on CASSANDRA-6283 at 1/14/14 5:57 PM:
---

Yesterday I updated one node with 2.0.4-rel incl. finalizer-patch (see results 
above). nodetool repair -par caused the node to repair endlessly, collecting 
about 65K files in the data folder. I updated now to pre-2.0.5 from today (commit 
f6f50ddffe0821617fe29482f9ec918608560381). After starting, a lot of LEAK 
messages and File-Not-Found messages appeared in system.log. But the number of files is going down.
{panel:title=system.log (pre-2.0.5)}
ERROR [SSTableBatchOpen:1] 2014-01-14 18:18:42,753 CassandraDaemon.java:139 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:109)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.buildSummary(SSTableReader.java:595)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:575) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:527) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:328) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:230) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:364) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
~[na:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
~[na:1.7.0_25]
at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_25]
Caused by: java.io.FileNotFoundException: 
D:\Programme\cassandra\data\KSlogdata\CFlogdata\KSlogdata-CFlogdata-jb-27051-Index.db
 (Das System kann die angegebene Datei nicht finden)
at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_25]
at java.io.RandomAccessFile.&lt;init&gt;(Unknown Source) ~[na:1.7.0_25]
at 
org.apache.cassandra.io.util.RandomAccessReader.&lt;init&gt;(RandomAccessReader.java:63)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
... 13 common frames omitted
...
ERROR [Finalizer] 2014-01-14 18:27:45,076 RandomAccessReader.java:401 - LEAK 
finalizer had to clean up 
java.lang.Exception: RAR for 
D:\Programme\cassandra\data\system\compactions_in_progress\system-compactions_in_progress-ka-5012-Statistics.db
 allocated
at 
org.apache.cassandra.io.util.RandomAccessReader.&lt;init&gt;(RandomAccessReader.java:65)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:105)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:97)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:88)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:98)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:167)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:125)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1-SNAPSHOT.jar:2.1-SNAPSHOT]

[jira] [Updated] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6465:
---

Attachment: throughput.png
99th_latency.png

Attached are two graphs of throughput and 99th percentile latencies for four 
runs of stress.  Two runs kept the time penalty in DES, and two had it removed. 
 There was a normal stress read of 3 million rows with and without the time 
penalty, and a second run where one of the three nodes was suspended 30 seconds 
into the run and resumed 60 seconds into the run.

In short, there's no difference in throughput or median/95th/99th latencies 
when a node goes down with the time penalty removed, so it looks like rapid 
read protection does indeed dominate there.

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: 99th_latency.png, des-score-graph.png, 
 des.sample.15min.csv, get-scores.py, throughput.png


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for a row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually > 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph
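
 The threshold semantics quoted from the conf can be sketched as a simple 
 comparison (a minimal illustration with hypothetical names, not Cassandra's 
 actual DynamicEndpointSnitch logic; lower scores are assumed to be better):

```java
public class BadnessThresholdSketch {
    // Keep the statically preferred ("pinned") host until its score is more
    // than badnessThreshold worse than the best score, e.g. 0.2 = 20% worse.
    static boolean keepPinnedHost(double pinnedScore, double bestScore,
                                  double badnessThreshold) {
        return pinnedScore <= bestScore * (1.0 + badnessThreshold);
    }

    public static void main(String[] args) {
        // With the default threshold of 0.1, a host 5% worse stays pinned...
        System.out.println(keepPinnedHost(1.05, 1.0, 0.1)); // true
        // ...but a host 25% worse is abandoned for a faster replica.
        System.out.println(keepPinnedHost(1.25, 1.0, 0.1)); // false
    }
}
```

 This is why the reported fluctuation matters: if scores routinely swing by 
 more than the threshold, the comparison above flips constantly and pinning 
 never holds.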





[jira] [Updated] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6465:
---

Attachment: 6465-v1.patch

6465-v1.patch (and 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-6465]) removes the 
timePenalty component from the DES score.
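
Conceptually, the change amounts to dropping the second term of the score (a 
simplified sketch with illustrative names, not the actual updateScores code):

```java
public class DesScoreSketch {
    // Before the patch: latency badness plus a time-since-last-reply penalty,
    // each normalized against the worst value observed across hosts.
    static double scoreWithPenalty(double medianLatency, double maxLatency,
                                   double timePenalty, double maxPenalty) {
        return medianLatency / maxLatency + timePenalty / maxPenalty;
    }

    // After the patch: latency badness only.
    static double scoreWithoutPenalty(double medianLatency, double maxLatency) {
        return medianLatency / maxLatency;
    }

    public static void main(String[] args) {
        // A replica that simply hasn't replied recently no longer has its
        // score inflated by the penalty term.
        System.out.println(scoreWithPenalty(2.0, 4.0, 900.0, 1000.0)); // 1.4
        System.out.println(scoreWithoutPenalty(2.0, 4.0));             // 0.5
    }
}
```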

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: 6465-v1.patch, 99th_latency.png, des-score-graph.png, 
 des.sample.15min.csv, get-scores.py, throughput.png


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for a row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually > 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph





[jira] [Commented] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871089#comment-13871089
 ] 

Brandon Williams commented on CASSANDRA-6465:
-

Can we get some numbers on score fluctuation with the time penalty removed to 
be certain this fixes it?

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: 6465-v1.patch, 99th_latency.png, des-score-graph.png, 
 des.sample.15min.csv, get-scores.py, throughput.png


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for a row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually > 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph





[jira] [Created] (CASSANDRA-6582) ALTER TYPE RENAME hangs

2014-01-14 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-6582:
-

 Summary: ALTER TYPE RENAME hangs
 Key: CASSANDRA-6582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6582
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra trunk-4910ce8
java version 1.7.0_45
Reporter: Russ Hatch


I can't rename a user defined type using 'ALTER TYPE RENAME'.

Steps to reproduce:

{noformat}
[cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
19.39.0]
Use HELP for help.
cqlsh> 
rhatch@whatup:~/git/cstar/cassandra$ ccm clear
rhatch@whatup:~/git/cstar/cassandra$ ccm remove
rhatch@whatup:~/git/cstar/cassandra$ ccm create test_cluster
Current cluster is now: test_cluster
rhatch@whatup:~/git/cstar/cassandra$ ccm populate -n 1
rhatch@whatup:~/git/cstar/cassandra$ ccm start
rhatch@whatup:~/git/cstar/cassandra$ ccm node1 cqlsh
Connected to test_cluster at 127.0.0.1:9160.
[cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
19.39.0]
cqlsh> create keyspace user_type_renaming with replication = 
{'class':'SimpleStrategy', 'replication_factor':1} ;
cqlsh> use user_type_renaming ;
cqlsh:user_type_renaming> CREATE TYPE simple_type (
  ...   user_number int
  ...   );
cqlsh:user_type_renaming> ALTER TYPE simple_type rename to 
renamed_type;
{noformat}

And here's the log contents after the failure:

{noformat}
INFO  [MigrationStage:1] 2014-01-14 13:11:21,521 DefsTables.java:410 - Loading 
org.apache.cassandra.db.marshal.UserType(user_type_renaming,73696d706c655f74797065,757365725f6e756d626572:org.apache.cassandra.db.marshal.Int32Type)
ERROR [Thrift:1] 2014-01-14 13:11:36,684 CassandraDaemon.java:139 - Exception 
in thread Thread[Thrift:1,5,main]
java.lang.AssertionError: null
at org.apache.cassandra.config.Schema.getKSMetaData(Schema.java:228) 
~[main/:na]
at 
org.apache.cassandra.cql3.statements.AlterTypeStatement$TypeRename.makeUpdatedType(AlterTypeStatement.java:357)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.AlterTypeStatement.announceMigration(AlterTypeStatement.java:108)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:71)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:194)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:228) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:218) 
~[main/:na]
at 
org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1966)
 ~[main/:na]
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
 ~[thrift/:na]
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
 ~[thrift/:na]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194)
 ~[main/:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
~[na:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_45]
{noformat}





[jira] [Created] (CASSANDRA-6583) ALTER TYPE DROP complains no keyspace is active (when keyspace is active)

2014-01-14 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-6583:
-

 Summary: ALTER TYPE DROP complains no keyspace is active (when 
keyspace is active)
 Key: CASSANDRA-6583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6583
 Project: Cassandra
  Issue Type: Bug
 Environment: trunk-4910ce8
java version 1.7.0_45
Reporter: Russ Hatch
Priority: Minor


ALTER TYPE DROP complains that there is no active keyspace (even when the 
session has an active keyspace). The drop works when the prefix is provided.

steps to reproduce:
{noformat}
ccm create test_cluster
ccm populate -n 1
ccm start
ccm node1 cqlsh
Connected to test_cluster at 127.0.0.1:9160.
[cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
19.39.0]
Use HELP for help.
cqlsh> create keyspace user_type_dropping with replication = 
{'class':'SimpleStrategy', 'replication_factor':1} ;
cqlsh> use user_type_dropping ;
cqlsh:user_type_dropping> CREATE TYPE simple_type (
  ...   user_number int
  ...   );
cqlsh:user_type_dropping> DROP TYPE simple_type;
Bad Request: You have not set a keyspace for this session
cqlsh:user_type_dropping> DROP TYPE user_type_dropping.simple_type;
{noformat}






[jira] [Updated] (CASSANDRA-6583) DROP TYPE complains no keyspace is active (when keyspace is active)

2014-01-14 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-6583:
--

Summary: DROP TYPE complains no keyspace is active (when keyspace is 
active)  (was: ALTER TYPE DROP complains no keyspace is active (when keyspace 
is active))

 DROP TYPE complains no keyspace is active (when keyspace is active)
 ---

 Key: CASSANDRA-6583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6583
 Project: Cassandra
  Issue Type: Bug
 Environment: trunk-4910ce8
 java version 1.7.0_45
Reporter: Russ Hatch
Priority: Minor

 ALTER TYPE DROP complains that there is no active keyspace (even when the 
 session has an active keyspace). The drop works when the prefix is provided.
 steps to reproduce:
 {noformat}
 ccm create test_cluster
 ccm populate -n 1
 ccm start
 ccm node1 cqlsh
 Connected to test_cluster at 127.0.0.1:9160.
 [cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 Use HELP for help.
 cqlsh> create keyspace user_type_dropping with replication = 
 {'class':'SimpleStrategy', 'replication_factor':1} ;
 cqlsh> use user_type_dropping ;
 cqlsh:user_type_dropping> CREATE TYPE simple_type (
   ...   user_number int
   ...   );
 cqlsh:user_type_dropping> DROP TYPE simple_type;
 Bad Request: You have not set a keyspace for this session
 cqlsh:user_type_dropping> DROP TYPE user_type_dropping.simple_type;
 {noformat}





[jira] [Reopened] (CASSANDRA-6472) Node hangs when Drop Keyspace / Table is executed

2014-01-14 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reopened CASSANDRA-6472:
---

Reproduced In: 2.1

This still appears to be happening.

Steps to reproduce:
{noformat}
ccm create test_cluster
ccm populate -n 1
ccm start
ccm node1 cqlsh

Connected to test_cluster at 127.0.0.1:9160.
[cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
19.39.0]
Use HELP for help.
cqlsh> create keyspace test_table_dropping with replication = 
{'class':'SimpleStrategy', 'replication_factor':1} ;
cqlsh> use test_table_dropping ;
cqlsh:test_table_dropping> CREATE TABLE simple_table (
   ...   id uuid PRIMARY KEY,
   ...   sometext text);
cqlsh:test_table_dropping> DROP TABLE simple_table;
{noformat}

At this point the cql session hangs. I don't see any exceptions in the log, but 
this message appears:
{noformat}
INFO  [Thrift:1] 2014-01-14 13:23:40,341 MigrationManager.java:288 - Drop 
ColumnFamily 'user_type_dropping/simple_table'
{noformat}

 Node hangs when Drop Keyspace / Table is executed
 -

 Key: CASSANDRA-6472
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6472
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: amorton
Assignee: Benedict
 Fix For: 2.1


 from http://www.mail-archive.com/user@cassandra.apache.org/msg33566.html
 CommitLogSegmentManager.flushDataFrom() returns a FutureTask to wait on the 
 flushes, but the task is not started in flushDataFrom(). 
 The CLSM manager thread does not use the result and forceRecycleAll 
 (eventually called when making schema mods) does not start it so hangs when 
 calling get().
 plan to patch so flushDataFrom() returns a Future. 
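
 The hang described above can be reproduced in isolation with a plain 
 FutureTask (a standalone sketch, not the CLSM code itself): get() blocks 
 indefinitely unless something actually runs the task.

```java
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureTaskHangDemo {
    public static void main(String[] args) throws Exception {
        // Analogous to the task returned by flushDataFrom(): created but never started.
        FutureTask<String> task = new FutureTask<>(() -> "flushed");

        try {
            // get() never completes on an unstarted FutureTask; a timeout shows this.
            task.get(100, TimeUnit.MILLISECONDS);
            throw new AssertionError("unexpected: task completed without being run");
        } catch (TimeoutException expected) {
            System.out.println("get() blocked: task was never started");
        }

        task.run();                      // starting the task is what unblocks get()
        System.out.println(task.get());  // prints "flushed"
    }
}
```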





[jira] [Commented] (CASSANDRA-6472) Node hangs when Drop Keyspace / Table is executed

2014-01-14 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871120#comment-13871120
 ] 

Russ Hatch commented on CASSANDRA-6472:
---

I just noticed that if I Ctrl-C/Ctrl-D to kill my cqlsh session and then open a 
new one, the table is in fact gone. So the main problem is just cqlsh hanging 
after the statement.

 Node hangs when Drop Keyspace / Table is executed
 -

 Key: CASSANDRA-6472
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6472
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: amorton
Assignee: Benedict
 Fix For: 2.1


 from http://www.mail-archive.com/user@cassandra.apache.org/msg33566.html
 CommitLogSegmentManager.flushDataFrom() returns a FutureTask to wait on the 
 flushes, but the task is not started in flushDataFrom(). 
 The CLSM manager thread does not use the result and forceRecycleAll 
 (eventually called when making schema mods) does not start it so hangs when 
 calling get().
 plan to patch so flushDataFrom() returns a Future. 





[jira] [Assigned] (CASSANDRA-6582) ALTER TYPE RENAME hangs

2014-01-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6582:
-

Assignee: Sylvain Lebresne

 ALTER TYPE RENAME hangs
 ---

 Key: CASSANDRA-6582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6582
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra trunk-4910ce8
 java version 1.7.0_45
Reporter: Russ Hatch
Assignee: Sylvain Lebresne

 I can't rename a user defined type using 'ALTER TYPE RENAME'.
 Steps to reproduce:
 {noformat}
 [cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 Use HELP for help.
 cqlsh> 
 rhatch@whatup:~/git/cstar/cassandra$ ccm clear
 rhatch@whatup:~/git/cstar/cassandra$ ccm remove
 rhatch@whatup:~/git/cstar/cassandra$ ccm create test_cluster
 Current cluster is now: test_cluster
 rhatch@whatup:~/git/cstar/cassandra$ ccm populate -n 1
 rhatch@whatup:~/git/cstar/cassandra$ ccm start
 rhatch@whatup:~/git/cstar/cassandra$ ccm node1 cqlsh
 Connected to test_cluster at 127.0.0.1:9160.
 [cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 cqlsh> create keyspace user_type_renaming with replication = 
 {'class':'SimpleStrategy', 'replication_factor':1} ;
 cqlsh> use user_type_renaming ;
 cqlsh:user_type_renaming> CREATE TYPE simple_type (
   ...   user_number int
   ...   );
 cqlsh:user_type_renaming> ALTER TYPE simple_type rename to 
 renamed_type;
 {noformat}
 And here's the log contents after the failure:
 {noformat}
 INFO  [MigrationStage:1] 2014-01-14 13:11:21,521 DefsTables.java:410 - 
 Loading 
 org.apache.cassandra.db.marshal.UserType(user_type_renaming,73696d706c655f74797065,757365725f6e756d626572:org.apache.cassandra.db.marshal.Int32Type)
 ERROR [Thrift:1] 2014-01-14 13:11:36,684 CassandraDaemon.java:139 - Exception 
 in thread Thread[Thrift:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.config.Schema.getKSMetaData(Schema.java:228) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.AlterTypeStatement$TypeRename.makeUpdatedType(AlterTypeStatement.java:357)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.AlterTypeStatement.announceMigration(AlterTypeStatement.java:108)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:71)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:194)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:228) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:218) 
 ~[main/:na]
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1966)
  ~[main/:na]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
  ~[thrift/:na]
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
  ~[thrift/:na]
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194)
  ~[main/:na]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_45]
 {noformat}





[jira] [Assigned] (CASSANDRA-6583) DROP TYPE complains no keyspace is active (when keyspace is active)

2014-01-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6583:
-

Assignee: Sylvain Lebresne

 DROP TYPE complains no keyspace is active (when keyspace is active)
 ---

 Key: CASSANDRA-6583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6583
 Project: Cassandra
  Issue Type: Bug
 Environment: trunk-4910ce8
 java version 1.7.0_45
Reporter: Russ Hatch
Assignee: Sylvain Lebresne
Priority: Minor

 ALTER TYPE DROP complains that there is no active keyspace (even when the 
 session has an active keyspace). The drop works when the prefix is provided.
 steps to reproduce:
 {noformat}
 ccm create test_cluster
 ccm populate -n 1
 ccm start
 ccm node1 cqlsh
 Connected to test_cluster at 127.0.0.1:9160.
 [cqlsh 4.1.1 | Cassandra 2.1-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 Use HELP for help.
 cqlsh> create keyspace user_type_dropping with replication = 
 {'class':'SimpleStrategy', 'replication_factor':1} ;
 cqlsh> use user_type_dropping ;
 cqlsh:user_type_dropping> CREATE TYPE simple_type (
   ...   user_number int
   ...   );
 cqlsh:user_type_dropping> DROP TYPE simple_type;
 Bad Request: You have not set a keyspace for this session
 cqlsh:user_type_dropping> DROP TYPE user_type_dropping.simple_type;
 {noformat}





[jira] [Updated] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6465:
---

Attachment: des-scores-with-penalty.csv
des-scores-without-penalty.csv

Attached are the DES scores from a run with and without the time penalty.  This 
was done with a three node CCM cluster. node1 coordinated all reads, and node2 
and node3 were the replicas for all reads.  In both runs, node2 served most of 
the reads (as reported by cfstats).

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: 6465-v1.patch, 99th_latency.png, des-score-graph.png, 
 des-scores-with-penalty.csv, des-scores-without-penalty.csv, 
 des.sample.15min.csv, get-scores.py, throughput.png


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for a row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually > 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph





[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eb354fb4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eb354fb4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eb354fb4

Branch: refs/heads/trunk
Commit: eb354fb4b1d468983d861b548259a2e116cd3a4a
Parents: 4910ce8 200e494
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:16:14 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:16:14 2014 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java  | 18 ++
 1 file changed, 2 insertions(+), 16 deletions(-)
--




[2/3] git commit: Remove time penalty from DES. Patch by Tyler Hobbs, reviewed by brandonwilliams for CASSANDRA-6465

2014-01-14 Thread brandonwilliams
Remove time penalty from DES.
Patch by Tyler Hobbs, reviewed by brandonwilliams for CASSANDRA-6465


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200e494e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200e494e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200e494e

Branch: refs/heads/trunk
Commit: 200e494e7fd305cacb638e13a98b18356d124def
Parents: 97c6bbe
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:15:41 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:15:41 2014 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java  | 18 ++
 1 file changed, 2 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/200e494e/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
index ff8c70a..535dbb3 100644
--- a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
+++ b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
@@ -228,32 +228,18 @@ public class DynamicEndpointSnitch extends AbstractEndpointSnitch implements ILa
 
         }
         double maxLatency = 1;
-        long maxPenalty = 1;
-        HashMap<InetAddress, Long> penalties = new HashMap<InetAddress, Long>(samples.size());
-        // We're going to weight the latency and time since last reply for each host against the worst one we see, to arrive at sort of a 'badness percentage' for both of them.
-        // first, find the worst for each.
+        // We're going to weight the latency for each host against the worst one we see, to
+        // arrive at sort of a 'badness percentage' for them. First, find the worst for each:
         for (Map.Entry<InetAddress, ExponentiallyDecayingSample> entry : samples.entrySet())
         {
             double mean = entry.getValue().getSnapshot().getMedian();
             if (mean > maxLatency)
                 maxLatency = mean;
-            long timePenalty = lastReceived.containsKey(entry.getKey()) ? lastReceived.get(entry.getKey()) : System.nanoTime();
-            timePenalty = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - timePenalty);
-            timePenalty = timePenalty > UPDATE_INTERVAL_IN_MS ? UPDATE_INTERVAL_IN_MS : timePenalty;
-            // a convenient place to remember this since we've already calculated it and need it later
-            penalties.put(entry.getKey(), timePenalty);
-            if (timePenalty > maxPenalty)
-                maxPenalty = timePenalty;
         }
         // now make another pass to do the weighting based on the maximums we found before
         for (Map.Entry<InetAddress, ExponentiallyDecayingSample> entry : samples.entrySet())
         {
             double score = entry.getValue().getSnapshot().getMedian() / maxLatency;
-            if (penalties.containsKey(entry.getKey()))
-                score += penalties.get(entry.getKey()) / ((double) maxPenalty);
-            else
-                // there's a chance a host was added to the samples after our previous loop to get the time penalties.  Add 1.0 to it, or '100% bad' for the time penalty.
-                score += 1; // maxPenalty / maxPenalty
             // finally, add the severity without any weighting, since hosts scale this relative to their own load and the size of the task causing the severity.
             // Severity is basically a measure of compaction activity (CASSANDRA-3722).
             score += StorageService.instance.getSeverity(entry.getKey());



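The patched loop normalizes each host's median latency against the worst median observed (the "badness percentage"), then adds compaction severity unweighted. A minimal standalone sketch of that scoring under illustrative names (the real logic lives in DynamicEndpointSnitch; `ScoreSketch`, `score`, and the string host keys here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the post-patch scoring: each host's median
// latency is divided by the worst median seen, then severity is added
// without weighting, as in the diff above.
public class ScoreSketch
{
    public static Map<String, Double> score(Map<String, Double> medianLatencies,
                                            Map<String, Double> severities)
    {
        double maxLatency = 1;
        // first pass: find the worst median latency across all hosts
        for (double median : medianLatencies.values())
            if (median > maxLatency)
                maxLatency = median;

        // second pass: normalize each host against the worst, add severity
        Map<String, Double> scores = new HashMap<>();
        for (Map.Entry<String, Double> e : medianLatencies.entrySet())
        {
            double score = e.getValue() / maxLatency;
            score += severities.getOrDefault(e.getKey(), 0.0);
            scores.put(e.getKey(), score);
        }
        return scores;
    }
}
```

With medians of 10ms and 20ms and no severity, this yields scores of 0.5 and 1.0, so the slower host sorts last.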
[1/3] git commit: Remove time penalty from DES. Patch by Tyler Hobbs, reviewed by brandonwilliams for CASSANDRA-6465

2014-01-14 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-2.0 97c6bbe60 - 200e494e7
  refs/heads/trunk 4910ce802 - eb354fb4b


Remove time penalty from DES.
Patch by Tyler Hobbs, reviewed by brandonwilliams for CASSANDRA-6465


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200e494e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200e494e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200e494e

Branch: refs/heads/cassandra-2.0
Commit: 200e494e7fd305cacb638e13a98b18356d124def
Parents: 97c6bbe
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:15:41 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:15:41 2014 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java  | 18 ++
 1 file changed, 2 insertions(+), 16 deletions(-)
--





[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dc58541
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dc58541
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dc58541

Branch: refs/heads/trunk
Commit: 9dc585413ac308cd32daa9fea6da6a34ec9d04a0
Parents: eb354fb 4f50406
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:20:07 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:20:07 2014 -0600

--

--




[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f50406a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f50406a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f50406a

Branch: refs/heads/cassandra-2.0
Commit: 4f50406af7b5df75049d0fe411d2338aed0f2daa
Parents: 200e494 0d38b25
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:20:00 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:20:00 2014 -0600

--

--




[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f50406a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f50406a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f50406a

Branch: refs/heads/trunk
Commit: 4f50406af7b5df75049d0fe411d2338aed0f2daa
Parents: 200e494 0d38b25
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:20:00 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:20:00 2014 -0600

--

--




[2/6] git commit: Don't shutdown a nonexistant native server on 1.2, either.

2014-01-14 Thread brandonwilliams
Don't shutdown a nonexistant native server on 1.2, either.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d38b25f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d38b25f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d38b25f

Branch: refs/heads/cassandra-2.0
Commit: 0d38b25f6fa3b7b52f5e64780f6591eabc7fc76d
Parents: 3405878
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:19:51 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:19:51 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d38b25f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 5ae02e9..043a1eb 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -355,7 +355,8 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         {
             throw new IllegalStateException("No configured daemon");
         }
-        daemon.nativeServer.stop();
+        if (daemon.nativeServer != null)
+            daemon.nativeServer.stop();
     }
 
     public boolean isNativeTransportRunning()



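The one-line fix above guards the shutdown path when the native server was never started: stopping a transport that does not exist must be a no-op rather than a NullPointerException. A minimal sketch of the same guard, with hypothetical Daemon/Server types standing in for the real ones:

```java
// Hypothetical types illustrating the null guard from the patch:
// the native server may never have been started, so shutdown must
// tolerate a null reference instead of throwing an NPE.
public class ShutdownSketch
{
    static class Server
    {
        boolean running = true;
        void stop() { running = false; }
    }

    static class Daemon
    {
        Server nativeServer; // null if the native transport never started
    }

    public static void stopNativeTransport(Daemon daemon)
    {
        if (daemon == null)
            throw new IllegalStateException("No configured daemon");
        if (daemon.nativeServer != null)   // the guard added by the patch
            daemon.nativeServer.stop();
    }
}
```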
[3/6] git commit: Don't shutdown a nonexistant native server on 1.2, either.

2014-01-14 Thread brandonwilliams
Don't shutdown a nonexistant native server on 1.2, either.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d38b25f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d38b25f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d38b25f

Branch: refs/heads/trunk
Commit: 0d38b25f6fa3b7b52f5e64780f6591eabc7fc76d
Parents: 3405878
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:19:51 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:19:51 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--





[1/6] git commit: Don't shutdown a nonexistant native server on 1.2, either.

2014-01-14 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 34058783f - 0d38b25f6
  refs/heads/cassandra-2.0 200e494e7 - 4f50406af
  refs/heads/trunk eb354fb4b - 9dc585413


Don't shutdown a nonexistant native server on 1.2, either.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d38b25f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d38b25f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d38b25f

Branch: refs/heads/cassandra-1.2
Commit: 0d38b25f6fa3b7b52f5e64780f6591eabc7fc76d
Parents: 3405878
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 15:19:51 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 15:19:51 2014 -0600

--
 src/java/org/apache/cassandra/service/StorageService.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--





[jira] [Commented] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-14 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871204#comment-13871204
 ] 

Pavel Yaskevich commented on CASSANDRA-5202:


[~yukim] Looked at patches 1-4; everything looks good, but I have one question to clarify regarding #3: we can do sstable.descriptor.equals(descriptor) now because the descriptor will also have an absolute path thanks to the new find method, is that correct?

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: 0001-make-2i-CFMetaData-have-parent-s-CF-ID.patch, 
 0002-Don-t-scrub-2i-CF-if-index-type-is-CUSTOM.patch, 
 0003-Fix-user-defined-compaction.patch, 0004-Fix-serialization-test.patch, 
 0005-Create-system_auth-tables-with-fixed-CFID.patch, 0005-auth-v2.txt, 
 5202.txt, astyanax-stress-driver.zip


 Attached is a driver that sequentially:
 1. Drops keyspace
 2. Creates keyspace
 4. Creates 2 column families
 5. Seeds 1M rows with 100 columns
 6. Queries these 2 column families
 The above steps are repeated 1000 times.
 The following exception is observed at random (race - SEDA?):
 ERROR [ReadStage:55] 2013-01-29 19:24:52,676 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[ReadStage:55,5,main]
 java.lang.AssertionError: DecoratedKey(-1, ) != DecoratedKey(62819832764241410631599989027761269388, 313a31) in C:\var\lib\cassandra\data\user_role_reverse_index\business_entity_role\user_role_reverse_index-business_entity_role-hf-1-Data.db
   at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:60)
   at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
   at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
   at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1367)
   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1229)
   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1164)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:822)
   at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1271)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 This exception appears in the server at the time of client submitting a query 
 request (row slice) and not at the time data is seeded. The client times out 
 and this data can no longer be queried as the same exception would always 
 occur from there on.
 Also on iteration 201, it appears that dropping column families failed and as 
 a result their recreation failed with unique column family name violation 
 (see exception below). Note that the data files are actually gone, so it 
 appears that the server runtime responsible for creating column family was 
 out of sync with the piece that dropped them:
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5105 ms
 Iteration: 200; Total running time for 1000 queries is 232; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5361 ms
 Iteration: 201; Total running time for 1000 queries is 222; Average running 
 time of 1000 queries is 0 ms
 Starting dropping 

[jira] [Created] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Nicolas Favre-Felix (JIRA)
Nicolas Favre-Felix created CASSANDRA-6584:
--

 Summary: LOCAL_SERIAL doesn't work from Thrift
 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix


Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
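The bug is a missing case when mapping the Thrift consistency-level enum to the internal one, so LOCAL_SERIAL falls through to an assertion failure. A hypothetical sketch of an exhaustive mapping that fails loudly for unknown values instead of tripping an assert (enum and class names here are illustrative, not the actual ThriftConversion code):

```java
public class ConsistencySketch
{
    // Illustrative stand-ins for the Thrift and internal enums.
    enum ThriftCL { ONE, QUORUM, LOCAL_QUORUM, SERIAL, LOCAL_SERIAL }
    enum InternalCL { ONE, QUORUM, LOCAL_QUORUM, SERIAL, LOCAL_SERIAL }

    // Map every constant explicitly; an unhandled value throws a
    // descriptive exception rather than hitting a bare assert.
    public static InternalCL fromThrift(ThriftCL cl)
    {
        switch (cl)
        {
            case ONE:          return InternalCL.ONE;
            case QUORUM:       return InternalCL.QUORUM;
            case LOCAL_QUORUM: return InternalCL.LOCAL_QUORUM;
            case SERIAL:       return InternalCL.SERIAL;
            case LOCAL_SERIAL: return InternalCL.LOCAL_SERIAL;
            default:
                throw new IllegalArgumentException("Unknown consistency level: " + cl);
        }
    }
}
```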


[jira] [Commented] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-14 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871284#comment-13871284
 ] 

Yuki Morishita commented on CASSANDRA-5202:
---

Yes, that is my intention. Since the descriptor now gets its directory part from Directories, there is no need to compare file names as strings.

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: 0001-make-2i-CFMetaData-have-parent-s-CF-ID.patch, 
 0002-Don-t-scrub-2i-CF-if-index-type-is-CUSTOM.patch, 
 0003-Fix-user-defined-compaction.patch, 0004-Fix-serialization-test.patch, 
 0005-Create-system_auth-tables-with-fixed-CFID.patch, 0005-auth-v2.txt, 
 5202.txt, astyanax-stress-driver.zip



[jira] [Commented] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-14 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871412#comment-13871412
 ] 

Pavel Yaskevich commented on CASSANDRA-5202:


sounds good to me, +1.

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: 0001-make-2i-CFMetaData-have-parent-s-CF-ID.patch, 
 0002-Don-t-scrub-2i-CF-if-index-type-is-CUSTOM.patch, 
 0003-Fix-user-defined-compaction.patch, 0004-Fix-serialization-test.patch, 
 0005-Create-system_auth-tables-with-fixed-CFID.patch, 0005-auth-v2.txt, 
 5202.txt, astyanax-stress-driver.zip



[jira] [Created] (CASSANDRA-6585) Make node tool exit code non zero when it fails to create snapshot

2014-01-14 Thread Vishy Kasar (JIRA)
Vishy Kasar created CASSANDRA-6585:
--

 Summary: Make node tool exit code non zero when it fails to create 
snapshot
 Key: CASSANDRA-6585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6585
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Vishy Kasar
 Fix For: 1.2.14


When node tool snapshot is invoked on a bootstrapping node, it does not create 
the snapshot as expected. However node tool returns a zero exit code in that 
case. Can we make the node tool return a non zero exit code when create 
snapshot fails?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-6585) Make node tool exit code non zero when it fails to create snapshot

2014-01-14 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6585:
---

Assignee: Brandon Williams

 Make node tool exit code non zero when it fails to create snapshot
 --

 Key: CASSANDRA-6585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6585
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Vishy Kasar
Assignee: Brandon Williams
 Fix For: 1.2.14


 When node tool snapshot is invoked on a bootstrapping node, it does not 
 create the snapshot as expected. However node tool returns a zero exit code 
 in that case. Can we make the node tool return a non zero exit code when 
 create snapshot fails?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6585) Make node tool exit code non zero when it fails to create snapshot

2014-01-14 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6585:


Attachment: 6585.txt

It's likely just lucky timing that is preventing a snap from being created, since there's no check for this.  Patch adds a check for whether we're bootstrapping, and if so throws an error refusing to snap that will exit non-zero, since I can't think of a scenario where snapping during bootstrap makes any kind of sense.

 Make node tool exit code non zero when it fails to create snapshot
 --

 Key: CASSANDRA-6585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6585
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Vishy Kasar
Assignee: Brandon Williams
 Fix For: 1.2.14

 Attachments: 6585.txt


 When node tool snapshot is invoked on a bootstrapping node, it does not 
 create the snapshot as expected. However node tool returns a zero exit code 
 in that case. Can we make the node tool return a non zero exit code when 
 create snapshot fails?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
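The patch described above makes the server refuse a snapshot while bootstrapping, which propagates to the tool as a non-zero exit code. A minimal sketch of the server-side check (class and method names are hypothetical; the real check sits in the snapshot path of StorageService):

```java
public class SnapshotSketch
{
    // Refuse to snapshot while bootstrapping: data is still streaming in,
    // so any snapshot taken now would be incomplete and misleading.
    public static void takeSnapshot(boolean bootstrapping, String tag)
    {
        if (bootstrapping)
            throw new IllegalStateException("Cannot snapshot while bootstrapping");
        // ... snapshot each keyspace under the given tag ...
    }
}
```

A JMX-invoked method that throws surfaces as an error in nodetool, which then exits non-zero instead of silently reporting success.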


[2/5] git commit: Don't scrub 2i CF if index type is CUSTOM

2014-01-14 Thread yukim
Don't scrub 2i CF if index type is CUSTOM


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0bfe9efd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0bfe9efd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0bfe9efd

Branch: refs/heads/trunk
Commit: 0bfe9efd859eccd6bb6c6a253ad3912650831ec0
Parents: 3e31143
Author: Yuki Morishita yu...@apache.org
Authored: Thu Jan 9 12:51:04 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jan 14 20:23:00 2014 -0600

--
 .../org/apache/cassandra/db/ColumnFamilyStore.java | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0bfe9efd/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6d3e21a..892e881 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -40,11 +40,8 @@ import org.apache.cassandra.cache.IRowCacheEntry;
 import org.apache.cassandra.cache.RowCacheKey;
 import org.apache.cassandra.cache.RowCacheSentinel;
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutor;
-import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.*;
 import org.apache.cassandra.config.CFMetaData.SpeculativeRetry;
-import org.apache.cassandra.config.ColumnDefinition;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 import org.apache.cassandra.db.commitlog.CommitLog;
 import org.apache.cassandra.db.commitlog.ReplayPosition;
@@ -464,8 +461,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 if (def.isIndexed())
 {
-CFMetaData indexMetadata = 
CFMetaData.newIndexMetadata(metadata, def, 
SecondaryIndex.getIndexComparator(metadata, def));
-scrubDataDirectories(indexMetadata);
+CellNameType indexComparator = 
SecondaryIndex.getIndexComparator(metadata, def);
+if (indexComparator != null)
+{
+CFMetaData indexMetadata = 
CFMetaData.newIndexMetadata(metadata, def, indexComparator);
+scrubDataDirectories(indexMetadata);
+}
 }
 }
 }



[3/5] git commit: Fix user defined compaction

2014-01-14 Thread yukim
Fix user defined compaction


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be214175
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be214175
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be214175

Branch: refs/heads/trunk
Commit: be2141757010aacbcb2c6ebaa00623db14e192bd
Parents: 0bfe9ef
Author: Yuki Morishita yu...@apache.org
Authored: Thu Jan 9 15:23:11 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jan 14 20:23:22 2014 -0600

--
 .../org/apache/cassandra/db/Directories.java| 10 +
 .../db/compaction/CompactionManager.java| 23 ++--
 2 files changed, 17 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be214175/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java 
b/src/java/org/apache/cassandra/db/Directories.java
index 9eb254e..a124d67 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -248,6 +248,16 @@ public class Directories
 return null;
 }
 
+public Descriptor find(String filename)
+{
+for (File dir : sstableDirectories)
+{
+if (new File(dir, filename).exists())
+return Descriptor.fromFilename(dir, filename).left;
+}
+return null;
+}
+
 public File getDirectoryForNewSSTables()
 {
 File path = getWriteableLocationAsFile();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be214175/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index e4f5237..7927574 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -303,7 +303,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 public void forceUserDefinedCompaction(String dataFiles)
 {
 String[] filenames = dataFiles.split(",");
-Multimap<Pair<String, String>, Descriptor> descriptors = 
ArrayListMultimap.create();
+Multimap<ColumnFamilyStore, Descriptor> descriptors = 
ArrayListMultimap.create();
 
 for (String filename : filenames)
 {
@@ -314,19 +314,14 @@ public class CompactionManager implements 
CompactionManagerMBean
 logger.warn("Schema does not exist for file {}. Skipping.", 
filename);
 continue;
 }
-File directory = new File(desc.ksname + File.separator + 
desc.cfname);
 // group by keyspace/columnfamily
-Pair<Descriptor, String> p = Descriptor.fromFilename(directory, 
filename.trim());
-Pair<String, String> key = Pair.create(p.left.ksname, 
p.left.cfname);
-descriptors.put(key, p.left);
+ColumnFamilyStore cfs = 
Keyspace.open(desc.ksname).getColumnFamilyStore(desc.cfname);
+descriptors.put(cfs, cfs.directories.find(filename.trim()));
 }
 
 List<Future<?>> futures = new ArrayList<>();
-for (Pair<String, String> key : descriptors.keySet())
-{
-ColumnFamilyStore cfs = 
Keyspace.open(key.left).getColumnFamilyStore(key.right);
-futures.add(submitUserDefined(cfs, descriptors.get(key), 
getDefaultGcBefore(cfs)));
-}
+for (ColumnFamilyStore cfs : descriptors.keySet())
+futures.add(submitUserDefined(cfs, descriptors.get(cfs), 
getDefaultGcBefore(cfs)));
 FBUtilities.waitOnFutures(futures);
 }
 
@@ -369,16 +364,12 @@ public class CompactionManager implements 
CompactionManagerMBean
 }
 
 // This acquire a reference on the sstable
-// This is not efficent, do not use in any critical path
+// This is not efficient, do not use in any critical path
 private SSTableReader lookupSSTable(final ColumnFamilyStore cfs, 
Descriptor descriptor)
 {
 for (SSTableReader sstable : cfs.getSSTables())
 {
-// .equals() with no other changes won't work because in 
sstable.descriptor, the directory is an absolute path.
-// We could construct descriptor with an absolute path too but I 
haven't found any satisfying way to do that
-// (DB.getDataFileLocationForTable() may not return the right path 
if you have multiple volumes). Hence the
-// endsWith.
-if (sstable.descriptor.toString().endsWith(descriptor.toString()))
+if 

[1/5] git commit: make 2i CFMetaData have parent's CF ID

2014-01-14 Thread yukim
Updated Branches:
  refs/heads/trunk 9dc585413 -> ea565aac9


make 2i CFMetaData have parent's CF ID


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e31143e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e31143e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e31143e

Branch: refs/heads/trunk
Commit: 3e31143e1c8658e9ab529fe6f705bf836e7f7a64
Parents: 9dc5854
Author: Yuki Morishita yu...@apache.org
Authored: Thu Jan 9 11:20:31 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jan 14 20:22:37 2014 -0600

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e31143e/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 78ee300..3dc7022 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -558,6 +558,15 @@ public final class CFMetaData
 .memtableFlushPeriod(3600 * 1000);
 }
 
+/**
+ * Creates CFMetaData for secondary index CF.
+ * Secondary index CF has the same CF ID as parent's.
+ *
+ * @param parent Parent CF where secondary index is created
+ * @param info Column definition containing secondary index definition
+ * @param indexComparator Comparator for secondary index
+ * @return CFMetaData for secondary index
+ */
 public static CFMetaData newIndexMetadata(CFMetaData parent, 
ColumnDefinition info, CellNameType indexComparator)
 {
 // Depends on parent's cache setting, turn on its index CF's cache.
@@ -566,7 +575,7 @@ public final class CFMetaData
  ? Caching.KEYS_ONLY
  : Caching.NONE;
 
-return new CFMetaData(parent.ksName, 
parent.indexColumnFamilyName(info), ColumnFamilyType.Standard, indexComparator)
+return new CFMetaData(parent.ksName, 
parent.indexColumnFamilyName(info), ColumnFamilyType.Standard, indexComparator, 
parent.cfId)
  .keyValidator(info.type)
  .readRepairChance(0.0)
  .dcLocalReadRepairChance(0.0)



[5/5] git commit: Create system_auth tables with fixed CFID

2014-01-14 Thread yukim
Create system_auth tables with fixed CFID


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea565aac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea565aac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea565aac

Branch: refs/heads/trunk
Commit: ea565aac9702698b2bfc2db7c3ca84da3f96121a
Parents: 29d5dd0
Author: Yuki Morishita yu...@apache.org
Authored: Tue Jan 14 21:12:38 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jan 14 21:12:38 2014 -0600

--
 src/java/org/apache/cassandra/auth/Auth.java| 24 
 .../cassandra/auth/CassandraAuthorizer.java | 14 +---
 .../cassandra/auth/PasswordAuthenticator.java   | 19 +---
 .../org/apache/cassandra/config/CFMetaData.java | 17 +-
 .../apache/cassandra/config/CFMetaDataTest.java |  2 +-
 5 files changed, 33 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea565aac/src/java/org/apache/cassandra/auth/Auth.java
--
diff --git a/src/java/org/apache/cassandra/auth/Auth.java 
b/src/java/org/apache/cassandra/auth/Auth.java
index 36e55bf..90b1215 100644
--- a/src/java/org/apache/cassandra/auth/Auth.java
+++ b/src/java/org/apache/cassandra/auth/Auth.java
@@ -25,12 +25,15 @@ import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.KSMetaData;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.QueryOptions;
+import org.apache.cassandra.cql3.statements.CFStatement;
+import org.apache.cassandra.cql3.statements.CreateTableStatement;
 import org.apache.cassandra.cql3.statements.SelectStatement;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.RequestExecutionException;
@@ -127,7 +130,7 @@ public class Auth
 return;
 
 setupAuthKeyspace();
-setupUsersTable();
+setupTable(USERS_CF, USERS_CF_SCHEMA);
 
 DatabaseDescriptor.getAuthenticator().setup();
 DatabaseDescriptor.getAuthorizer().setup();
@@ -187,15 +190,26 @@ public class Auth
 }
 }
 
-private static void setupUsersTable()
+/**
+ * Set up table from given CREATE TABLE statement under system_auth 
keyspace, if not already done so.
+ *
+ * @param name name of the table
+ * @param cql CREATE TABLE statement
+ */
+public static void setupTable(String name, String cql)
 {
-if (Schema.instance.getCFMetaData(AUTH_KS, USERS_CF) == null)
+if (Schema.instance.getCFMetaData(AUTH_KS, name) == null)
 {
 try
 {
-QueryProcessor.process(USERS_CF_SCHEMA, ConsistencyLevel.ANY);
+CFStatement parsed = 
(CFStatement)QueryProcessor.parseStatement(cql);
+parsed.prepareKeyspace(AUTH_KS);
+CreateTableStatement statement = (CreateTableStatement) 
parsed.prepare().statement;
+CFMetaData cfm = 
statement.getCFMetaData().clone(CFMetaData.generateLegacyCfId(AUTH_KS, name));
+assert cfm.cfName.equals(name);
+MigrationManager.announceNewColumnFamily(cfm);
 }
-catch (RequestExecutionException e)
+catch (Exception e)
 {
 throw new AssertionError(e);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ea565aac/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
--
diff --git a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java 
b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
index 8f257db..85d2b16 100644
--- a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
@@ -25,7 +25,6 @@ import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.QueryOptions;
@@ -33,7 +32,6 @@ import org.apache.cassandra.cql3.statements.SelectStatement;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.exceptions.*;
-import org.apache.cassandra.service.ClientState;
 import 

[4/5] git commit: Fix serialization test

2014-01-14 Thread yukim
Fix serialization test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29d5dd03
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29d5dd03
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29d5dd03

Branch: refs/heads/trunk
Commit: 29d5dd03ae7f6b4741e07ce01014acaca93a6e6e
Parents: be21417
Author: Yuki Morishita yu...@apache.org
Authored: Thu Jan 9 17:06:34 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Jan 14 20:23:35 2014 -0600

--
 test/data/serialization/2.0/db.Row.bin   | Bin 587 -> 0 bytes
 .../org/apache/cassandra/db/SerializationsTest.java  |   8 
 2 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29d5dd03/test/data/serialization/2.0/db.Row.bin
--
diff --git a/test/data/serialization/2.0/db.Row.bin 
b/test/data/serialization/2.0/db.Row.bin
deleted file mode 100644
index c699448..000
Binary files a/test/data/serialization/2.0/db.Row.bin and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29d5dd03/test/unit/org/apache/cassandra/db/SerializationsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/SerializationsTest.java 
b/test/unit/org/apache/cassandra/db/SerializationsTest.java
index 2bc1493..68686cb 100644
--- a/test/unit/org/apache/cassandra/db/SerializationsTest.java
+++ b/test/unit/org/apache/cassandra/db/SerializationsTest.java
@@ -200,8 +200,9 @@ public class SerializationsTest extends 
AbstractSerializationsTester
 @Test
 public void testRowRead() throws IOException
 {
-if (EXECUTE_WRITES)
-testRowWrite();
+// Since every table creation generates different CF ID,
+// we need to generate file every time
+testRowWrite();
 
 DataInputStream in = getInput(db.Row.bin);
 assert Row.serializer.deserialize(in, getVersion()) != null;
@@ -248,8 +249,7 @@ public class SerializationsTest extends 
AbstractSerializationsTester
 public void testMutationRead() throws IOException
 {
 // mutation deserialization requires being able to look up the 
keyspace in the schema,
-// so we need to rewrite this each time.  We can go back to testing 
on-disk data
-// once we pull RM.keyspace field out.
+// so we need to rewrite this each time. plus, CF ID is different for 
every run.
 testMutationWrite();
 
 DataInputStream in = getInput(db.RowMutation.bin);



[jira] [Resolved] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-14 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-5202.
---

Resolution: Fixed

Committed, thanks!

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: 0001-make-2i-CFMetaData-have-parent-s-CF-ID.patch, 
 0002-Don-t-scrub-2i-CF-if-index-type-is-CUSTOM.patch, 
 0003-Fix-user-defined-compaction.patch, 0004-Fix-serialization-test.patch, 
 0005-Create-system_auth-tables-with-fixed-CFID.patch, 0005-auth-v2.txt, 
 5202.txt, astyanax-stress-driver.zip


 Attached is a driver that sequentially:
 1. Drops keyspace
 2. Creates keyspace
 3. Creates 2 column families
 4. Seeds 1M rows with 100 columns
 5. Queries these 2 column families
 The above steps are repeated 1000 times.
 The following exception is observed at random (race - SEDA?):
 ERROR [ReadStage:55] 2013-01-29 19:24:52,676 AbstractCassandraDaemon.java 
 (line 135) Exception in thread Thread[ReadStage:55,5,main]
 java.lang.AssertionError: DecoratedKey(-1, ) != 
 DecoratedKey(62819832764241410631599989027761269388, 313a31) in 
 C:\var\lib\cassandra\data\user_role_reverse_index\business_entity_role\user_role_reverse_index-business_entity_role-hf-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:60)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1367)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1229)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1164)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:822)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1271)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 This exception appears in the server at the time of client submitting a query 
 request (row slice) and not at the time data is seeded. The client times out 
 and this data can no longer be queried as the same exception would always 
 occur from there on.
 Also on iteration 201, it appears that dropping column families failed and as 
 a result their recreation failed with unique column family name violation 
 (see exception below). Note that the data files are actually gone, so it 
 appears that the server runtime responsible for creating column family was 
 out of sync with the piece that dropped them:
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5105 ms
 Iteration: 200; Total running time for 1000 queries is 232; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5361 ms
 Iteration: 201; Total running time for 1000 queries is 222; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Starting creating column families
 Exception in thread "main" 
 com.netflix.astyanax.connectionpool.exceptions.BadRequestException: 
 BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=2468(2469), 
 attempts=1]InvalidRequestException(why:Keyspace 

[jira] [Updated] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6584:


Attachment: 6584.txt

 LOCAL_SERIAL doesn't work from Thrift
 -

 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Brandon Williams
  Labels: easyfix
 Attachments: 6584.txt


 Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
 since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.
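The failure mode is easy to reproduce in miniature. The enums below are hypothetical stand-ins for the Thrift and internal `ConsistencyLevel` types, showing why a constant with no `case` falls through to the trailing `AssertionError`:

```java
// Miniature reproduction of the 6584 bug with hypothetical enums: before the
// patch, the switch had no LOCAL_SERIAL case, so conversion of that level
// fell through to `throw new AssertionError()`.
public class ConsistencyConversionSketch
{
    enum ThriftCL { ONE, QUORUM, SERIAL, LOCAL_SERIAL }
    enum InternalCL { ONE, QUORUM, SERIAL, LOCAL_SERIAL }

    static InternalCL fromThrift(ThriftCL cl)
    {
        switch (cl)
        {
            case ONE: return InternalCL.ONE;
            case QUORUM: return InternalCL.QUORUM;
            case SERIAL: return InternalCL.SERIAL;
            // The one-line fix: without this case, LOCAL_SERIAL falls through.
            case LOCAL_SERIAL: return InternalCL.LOCAL_SERIAL;
        }
        throw new AssertionError();
    }

    public static void main(String[] args)
    {
        System.out.println(fromThrift(ThriftCL.LOCAL_SERIAL)); // LOCAL_SERIAL
    }
}
```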



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6584:
---

Assignee: Brandon Williams

 LOCAL_SERIAL doesn't work from Thrift
 -

 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Brandon Williams
  Labels: easyfix
 Attachments: 6584.txt


 Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
 since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[1/3] git commit: Add LOCAL_SERIAL to ThriftConversion Patch by brandonwilliams for CASSANDRA-6584

2014-01-14 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-2.0 4f50406af -> 05f120991
  refs/heads/trunk ea565aac9 -> 54f728c77


Add LOCAL_SERIAL to ThriftConversion
Patch by brandonwilliams for CASSANDRA-6584


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/05f12099
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/05f12099
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/05f12099

Branch: refs/heads/cassandra-2.0
Commit: 05f120991cfbfa6eb019d2cb78b9afe41e26e3bf
Parents: 4f50406
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:21:27 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:21:27 2014 -0600

--
 src/java/org/apache/cassandra/thrift/ThriftConversion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/05f12099/src/java/org/apache/cassandra/thrift/ThriftConversion.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftConversion.java 
b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
index 0d92641..24ce045 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftConversion.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
@@ -41,6 +41,7 @@ public class ThriftConversion
 case LOCAL_QUORUM: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_QUORUM;
 case EACH_QUORUM: return 
org.apache.cassandra.db.ConsistencyLevel.EACH_QUORUM;
 case SERIAL: return 
org.apache.cassandra.db.ConsistencyLevel.SERIAL;
+case LOCAL_SERIAL: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_SERIAL;
 case LOCAL_ONE: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_ONE;
 }
 throw new AssertionError();



[2/3] git commit: Add LOCAL_SERIAL to ThriftConversion Patch by brandonwilliams for CASSANDRA-6584

2014-01-14 Thread brandonwilliams
Add LOCAL_SERIAL to ThriftConversion
Patch by brandonwilliams for CASSANDRA-6584


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/05f12099
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/05f12099
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/05f12099

Branch: refs/heads/trunk
Commit: 05f120991cfbfa6eb019d2cb78b9afe41e26e3bf
Parents: 4f50406
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:21:27 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:21:27 2014 -0600

--
 src/java/org/apache/cassandra/thrift/ThriftConversion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/05f12099/src/java/org/apache/cassandra/thrift/ThriftConversion.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftConversion.java 
b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
index 0d92641..24ce045 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftConversion.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
@@ -41,6 +41,7 @@ public class ThriftConversion
 case LOCAL_QUORUM: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_QUORUM;
 case EACH_QUORUM: return 
org.apache.cassandra.db.ConsistencyLevel.EACH_QUORUM;
 case SERIAL: return 
org.apache.cassandra.db.ConsistencyLevel.SERIAL;
+case LOCAL_SERIAL: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_SERIAL;
 case LOCAL_ONE: return 
org.apache.cassandra.db.ConsistencyLevel.LOCAL_ONE;
 }
 throw new AssertionError();



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54f728c7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54f728c7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54f728c7

Branch: refs/heads/trunk
Commit: 54f728c771742fba66ff751f305a3cf1f5676c7d
Parents: ea565aa 05f1209
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:22:07 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:22:07 2014 -0600

--
 src/java/org/apache/cassandra/thrift/ThriftConversion.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/54f728c7/src/java/org/apache/cassandra/thrift/ThriftConversion.java
--



[jira] [Updated] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6584:
-

Reviewer: Aleksey Yeschenko

 LOCAL_SERIAL doesn't work from Thrift
 -

 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Brandon Williams
  Labels: easyfix
 Fix For: 2.0.5

 Attachments: 6584.txt


 Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
 since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871586#comment-13871586
 ] 

Jeremiah Jordan commented on CASSANDRA-6584:


LGTM

 LOCAL_SERIAL doesn't work from Thrift
 -

 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Brandon Williams
  Labels: easyfix
 Fix For: 2.0.5

 Attachments: 6584.txt


 Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
 since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6584) LOCAL_SERIAL doesn't work from Thrift

2014-01-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871585#comment-13871585
 ] 

Aleksey Yeschenko commented on CASSANDRA-6584:
--

+1

 LOCAL_SERIAL doesn't work from Thrift
 -

 Key: CASSANDRA-6584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6584
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nicolas Favre-Felix
Assignee: Brandon Williams
  Labels: easyfix
 Fix For: 2.0.5

 Attachments: 6584.txt


 Calling cas from Thrift with CL.LOCAL_SERIAL fails with an AssertionError 
 since ThriftConversion.fromThrift has no case statement for LOCAL_SERIAL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[2/3] git commit: update changes

2014-01-14 Thread brandonwilliams
update changes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7514e61b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7514e61b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7514e61b

Branch: refs/heads/trunk
Commit: 7514e61b48e9456cf6591abaf6dbf17b52217883
Parents: 05f1209
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:27:19 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:27:19 2014 -0600

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7514e61b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2df51..2bbf809 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,8 @@
  * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
  * Delete unfinished compaction incrementally (CASSANDRA-6086)
  * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
+ * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
+ * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 Merged from 1.2:
  * fsync compression metadata (CASSANDRA-6531)
  * Validate CF existence on execution for prepared statement (CASSANDRA-6535)



[1/3] git commit: update changes

2014-01-14 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-2.0 05f120991 -> 7514e61b4
  refs/heads/trunk 54f728c77 -> 312286772


update changes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7514e61b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7514e61b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7514e61b

Branch: refs/heads/cassandra-2.0
Commit: 7514e61b48e9456cf6591abaf6dbf17b52217883
Parents: 05f1209
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:27:19 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:27:19 2014 -0600

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7514e61b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ef2df51..2bbf809 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,8 @@
  * Wait for gossip to settle before accepting client connections 
(CASSANDRA-4288)
  * Delete unfinished compaction incrementally (CASSANDRA-6086)
  * Allow specifying custom secondary index options in CQL3 (CASSANDRA-6480)
+ * Improve replica pinning for cache efficiency in DES (CASSANDRA-6485)
+ * Fix LOCAL_SERIAL from thrift (CASSANDRA-6584)
 Merged from 1.2:
  * fsync compression metadata (CASSANDRA-6531)
  * Validate CF existence on execution for prepared statement (CASSANDRA-6535)



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-14 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31228677
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31228677
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31228677

Branch: refs/heads/trunk
Commit: 3122867720d9075792cba31a5ef8242fdca58527
Parents: 54f728c 7514e61
Author: Brandon Williams brandonwilli...@apache.org
Authored: Tue Jan 14 21:27:26 2014 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Tue Jan 14 21:27:26 2014 -0600

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31228677/CHANGES.txt
--



[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2014-01-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871641#comment-13871641
 ] 

Jonathan Ellis commented on CASSANDRA-5549:
---

Pushed more refactorage to my branch.

I have a nagging feeling that OpOrdering could be done with two classes instead 
of three, merging Barrier and Ordered:

{code}
public void consume()
{
    SharedState state = this.state;
    state.setReplacement(new State());
    state.doSomethingToPrepareForBarrier();

    state.opGroup = ordering.currentActiveGroup();
    state.opGroup.expire();
    state.opGroup.await();

    this.state = state.getReplacement();
    state.doSomethingWithExclusiveAccess();
}

public void produce()
{
    Group opGroup = ordering.start();
    try
    {
        state.doProduceWork();
    }
    finally
    {
        opGroup.finishOne();
    }
}
{code}

(We could still provide an accepts() method for the benefit of getMemtableFor, 
but I don't see that requiring a 3rd class either.)
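For illustration, a minimal two-class sketch along those lines might look like this (class and method names are hypothetical, not the actual OpOrder implementation; this only shows the producer-group/barrier shape):

```java
import java.util.concurrent.atomic.AtomicInteger;

// One generation of in-flight operations. Producers join it; a consumer
// expires it and waits for the members to drain.
final class Group
{
    final AtomicInteger running = new AtomicInteger();
    volatile boolean expired;

    boolean start()                    // producer tries to join this group
    {
        if (expired) return false;
        running.incrementAndGet();
        if (expired) { running.decrementAndGet(); return false; }
        return true;
    }

    void finishOne()                   // producer leaves the group
    {
        if (running.decrementAndGet() == 0 && expired)
            synchronized (this) { notifyAll(); }
    }

    synchronized void await()          // consumer blocks until drained
    {
        boolean interrupted = false;
        while (running.get() > 0)
        {
            try { wait(); } catch (InterruptedException e) { interrupted = true; }
        }
        if (interrupted) Thread.currentThread().interrupt();
    }
}

// Hands out the current group to producers; the consumer's barrier is
// "swap in a fresh group, expire the old one, await it".
final class Ordering
{
    private volatile Group current = new Group();

    Group start()                      // called from produce()
    {
        while (true)
        {
            Group g = current;
            if (g.start()) return g;   // retry if we raced an expire
        }
    }

    Group expireCurrent()              // called from consume()
    {
        Group g = current;
        current = new Group();         // new ops land in the new group
        g.expired = true;
        return g;                      // caller then g.await()s
    }
}
```

The point of the sketch is that the "barrier" is nothing more than the expired old group itself, which is what merging Barrier and Ordered amounts to.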

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6504) counters++

2014-01-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871690#comment-13871690
 ] 

Aleksey Yeschenko commented on CASSANDRA-6504:
--

Pushed v2 to https://github.com/iamaleksey/cassandra/commits/6504-v2

Just the difference between (trunk with merge in 6505-v2) and 6504-v2: 
https://github.com/iamaleksey/cassandra/compare/6505-trunk...6504-v2

I am writing more unit tests now (for counter cache and counter mutation), but 
that shouldn't be blocking the review (the tests will be there before 6504 gets 
properly committed).

bq. I don't think we need the CounterId renewal stuff on cleanup anymore. So 1) 
we can remove it from Cleanup and remove a bunch of stuff from CounterId

Done. We *could* also move the counter id to system.local, since we don't care 
about history anymore, but I'm keeping it as is for now.

bq. 2) this means a node shouldn't change its counterId at all anymore. So, 
since the counter cache stores only old local shards, we can skip storing the 
counterId in each cache key (we'd want to save the counterId at the start of 
the cache file or something just to assert it hasn't changed at reload time 
just in case but that should be enough).

True. Done. Also moved counter cache loading to SS.initServer(), after the 
potential counter id renewal (the only place where it can happen) and commit 
log replay.

Minor changes not originating from the review:
- CFPropDefs (CQL2 and CQL3) to inline KW_REPLICATEONWRITE for obsoleteKeywords
- CassandraServer.doInsert() to use MIN(mutations' timeouts) instead of MAX
- addition of the missing counter cache nodetool stuff

CounterContext stuff from v1 of 6505 that didn't get into 6505-v2, but did get 
into the 6504-v2 branch:
- reuse of writeElement() for copyTo() - will go away with 6506
- changes to diff() - will go away with 6506
- inlining of IContext

Not changed, or done differently:

bq. Nit: in the yaml, it'd be more consistent to keep counter_cache_size_in_mb 
commented out to mean default (rather than adding it but with an empty 
value). We could then simplify the comment to something like Default to 
min(2.5% of the heap, 50MB). Set to 0 to disable.

It's blank, but not commented out, for consistency with key_cache_size_in_mb 
and row_cache_size_in_mb. We should either comment out all of them or leave 
them all as is.

bq. Given that it's a rather hot path, might be worth using a 2 values long[] 
instead of a PairLong, Long?

Created a named class instead - ClockAndCount - that's also reused by the 
counter cache (in place of the old CounterCacheEntry). Also switched 
currentValues from an ArrayList to an array, since generics are no longer 
involved.
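For illustration, such a value class might look like the following (a sketch only; the fields and factory are assumptions, and the actual ClockAndCount in the patch may differ):

```java
// Illustrative immutable holder for a counter shard's logical clock and
// count; a dedicated class avoids Pair<Long, Long> boxing on the hot path.
final class ClockAndCount
{
    final long clock;
    final long count;

    private ClockAndCount(long clock, long count)
    {
        this.clock = clock;
        this.count = count;
    }

    static ClockAndCount create(long clock, long count)
    {
        return new ClockAndCount(clock, count);
    }

    @Override
    public boolean equals(Object o)
    {
        if (!(o instanceof ClockAndCount)) return false;
        ClockAndCount that = (ClockAndCount) o;
        return clock == that.clock && count == that.count;
    }

    @Override
    public int hashCode()
    {
        return Long.hashCode(clock) * 31 + Long.hashCode(count);
    }

    @Override
    public String toString()
    {
        return "ClockAndCount(" + clock + "," + count + ")";
    }
}
```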

All the other issues and nits have been fixed. Sorry for doing it in a rebase. 
If v3 ever happens, it's going to be a separate commit on top of the v2 branch.



 counters++
 --

 Key: CASSANDRA-6504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6504
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.1


 Continuing CASSANDRA-4775 here.
 We are changing counter write path to explicitly 
 lock-read-modify-unlock-replicate, thus getting rid of the previously used 
 'local' (deltas) and 'remote' shards distinction. Unfortunately, we can't 
 simply start using 'remote' shards exclusively, since shard merge rules 
 prioritize the 'local' shards. Which is why we are introducing the third 
 shard type - 'global', the only shard type to be used in 2.1+.
 The updated merge rules are going to look like this:
 global + global = keep the shard with the highest logical clock
 global + local or remote = keep the global one
 local + local = sum counts (and logical clock)
 local + remote = keep the local one
 remote + remote = keep the shard with highest logical clock
 This is required for backward compatibility with pre-2.1 counters. To make 
 2.0-2.1 live upgrade possible, 'global' shard merge logic will have to be 
 back ported to 2.0. 2.0 will not produce them, but will be able to understand 
 the global shards coming from the 2.1 nodes during the live upgrade. See 
 CASSANDRA-6505.
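A compact sketch of the merge rules listed above (the enum, fields, and method names are illustrative, not Cassandra's actual CounterContext code):

```java
// Shard types as described in the issue; GLOBAL is the only type
// produced in 2.1+.
enum ShardType { GLOBAL, LOCAL, REMOTE }

final class Shard
{
    final ShardType type;
    final long clock;   // logical clock
    final long count;

    Shard(ShardType type, long clock, long count)
    {
        this.type = type;
        this.clock = clock;
        this.count = count;
    }
}

final class ShardMerge
{
    static Shard merge(Shard a, Shard b)
    {
        // global + local or remote = keep the global one
        if (a.type == ShardType.GLOBAL && b.type != ShardType.GLOBAL) return a;
        if (b.type == ShardType.GLOBAL && a.type != ShardType.GLOBAL) return b;
        // global + global, remote + remote = keep the highest logical clock
        if (a.type == b.type && a.type != ShardType.LOCAL)
            return a.clock >= b.clock ? a : b;
        // local + local = sum counts (and logical clock)
        if (a.type == ShardType.LOCAL && b.type == ShardType.LOCAL)
            return new Shard(ShardType.LOCAL, a.clock + b.clock, a.count + b.count);
        // local + remote = keep the local one
        return a.type == ShardType.LOCAL ? a : b;
    }
}
```

Note how the global-wins rules are what make a 2.0/2.1 mixed cluster safe: a global shard written by a 2.1 node can never be clobbered by a pre-2.1 local or remote shard.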
 Other changes introduced in this issue:
 1. replicate_on_write is gone. From now on we only avoid replication at RF 1.
 2. REPLICATE_ON_WRITE stage is gone
 3. counter mutations are running in their own COUNTER_MUTATION stage now
 4. counter mutations have a separate counter_write_request_timeout setting
 5. mergeAndRemoveOldShards() code is gone, for now, until/unless a better 
 solution is found
 6. we only replicate the fresh global shard now, not the complete 
 (potentially quite large) counter context
 7. to help with concurrency and reduce lock contention, we cache node's 
 global shards in a new counter cache ({cf id, partition 

[jira] [Updated] (CASSANDRA-5357) Query cache / partition head cache

2014-01-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-5357:
---

Attachment: 0001-Cache-a-configurable-amount-of-columns.patch

Supports LIMIT queries; set rows_per_partition_to_cache to 'ALL' to get the 
old row-cache behavior.

 Query cache / partition head cache
 --

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1

 Attachments: 0001-Cache-a-configurable-amount-of-columns.patch


 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5357) Query cache / partition head cache

2014-01-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-5357:
---

Attachment: (was: 0001-Cache-a-configurable-amount-of-columns-v2.patch)

 Query cache / partition head cache
 --

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1

 Attachments: 0001-Cache-a-configurable-amount-of-columns.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5357) Query cache / partition head cache

2014-01-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-5357:
---

Attachment: (was: 0001-Cache-a-configurable-amount-of-columns.patch)

 Query cache / partition head cache
 --

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1

 Attachments: 0001-Cache-a-configurable-amount-of-columns.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)