[jira] [Commented] (CASSANDRA-2441) Cassandra crashes with segmentation fault on Debian 5.0 and Ubuntu 10.10

2012-10-15 Thread Manu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13475992#comment-13475992
 ] 

Manu Zhang commented on CASSANDRA-2441:
---

180k works fine with OpenJDK 1.7.0_07, so updating OpenJDK could be an option.

 Cassandra crashes with segmentation fault on Debian 5.0 and Ubuntu 10.10
 

 Key: CASSANDRA-2441
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2441
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8 beta 1
 Environment: Both servers have identical hardware configuration: 
 Quad-Core AMD Opteron(tm) Processor 2374 HE, 4 GB RAM (rackspace servers)
 Java version 1.6.0_20
 OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1)
 OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
Reporter: Pavel Yaskevich
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 0.8 beta 1

 Attachments: 2441.txt, 2441.txt, jamm-0.2.1.jar


 Last working commit is c8d1984bf17cab58f40069e522d074c7b0077bc1 (merge from 
 0.7), branch: trunk.
 What I did was clone git://git.apache.org/cassandra.git and `git reset` to 
 each commit, running `ant clean && ant && ./bin/cassandra -f`, until I got 
 cassandra to start.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-10-15 Thread Daniel Doubleday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13475640#comment-13475640
 ] 

Daniel Doubleday edited comment on CASSANDRA-2864 at 10/15/12 6:55 AM:
---

Ok - next attempt:

Basic idea is optimistic locking. The DataTracker.View gets a generation 
number. The cache-miss read compares the current generation number with the one 
the read was created with; if they don't match, it doesn't write the row to 
the cache. There is also some double-checking on read.

I think the getThroughCache method documents the idea well, so I'll paste it here:

{noformat}
private ColumnFamily getThroughCache(UUID cfId, QueryFilter filter, int gcBefore)
{
    assert isRowCacheEnabled()
        : String.format("Row cache is not enabled on column family [" + getColumnFamilyName() + "]");

    RowCacheKey key = new RowCacheKey(cfId, filter.key);

    CachedRow cachedRow = (CachedRow) CacheService.instance.rowCache.get(key);
    if (cachedRow != null)
    {
        if (cachedRow.isValid())
        {
            RowCacheCollationController collationController =
                new RowCacheCollationController(this, memtables(), cachedRow, filter, gcBefore);
            ColumnFamily returnCF = collationController.getColumnFamily();
            if (!metadata.getDefaultValidator().isCommutative()
                || collationController.getView().generation == data.getView().generation)
                return returnCF;
            else
                return getIgnoreCache(filter, gcBefore);
        }
        else
            return getIgnoreCache(filter, gcBefore);
    }
    else
    {
        // for cache = false: we don't cache the cf itself
        CollationController controller = collateTopLevelColumns(
            QueryFilter.getIdentityFilter(filter.key, new QueryPath(columnFamily)), gcBefore, false);
        ColumnFamily cf = controller.getColumnFamily();
        if (cf != null)
        {
            cachedRow = CachedRow.serialize(cf);
            if (controller.getView().generation == data.getView().generation)
            {
                // we can try to set the row in the cache, but if mergeRowCache runs
                // before the putIfAbsent it won't see the row and we'll lose the update
                boolean setInCache = CacheService.instance.rowCache.putIfAbsent(key, cachedRow);
                if (setInCache)
                {
                    // before flush, switchMemtable is called, which increments the view
                    // generation; so only when the generation re-check is ok can we
                    // mark the cached row as valid
                    if (controller.getView().generation == data.getView().generation)
                        cachedRow.setValid(true);
                    else
                        CacheService.instance.rowCache.remove(key);
                }
            }
            return filterColumnFamily(cf, filter, gcBefore);
        }

        return null;
    }
}
{noformat}
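The generation trick above can be reduced to a minimal standalone sketch (a hypothetical class, not Cassandra code): a cache fill is published only if no flush bumped the generation between starting the read and writing the cache entry.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Sketch of the optimistic-locking idea: a computed value is only published
// to the cache if the data "generation" did not change while it was being
// computed. Class and method names are illustrative.
class GenerationCache<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final AtomicLong generation = new AtomicLong();

    // called by the flush path (cf. switchMemtable incrementing the view generation)
    void bumpGeneration() { generation.incrementAndGet(); }

    V get(K key) { return cache.get(key); }

    // compute the value, but only cache it if no flush intervened
    V getOrLoad(K key, Function<K, V> loader) {
        V cached = cache.get(key);
        if (cached != null)
            return cached;
        long gen = generation.get();       // remember generation before the read
        V loaded = loader.apply(key);
        if (generation.get() == gen)       // re-check: only publish if unchanged
            cache.putIfAbsent(key, loaded);
        return loaded;
    }
}
```

A load that races with a generation bump still returns the freshly computed value; it just skips the cache write, which is exactly the "don't write the row to the cache" behaviour described above.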

I created a patch based on your branch. -I *think* that it would also be safe 
to call getThroughCache for counters now.- I haven't done any testing so far, 
but I wanted to get your opinion first on whether this could work.

EDIT: Scratch the counters thing. Doesn't take flushing memtables into account 
yet.


[jira] [Updated] (CASSANDRA-4783) AE in cql3 select

2012-10-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4783:


Attachment: 0002-Fix-mixing-list-set-operation-and-regular-updates.txt
0001-Fix-validation-of-IN-queries.txt

The reason for the AssertionError in select is that we were not validating 
IN queries correctly. Basically, we don't yet support the kind of IN query 
that was attempted. We certainly should, and now that we can do multi-slice 
queries we can, but that's on the todo list. So I'm attaching a patch that 
just fixes the validation for now; let's leave lifting the limitation to 
another ticket.

Ian's assertion error is unrelated but fairly easy to fix, so I'm attaching a 
second patch for that too (not sure it's worth the trouble of spawning a 
separate ticket, but we can if someone prefers it).

 AE in cql3 select
 -

 Key: CASSANDRA-4783
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4783
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Fix-validation-of-IN-queries.txt, 
 0002-Fix-mixing-list-set-operation-and-regular-updates.txt


 Caused by {{select * from foo where key='blah' and column in (...)}}
 {noformat}
 ERROR 18:35:46,169 Exception in thread Thread[Thrift:11,5,main]
 java.lang.AssertionError
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:443)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:312)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:200)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:125)
 at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:130)
 at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:138)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1658)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3721)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3709)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:196)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Causes cqlsh to hang forever.



[jira] [Updated] (CASSANDRA-4786) NPE in migration stage after creating an index

2012-10-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4786:


Attachment: 4786.txt

So the NPE is because when we switch the memtable during reload, there can be 
a race where some other thread flushes the current memtable first, and thus 
maybeSwitchMemtable returns null.

This can be fixed by storing the comparator at the time a memtable is created 
and, when we try to change the memtable in reload, switching the memtable 
until we know it has the right comparator (this also has the advantage that we 
won't switch the memtable unless there has actually been a comparator change). 
Patch attached to implement that.

Note that the patch also switches all the non-final variables in CFMetaData to 
volatile, as they are definitely accessed from multiple threads.
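A minimal sketch of that retry loop, with made-up names (the actual change is in the attached 4786.txt):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the fix: each memtable remembers the comparator it was created
// with, and reload() keeps switching until the live memtable's comparator
// matches the current schema. Concurrent flushes can no longer leave reload()
// holding a null memtable. All names are illustrative, not the real patch.
class MemtableSwitchSketch {
    static class Memtable {
        final String comparator;                  // stored at creation time
        Memtable(String comparator) { this.comparator = comparator; }
    }

    final AtomicReference<Memtable> current;
    volatile String schemaComparator;             // non-final metadata fields become volatile

    MemtableSwitchSketch(String comparator) {
        this.schemaComparator = comparator;
        this.current = new AtomicReference<>(new Memtable(comparator));
    }

    // switch only while the live memtable still has a stale comparator;
    // no comparator change means no switch at all
    void reload() {
        while (!current.get().comparator.equals(schemaComparator))
            current.compareAndSet(current.get(), new Memtable(schemaComparator));
    }
}
```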

 NPE in migration stage after creating an index
 --

 Key: CASSANDRA-4786
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4786
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Pavel Yaskevich
 Fix For: 1.2.0 beta 2

 Attachments: 4786.txt


 The dtests are generating this error after trying to create an index in cql2:
 {noformat}
 ERROR [MigrationStage:1] 2012-10-09 20:54:12,796 CassandraDaemon.java (line 
 132) Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.reload(ColumnFamilyStore.java:162)
 at 
 org.apache.cassandra.db.DefsTable.updateColumnFamily(DefsTable.java:549)
 at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:479)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:344)
 at 
 org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:256)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 ERROR [Thrift:1] 2012-10-09 20:54:12,797 CustomTThreadPoolServer.java (line 
 214) Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:348)
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:238)
 at 
 org.apache.cassandra.service.MigrationManager.announceColumnFamilyUpdate(MigrationManager.java:209)
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:714)
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:816)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1656)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3721)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3709)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:196)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:344)
 ... 13 more
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.reload(ColumnFamilyStore.java:162)
 at 
 org.apache.cassandra.db.DefsTable.updateColumnFamily(DefsTable.java:549)
 at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:479)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:344)
 at 
 org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:256)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 ... 3 more
 {noformat}


[jira] [Reopened] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-15 Thread debadatta das (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

debadatta das reopened CASSANDRA-4794:
--


Issue is not resolved yet.

 cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException
 ---

 Key: CASSANDRA-4794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4794
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0 beta 1
 Environment: C++
Reporter: debadatta das
 Attachments: sample_AtomicBatchMutate.cpp


 Hi,
 We have installed cassandra 1.2.0 beta with thrift 0.7.0. We are using the 
 C++ interface. When we use the batch_mutate API, it works fine. But when we 
 use the new atomic_batch_mutate API with the same parameters as batch_mutate, 
 it fails with org::apache::cassandra::TimedOutException, what(): Default 
 TException. We get the same TException error even after increasing the 
 send/receive timeout values of TSocket to 15 seconds or more.
 Details:
 cassandra ring:
 cassandra ring with a single node
 consistency level parameter to atomic_batch_mutate:
 ConsistencyLevel::ONE
 Thrift version:
 same results with thrift 0.5.0 and thrift 0.7.0.
 thrift 0.8.0 seems unsupported with cassandra 1.2.0; it gives a compilation 
 error for the cpp interface build.
 We are calling atomic_batch_mutate() with the same parameters as batch_mutate:
 cassclient.atomic_batch_mutate(outermap1, ConsistencyLevel::ONE);
 where outermap1 is
 map<string, map<string, vector<Mutation> > > outermap1;
 Please point out if anything is missing while using atomic_batch_mutate, or 
 the reason behind the failure.
 The logs in cassandra system.log we get during atomic_batch_mutate failure 
 are:
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,604 MessagingService.java (line 
 800) 1 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,606 StatusLogger.java (line 53) 
 Pool Name Active Pending Blocked
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,607 StatusLogger.java (line 68) 
 ReadStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 RequestResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReadRepairStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 MutationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReplicateOnWriteStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 GossipStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 AntiEntropyStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MigrationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 StreamStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MemtablePostFlusher 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 FlushWriter 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 MiscStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 commitlog_archiver 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 InternalResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 73) 
 CompactionManager 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 85) 
 MessagingService n/a 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 95) 
 Cache Type Size Capacity KeysToSave Provider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 96) 
 KeyCache 227 74448896 all
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 102) 
 RowCache 0 0 all org.apache.cassandra.cache.SerializingCacheProvider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 109) 
 ColumnFamily Memtable ops,data
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 KeyspaceTest.CF_Test 1,71
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.local 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.peers 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.batchlog 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.NodeIdInfo 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.LocationInfo 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.Schema 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,613 

[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-15 Thread debadatta das (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476069#comment-13476069
 ] 

debadatta das commented on CASSANDRA-4794:
--

Messages don't get dropped for other operations. As I mentioned earlier, 
batch_mutate works fine with the same request, whereas atomic_batch_mutate 
fails with the same parameters. I don't know much about cqlsh or whether we 
can call the atomic_batch_mutate API through it; I will look into it and see 
if the issue can be reproduced in cqlsh.

Regards,
Debadatta
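For what it's worth, in 1.2 a CQL3 BATCH statement goes through the atomic batchlog by default, so a cqlsh reproduction could look roughly like this (table, column, and key names here are made up):

```sql
BEGIN BATCH
  UPDATE CF_Test SET col = 'v1' WHERE key = 'k1';
  UPDATE CF_Test SET col = 'v2' WHERE key = 'k2';
APPLY BATCH;
```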


[jira] [Created] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread sunjian (JIRA)
sunjian created CASSANDRA-4805:
--

 Summary: live update compaction strategy destroy counter column 
family 
 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian


1. In a running cassandra cluster with 5 nodes
2. CLI: update column family user_stats (a counter column family) with 
compaction_strategy='LeveledCompaction'
3. nodetool -h host_ip compact


result:

can't INCR/DECR the counter column any more, but reads are OK.





counter column family definition:

String sql = "CREATE TABLE " + this.columnFamilyEnum.getColumnFamilyName() + " (" +
    COL_UID + " bigint, " +
    COL_COUNTER_TYPE + " text, " +
    COL_COUNTER_FOR_WHAT + " text, " +
    COL_COUNTER_VALUE + " counter, " +
    "PRIMARY KEY(" +
    COL_UID +
    ", " +
    COL_COUNTER_TYPE +
    ", " +
    COL_COUNTER_FOR_WHAT +
    ")) WITH read_repair_chance = 1.0 AND replicate_on_write = true";
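For reference, assuming the constants resolve to the names given in the issue's later update (user_id, counter_type, counter_for_what, counter_value), the concatenation yields roughly this CQL:

```sql
CREATE TABLE user_stats (
  user_id bigint,
  counter_type text,
  counter_for_what text,
  counter_value counter,
  PRIMARY KEY (user_id, counter_type, counter_for_what)
) WITH read_repair_chance = 1.0 AND replicate_on_write = true;
```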




[exception]

java.sql.SQLSyntaxErrorException: InvalidRequestException(why: Unknown 
identifier counter_value)
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:92)
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)




[jira] [Updated] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread sunjian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjian updated CASSANDRA-4805:
---

Description: 
1. In a running cassandra cluster with 5 nodes
2. CLI: update column family user_stats (a counter column family) with 
compaction_strategy='LeveledCompaction'
3. nodetool -h host_ip compact


result:

can't INCR/DECR the counter column any more, but reads are OK.





counter column family definition :

String sql = "CREATE TABLE user_stats (" +
    "user_id bigint, " +
    "counter_type text, " +
    "counter_for_what text, " +
    "counter_value counter, " +
    "PRIMARY KEY(user_id, counter_type, counter_for_what)" +
    ") WITH read_repair_chance = 1.0 AND replicate_on_write = true";




[exception]

java.sql.SQLSyntaxErrorException: InvalidRequestException(why: Unknown 
identifier counter_value)
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:92)
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)



 live update compaction strategy destroy counter column family 
 --

 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian




[jira] [Updated] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-10-15 Thread Daniel Doubleday (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Doubleday updated CASSANDRA-2864:


Attachment: (was: optimistic-locking.patch)

 Alternative Row Cache Implementation
 

 Key: CASSANDRA-2864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2864
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
  Labels: cache
 Fix For: 1.3

 Attachments: 0001-CASSANDRA-2864-w-out-direct-counter-support.patch, 
 rowcache-with-snaptree-sketch.patch


 We have been working on an alternative implementation to the existing row 
 cache(s).
 We have 2 main goals:
 - Decrease memory: get more rows in the cache without suffering a huge 
 performance penalty
 - Reduce GC pressure
 This sounds a lot like we should be using the new serializing cache in 0.8. 
 Unfortunately our workload consists of loads of updates, which would 
 invalidate the cache all the time.
 *Note: Updated Patch Description (please check history if you're interested 
 in where this was coming from)*
 h3. Rough Idea
 - Keep the serialized row (ByteBuffer) in memory, representing the unfiltered 
 but collated columns of all sstables, but not memtable columns
 - Writes don't affect the cache at all; they go only to the memtables
 - Reads collect columns from the memtables and the row cache
 - The serialized row is re-written (merged) with memtables when they are flushed
 h3. Some Implementation Details
 h4. Reads
 - Basically the read logic differs from regular uncached reads only in that a 
 special CollationController deserializes columns from in-memory bytes
 - In the first version of this cache the serialized in-memory format was the 
 same as the on-disk format, but tests showed that performance suffered because 
 a lot of unnecessary deserialization takes place and column seeks are O( n ) 
 within one block
 - To improve on that, a different in-memory format is used. It splits the 
 length/meta info and the data of columns so that the names can be binary 
 searched. 
 {noformat}
 ===
 Header (24)
 ===
 MaxTimestamp:long  
 LocalDeletionTime:   int   
 MarkedForDeleteAt:   long  
 NumColumns:  int   
 ===
 Column Index (num cols * 12)  
 ===
 NameOffset:  int   
 ValueOffset: int   
 ValueLength: int   
 ===
 Column Data
 ===
 Name:byte[]
 Value:   byte[]
 SerializationFlags:  byte  
 Misc:? 
 Timestamp:   long  
 ---
 Misc Counter Column
 ---
 TSOfLastDelete:  long  
 ---
 Misc Expiring Column   
 ---
 TimeToLive:  int   
 LocalDeletionTime:   int   
 ===
 {noformat}
 - These rows are read by 2 new column iterators which correspond to 
 SSTableNamesIterator and SSTableSliceIterator. During filtering, only columns 
 that actually match are constructed. The searching/skipping is performed on 
 the raw ByteBuffer and does not create any objects.
 - A special CollationController is used to access and collate via the cache 
 and said new iterators. It also supports skipping the cached row by max 
 update timestamp.
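To illustrate why the split index makes name lookups cheap, here is a minimal sketch (illustrative, not the patch code) of binary searching a column name directly on the raw ByteBuffer using the fixed-width index entries from the layout above. It assumes each value immediately follows its name in the data section, so the name length is ValueOffset minus NameOffset.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// With fixed-width (NameOffset, ValueOffset, ValueLength) index entries, a
// column name can be binary searched on the raw buffer without deserializing
// any column objects. Constants mirror the layout sketch; names are made up.
class CachedRowIndex {
    static final int HEADER = 24;   // MaxTimestamp + LocalDeletionTime + MarkedForDeleteAt + NumColumns
    static final int ENTRY = 12;    // NameOffset + ValueOffset + ValueLength, all ints

    // binary search the column index for a name; returns the entry index or -1
    static int findColumn(ByteBuffer row, String name) {
        byte[] target = name.getBytes(StandardCharsets.UTF_8);
        int numColumns = row.getInt(HEADER - 4);   // NumColumns is the last header field
        int lo = 0, hi = numColumns - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int entry = HEADER + mid * ENTRY;
            int nameOffset = row.getInt(entry);
            int nameLen = row.getInt(entry + 4) - nameOffset; // value follows name
            int cmp = compare(row, nameOffset, nameLen, target);
            if (cmp == 0) return mid;
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    // unsigned byte-wise comparison of a buffer slice against a target name
    static int compare(ByteBuffer buf, int off, int len, byte[] target) {
        int n = Math.min(len, target.length);
        for (int i = 0; i < n; i++) {
            int d = (buf.get(off + i) & 0xff) - (target[i] & 0xff);
            if (d != 0) return d;
        }
        return len - target.length;
    }
}
```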
 h4. Writes
 - Writes don't update or invalidate the cache.
 - In CFS.replaceFlushed, memtables are merged before the data view is 
 switched. I fear that this breaks counters because they would be overcounted, 
 but my understanding of counters is somewhere between weak and non-existent. 
 I guess that counters, if one wants to support them here, would need an 
 additional unique local identifier in memory and in the serialized cache to 
 be able to filter duplicates, or something like that.
 {noformat}
 void replaceFlushed(Memtable memtable, SSTableReader sstable)
 {
     if (sstCache.getCapacity() > 0) {
         mergeSSTCache(memtable);
     }
     data.replaceFlushed(memtable, sstable);
     CompactionManager.instance.submitBackground(this);
 }
 {noformat}
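mergeSSTCache re-serializes the cached row together with the flushed memtable's columns. Reduced to plain collections, that merge is a name-ordered reconciliation with last-write-wins on timestamp — a hypothetical illustration of the idea, not the patch's code, and exactly the step that breaks counters (which would need their deltas summed rather than replaced):

```java
import java.util.TreeMap;

public class RowMerge
{
    record Col(long timestamp, String value) {}

    /** Merge flushed memtable columns into the cached columns; newest timestamp wins. */
    static TreeMap<String, Col> merge(TreeMap<String, Col> cached, TreeMap<String, Col> memtable)
    {
        TreeMap<String, Col> out = new TreeMap<>(cached);
        // For counter columns this last-write-wins reconcile is wrong:
        // re-applying the same delta would overcount, as noted above.
        memtable.forEach((name, col) ->
            out.merge(name, col, (a, b) -> a.timestamp() >= b.timestamp() ? a : b));
        return out;
    }
}
```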
 Test Results: See comments below

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-10-15 Thread Daniel Doubleday (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Doubleday updated CASSANDRA-2864:


Attachment: optimistic-locking.patch

Second shot ...

This one special-cases cached reads for counters.

The relevant change in CFS looks like this:

{noformat}

ViewFragment viewFragment = memtables();

// can't use the cache for counters when the key is in one of the flushing memtables
boolean commutative = metadata.getDefaultValidator().isCommutative();
if (commutative && viewFragment.keyIsFlushing(filter.key))
    return getIgnoreCache(filter, gcBefore);

RowCacheCollationController collationController =
    new RowCacheCollationController(this, viewFragment, cachedRow, filter, gcBefore);
ColumnFamily returnCF = collationController.getColumnFamily();

// for counters we must make sure that flushing didn't start during this read
if (!commutative || collationController.getView().generation == data.getView().generation)
    return returnCF;
else
    return getIgnoreCache(filter, gcBefore);

{noformat}

One issue is that cache hit ratios will not reflect the edge cases.
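The pattern here is plain optimistic concurrency: capture the view's generation, do the cached read, and fall back to the safe path if a flush bumped the generation in the meantime. Stripped to its essentials (names are hypothetical, not the attached patch's code):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class OptimisticRead
{
    // Incremented whenever a flush starts, i.e. whenever the data view switches.
    final AtomicLong generation = new AtomicLong();

    /** Serve from the cache, but redo via the slow path if a flush raced this read. */
    <T> T read(Supplier<T> cachedPath, Supplier<T> slowPath)
    {
        long observed = generation.get();
        T result = cachedPath.get();
        // Same generation: no flush started during the read, so the result is safe.
        return generation.get() == observed ? result : slowPath.get();
    }
}
```

The cost of a false positive is only one extra slow-path read, which is why hit-ratio accounting misses these edge cases.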

 Alternative Row Cache Implementation
 

 Key: CASSANDRA-2864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2864
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
  Labels: cache
 Fix For: 1.3

 Attachments: 0001-CASSANDRA-2864-w-out-direct-counter-support.patch, 
 optimistic-locking.patch, rowcache-with-snaptree-sketch.patch


 we have been working on an alternative implementation to the existing row 
 cache(s).
 We have 2 main goals:
 - Decrease memory - get more rows in the cache without suffering a huge 
 performance penalty
 - Reduce GC pressure
 This sounds a lot like we should be using the new serializing cache in 0.8. 
 Unfortunately our workload consists of loads of updates, which would 
 invalidate the cache all the time.
 *Note: Updated patch description (please check the history if you're 
 interested in where this came from)*
 h3. Rough Idea
 - Keep a serialized row (ByteBuffer) in memory that represents the unfiltered 
 but collated columns of all SSTables, excluding memtable columns
 - Writes don't affect the cache at all; they go only to the memtables
 - Reads collect columns from the memtables and the row cache
 - The serialized row is re-written (merged with the memtables) on flush
 h3. Some Implementation Details
 h4. Reads
 - The read logic differs from regular uncached reads only in that a special 
 CollationController deserializes columns from the in-memory bytes
 - In the first version of this cache the serialized in-memory format was the 
 same as the on-disk format, but tests showed that performance suffered 
 because a lot of unnecessary deserialization took place and column seeks were 
 O(n) within one block
 - To improve on that, a different in-memory format was introduced. It splits 
 the length metadata and the data of columns so that the names can be binary 
 searched. 
 {noformat}
 ===
 Header (24)
 ===
 MaxTimestamp:long  
 LocalDeletionTime:   int   
 MarkedForDeleteAt:   long  
 NumColumns:  int   
 ===
 Column Index (num cols * 12)  
 ===
 NameOffset:  int   
 ValueOffset: int   
 ValueLength: int   
 ===
 Column Data
 ===
 Name:byte[]
 Value:   byte[]
 SerializationFlags:  byte  
 Misc:? 
 Timestamp:   long  
 ---
 Misc Counter Column
 ---
 TSOfLastDelete:  long  
 ---
 Misc Expiring Column   
 ---
 TimeToLive:  int   
 LocalDeletionTime:   int   
 ===
 {noformat}
 - These rows are read by two new column iterators which correspond to 
 SSTableNamesIterator and SSTableSliceIterator. During filtering, only columns 
 that actually match are constructed. The searching / skipping is performed on 
 the raw ByteBuffer and does not create any objects.
 - A special CollationController is used to access and collate via the cache 
 and said new iterators. It also supports skipping the cached row by max 
 update timestamp
 h4. Writes
 - Writes don't update or invalidate the cache.
 - In CFS.replaceFlushed, memtables are merged before the data view is 
 switched. I fear that this breaks counters because they would be overcounted, 
 but my understanding of counters is somewhere between weak and non-existent. 
 I guess that counters, if one wants to support them here, would 
 

[jira] [Comment Edited] (CASSANDRA-2864) Alternative Row Cache Implementation

2012-10-15 Thread Daniel Doubleday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476074#comment-13476074
 ] 

Daniel Doubleday edited comment on CASSANDRA-2864 at 10/15/12 10:41 AM:


Second shot ...

This one special-cases cached reads for counters.

The relevant change in CFS (cache hit case) looks like this:

{noformat}

ViewFragment viewFragment = memtables();

// can't use the cache for counters when the key is in one of the flushing memtables
boolean commutative = metadata.getDefaultValidator().isCommutative();
if (commutative && viewFragment.keyIsFlushing(filter.key))
    return getIgnoreCache(filter, gcBefore);

RowCacheCollationController collationController =
    new RowCacheCollationController(this, viewFragment, cachedRow, filter, gcBefore);
ColumnFamily returnCF = collationController.getColumnFamily();

// for counters we must make sure that flushing didn't start during this read
if (!commutative || collationController.getView().generation == data.getView().generation)
    return returnCF;
else
    return getIgnoreCache(filter, gcBefore);

{noformat}

One issue is that cache hit ratios will not reflect the edge cases.

  was (Author: doubleday):
Second shot ...

This one special-cases cached reads for counters.

The relevant change in CFS looks like this:

{noformat}

ViewFragment viewFragment = memtables();

// can't use the cache for counters when the key is in one of the flushing memtables
boolean commutative = metadata.getDefaultValidator().isCommutative();
if (commutative && viewFragment.keyIsFlushing(filter.key))
    return getIgnoreCache(filter, gcBefore);

RowCacheCollationController collationController =
    new RowCacheCollationController(this, viewFragment, cachedRow, filter, gcBefore);
ColumnFamily returnCF = collationController.getColumnFamily();

// for counters we must make sure that flushing didn't start during this read
if (!commutative || collationController.getView().generation == data.getView().generation)
    return returnCF;
else
    return getIgnoreCache(filter, gcBefore);

{noformat}

One issue is that cache hit ratios will not reflect the edge cases.
  
 Alternative Row Cache Implementation
 

 Key: CASSANDRA-2864
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2864
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Daniel Doubleday
Assignee: Daniel Doubleday
  Labels: cache
 Fix For: 1.3

 Attachments: 0001-CASSANDRA-2864-w-out-direct-counter-support.patch, 
 optimistic-locking.patch, rowcache-with-snaptree-sketch.patch


 we have been working on an alternative implementation to the existing row 
 cache(s).
 We have 2 main goals:
 - Decrease memory - get more rows in the cache without suffering a huge 
 performance penalty
 - Reduce GC pressure
 This sounds a lot like we should be using the new serializing cache in 0.8. 
 Unfortunately our workload consists of loads of updates, which would 
 invalidate the cache all the time.
 *Note: Updated patch description (please check the history if you're 
 interested in where this came from)*
 h3. Rough Idea
 - Keep a serialized row (ByteBuffer) in memory that represents the unfiltered 
 but collated columns of all SSTables, excluding memtable columns
 - Writes don't affect the cache at all; they go only to the memtables
 - Reads collect columns from the memtables and the row cache
 - The serialized row is re-written (merged with the memtables) on flush
 h3. Some Implementation Details
 h4. Reads
 - The read logic differs from regular uncached reads only in that a special 
 CollationController deserializes columns from the in-memory bytes
 - In the first version of this cache the serialized in-memory format was the 
 same as the on-disk format, but tests showed that performance suffered 
 because a lot of unnecessary deserialization took place and column seeks were 
 O(n) within one block
 - To improve on that, a different in-memory format was introduced. It splits 
 the length metadata and the data of columns so that the names can be binary 
 searched. 
 {noformat}
 ===
 Header (24)
 ===
 MaxTimestamp:long  
 LocalDeletionTime:   int   
 MarkedForDeleteAt:   long  
 NumColumns:  int   
 ===
 Column Index (num cols * 12)  
 ===
 NameOffset:  int   
 ValueOffset: int   
 ValueLength: int   
 ===
 Column Data
 ===
 Name:byte[]
 Value:   byte[]
 SerializationFlags:  byte  
 Misc:? 
 Timestamp:   long  
 ---

[jira] [Commented] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Marco Matarazzo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476079#comment-13476079
 ] 

Marco Matarazzo commented on CASSANDRA-4674:


Well, we would love to have it fixed.

 cqlsh COPY TO and COPY FROM don't work with cql3
 

 Key: CASSANDRA-4674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: cqlsh

 cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread sunjian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjian updated CASSANDRA-4805:
---

Description: 
1. in a running cassandra cluster with 5 nodes
2. CLI : update column family {user_stats (a counter column family)} with 
compaction_strategy='LeveledCompactionStrategy'
3. nodetool -h host_ip compact


result : 

can't INCR/DECR the counter column any more, but it's OK to read.





counter column family definition :

String sql = "CREATE TABLE user_stats ( " +
"user_id bigint , " +
"counter_type text , " +
"counter_for_what text , " +
"counter_value counter , " +
" PRIMARY KEY( " +
" user_id " +
" , " +
" counter_type " +
" ," +
" counter_for_what " +
" )) WITH read_repair_chance = 1.0 AND replicate_on_write=true ";




[exception]

java.sql.SQLSyntaxErrorException: InvalidRequestException(why:Unknown 
identifier counter_value) 
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.&lt;init&gt;(CassandraPreparedStatement.java:92)
 
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)
 


  was:
1. in a running cassandra cluster with 5 nodes
2. CLI : update column family {user_stats (a counter column family)} with 
compaction_strategy='LeveledCompaction'
3. nodetool -h host_ip compact


result : 

can't INCR/DECR the counter column any more, but it's OK to read.





counter column family definition :

String sql = "CREATE TABLE user_stats ( " +
"user_id bigint , " +
"counter_type text , " +
"counter_for_what text , " +
"counter_value counter , " +
" PRIMARY KEY( " +
" user_id " +
" , " +
" counter_type " +
" ," +
" counter_for_what " +
" )) WITH read_repair_chance = 1.0 AND replicate_on_write=true ";




[exception]

java.sql.SQLSyntaxErrorException: InvalidRequestException(why:Unknown 
identifier counter_value) 
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.&lt;init&gt;(CassandraPreparedStatement.java:92)
 
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)
 



 live update compaction strategy destroy counter column family 
 --

 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian

 1. in a running cassandra cluster with 5 nodes
 2. CLI : update column family {user_stats (a counter column family)} with 
 compaction_strategy='LeveledCompactionStrategy'
 3. nodetool -h host_ip compact
 result : 
 can't INCR/DECR the counter column any more, but it's OK to read.
 
 counter column family definition :
 String sql = "CREATE TABLE user_stats ( " +
 "user_id bigint , " +
 "counter_type text , " +
 "counter_for_what text , " +
 "counter_value counter , " +
 " PRIMARY KEY( " +
 " user_id " +
 " , " +
 " counter_type " +
 " ," +
 " counter_for_what " +
 " )) WITH read_repair_chance = 1.0 AND replicate_on_write=true ";
 [exception]
 java.sql.SQLSyntaxErrorException: InvalidRequestException(why:Unknown 
 identifier counter_value) 
   at 
 org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.&lt;init&gt;(CassandraPreparedStatement.java:92)
  
   at 
 org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4806) Consistency of Append/Prepend on Lists need to be improved or clarified

2012-10-15 Thread JIRA
Michaël Figuière created CASSANDRA-4806:
---

 Summary: Consistency of Append/Prepend on Lists need to be 
improved or clarified
 Key: CASSANDRA-4806
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4806
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière


Updates are idempotent in Cassandra, this rule makes it simple for developers 
or client libraries to deal with retries on error. So far the only exception 
was counters, and we worked around it saying they were meant to be used for 
analytics use cases.

Now with the List datatype to be added in Cassandra 1.2 we have a similar 
issue, as the Append and Prepend operations that can be applied to lists are 
not idempotent. The state of the list will be unknown whenever a timeout is 
received from the coordinator node saying that no acknowledgement could be 
received in time from the replicas, or when the connection with the 
coordinator node is broken while a client waits for an update request to be 
acknowledged.

Of course the client can issue a read request on this List in the rare cases 
when such an unknown state appears, but this is not really elegant and such a 
check doesn't come with any visibility or atomicity guarantees.

I can see 3 options:
* Remove Append and Prepend operations. But this is a pity as they're really 
useful.
* Make the behavior of these commands quasi-idempotent. I imagine that if we 
attach the list of timestamps and/or hashes of recent update requests to each 
List column stored in Cassandra we would be able to avoid applying duplicate 
updates. 
* Explicitly document these operations as potentially inconsistent under these 
particular conditions.
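The second option can be illustrated with a client-supplied request id: the server remembers the ids of recently applied updates per list and drops duplicates when a client retries. All names here are hypothetical; this is only a sketch of the quasi-idempotent idea, not a proposed Cassandra API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class DedupList<E>
{
    private final List<E> elements = new ArrayList<>();
    // In practice this set would be bounded and aged out, not kept forever.
    private final Set<UUID> appliedRequests = new HashSet<>();

    /** Append unless this request id was already applied (i.e. the client is retrying). */
    public synchronized boolean append(UUID requestId, E element)
    {
        if (!appliedRequests.add(requestId))
            return false; // duplicate retry: ignore
        elements.add(element);
        return true;
    }

    public synchronized List<E> snapshot()
    {
        return List.copyOf(elements);
    }
}
```

A client that times out simply retries with the same request id, and the append is applied at most once.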

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4740) Phantom TCP connections, failing hinted handoff

2012-10-15 Thread Mina Naguib (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476123#comment-13476123
 ] 

Mina Naguib commented on CASSANDRA-4740:


No clear picture yet, however I had the issue pop up again.

All nodes run the same java version ( 1.6.0_35-b10 ), however the phantom 
connection and timeouts in HH only appear on the node that's running kernel 
3.4.9.  Nodes running earlier kernels (2.6.39, 3.0, 3.1) haven't exhibited this.

Perhaps kernels 3.2 upwards (3.2 observed by John and Brandon, 3.4 by myself) 
have a bad interaction with the JVM.

I'm restarting that node with log4j.rootLogger set to TRACE to see if there's 
more info next time.

 Phantom TCP connections, failing hinted handoff
 ---

 Key: CASSANDRA-4740
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4740
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Linux 3.4.9, java 1.6.0_35-b10
Reporter: Mina Naguib
Priority: Minor
  Labels: connection, handoff, hinted, orphan, phantom, tcp, zombie
 Attachments: write_latency.png


 IP addresses in report anonymized:
 Had a server running cassandra (1.1.1.10) reboot ungracefully.  Reboot and 
 startup was successful and uneventful.  cassandra went back into service ok.
 From that point onwards however, several (but not all) machines in the 
 cassandra cluster started having difficulty with hinted handoff to that 
 machine.  This was despite nodetool ring showing Up across the board.
 Here's an example of an attempt, every 10 minutes, by a node (1.1.1.11) to 
 replay hints to the node that was rebooted:
 {code}
 INFO [HintedHandoff:1] 2012-10-01 11:07:23,293 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:17:23,312 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:27:23,335 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:37:23,357 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:37:33,358 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:37:33,359 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:47:23,412 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 {code}
 I started poking around, and discovered that several nodes held ESTABLISHED 
 TCP connections that didn't have a live endpoint on the rebooted node.  My 
 guess is they were live prior to the reboot, and after the reboot the nodes 
 still see them as live and unsuccessfully try to use them.
 Example, on the node that was rebooted:
 {code}
 .10 ~ # netstat -tn | grep 1.1.1.11
 tcp        0      0 1.1.1.10:7000     1.1.1.11:40960    ESTABLISHED
 tcp        0      0 1.1.1.10:34370    1.1.1.11:7000     ESTABLISHED
 tcp        0      0 1.1.1.10:45518    1.1.1.11:7000     ESTABLISHED
 {code}
 While on the node that's failing to hint to it:
 {code}
 .11 ~ # netstat -tn | grep 1.1.1.10
 tcp0  

[jira] [Commented] (CASSANDRA-4740) Phantom TCP connections, failing hinted handoff

2012-10-15 Thread Mina Naguib (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476138#comment-13476138
 ] 

Mina Naguib commented on CASSANDRA-4740:


Unfortunately logging level TRACE is too verbose.  On this node it produces 
1.7MB of logs per second.  I switched it back to INFO.

I discovered however that you don't need to restart cassandra for log level 
changes in log4j-server.properties to take effect.  Next time I see the problem 
I'll switch to TRACE and post back anything interesting I find.

 Phantom TCP connections, failing hinted handoff
 ---

 Key: CASSANDRA-4740
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4740
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Linux 3.4.9, java 1.6.0_35-b10
Reporter: Mina Naguib
Priority: Minor
  Labels: connection, handoff, hinted, orphan, phantom, tcp, zombie
 Attachments: write_latency.png


 IP addresses in report anonymized:
 Had a server running cassandra (1.1.1.10) reboot ungracefully.  Reboot and 
 startup was successful and uneventful.  cassandra went back into service ok.
 From that point onwards however, several (but not all) machines in the 
 cassandra cluster started having difficulty with hinted handoff to that 
 machine.  This was despite nodetool ring showing Up across the board.
 Here's an example of an attempt, every 10 minutes, by a node (1.1.1.11) to 
 replay hints to the node that was rebooted:
 {code}
 INFO [HintedHandoff:1] 2012-10-01 11:07:23,293 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:17:23,312 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:27:23,335 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:37:23,357 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:37:33,358 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:37:33,359 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:47:23,412 HintedHandOffManager.java 
 (line 294) Started hinted handoff for token: 
 122879743610338889583996386017027409691 with IP: /1.1.1.10
 INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java 
 (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
 INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java 
 (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
 {code}
 I started poking around, and discovered that several nodes held ESTABLISHED 
 TCP connections that didn't have a live endpoint on the rebooted node.  My 
 guess is they were live prior to the reboot, and after the reboot the nodes 
 still see them as live and unsuccessfully try to use them.
 Example, on the node that was rebooted:
 {code}
 .10 ~ # netstat -tn | grep 1.1.1.11
 tcp        0      0 1.1.1.10:7000     1.1.1.11:40960    ESTABLISHED
 tcp        0      0 1.1.1.10:34370    1.1.1.11:7000     ESTABLISHED
 tcp        0      0 1.1.1.10:45518    1.1.1.11:7000     ESTABLISHED
 {code}
 While on the node that's failing to hint to it:
 {code}
 .11 ~ # netstat -tn | grep 1.1.1.10
 tcp        0      0 1.1.1.11:7000     1.1.1.10:34370    ESTABLISHED
 tcp        0      0 1.1.1.11:7000     1.1.1.10:45518    ESTABLISHED
 tcp0  0 

[jira] [Updated] (CASSANDRA-4796) composite indexes don't always return results they should

2012-10-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4796:


Attachment: 4726.txt

Hmm, I think this is due to a bad merge or something along those lines. 
Basically we were using a column value instead of a key, because 
SelectStatement.buildBound() was used on keys but was (wrongfully) using 
columns internally instead.

Patch attached that changes this and makes buildBound static, to make it 
harder to make that kind of mistake again.

 composite indexes don't always return results they should
 -

 Key: CASSANDRA-4796
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4796
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 4726.txt


 composite_index_with_pk_test in the dtests is failing and it reproduces 
 manually.
 {noformat}
 cqlsh:foo> CREATE TABLE blogs ( blog_id int, time1 int, time2 int, 
 author text, content text, PRIMARY KEY (blog_id, time1, time2) ) ;
 cqlsh:foo> create index on blogs(author);
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (1, 0, 0, 'foo', 'bar1');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (1, 0, 1, 'foo', 'bar2');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (2, 1, 0, 'foo', 'baz');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (3, 0, 1, 'gux', 'qux');
 cqlsh:foo> SELECT blog_id, content FROM blogs WHERE time1 = 1 AND 
 author='foo';
 cqlsh:foo>
 {noformat}
 The expected result is:
 {noformat}
  blog_id | time1 | time2 | author | content
 -+---+---++-
2 | 1 | 0 |foo | baz
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476177#comment-13476177
 ] 

Jonathan Ellis commented on CASSANDRA-4734:
---

The most convincing argument for 4448 was the problems that CQL-level CL posed 
to PreparedStatements.  That's not an issue anymore.

Fundamentally I don't think I buy that per-CF is the right way to think about 
CL.  I can see an application-level default that gets modified based on the 
operation ("I want to make sure my user always gets read-my-writes behavior on 
this page") that isn't necessarily correlated with CL.

I really think the right thing to do here is leave it out until we have a 
better understanding of the use cases involved, because the risk of getting 
stuck with a misfeature is real.

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448.txt, 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement automatic retries policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.
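The prepared-statement point is easy to see from the client side: with CL at the protocol level, one prepared statement can be executed at several consistency levels. A toy sketch with a stand-in session object (a hypothetical API, not any real driver):

```java
import java.util.ArrayList;
import java.util.List;

public class ProtocolLevelCl
{
    enum ConsistencyLevel { ONE, QUORUM, ALL }

    record Execution(String cql, ConsistencyLevel cl) {}

    // Stand-in for a driver session: prepares once, records each execution.
    static class Session
    {
        final List<Execution> executed = new ArrayList<>();

        String prepare(String cql)
        {
            return cql; // one preparation, reused below
        }

        void execute(String prepared, ConsistencyLevel cl)
        {
            // The CL travels with the request, not inside the CQL text.
            executed.add(new Execution(prepared, cl));
        }
    }
}
```

With CL embedded in the language instead, the two executions below would require preparing two distinct statements.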

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4802:
--

Reviewer: brandon.williams
Assignee: Vijay

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial

 A regular startup completes successfully, but it has a confusing message at 
 the end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like Startup completed! 
 Now serving reads unless it actually has done one of the actions in the 
 error message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath



[jira] [Created] (CASSANDRA-4807) Compaction progress counts more than 100%

2012-10-15 Thread Omid Aladini (JIRA)
Omid Aladini created CASSANDRA-4807:
---

 Summary: Compaction progress counts more than 100%
 Key: CASSANDRA-4807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
Reporter: Omid Aladini
Priority: Minor


'nodetool compactionstats' compaction progress counts more than 100%:

{code}
pending tasks: 74
          compaction type   keyspace   column family   bytes compacted   bytes total   progress
               Validation        KSP             CF1       56192578305   84652768917     66.38%
               Compaction        KSP             CF2         162018591     119913592    135.11%
{code}
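The progress column is simply bytes compacted over the estimated bytes total. A small sketch (not Cassandra's actual code) of how the percentages in the table above arise, including values over 100% when the initial total estimate is too low:

```python
def compaction_progress(bytes_compacted, bytes_total):
    """Progress as a percentage of the estimated total, as nodetool prints it.

    bytes_total is an estimate taken when the task starts; if that estimate
    is too low, the reported progress climbs past 100% as the task keeps
    reading more bytes than were predicted.
    """
    if bytes_total <= 0:
        return 0.0
    return round(100.0 * bytes_compacted / bytes_total, 2)

# The two rows reported above:
# compaction_progress(56192578305, 84652768917) -> 66.38
# compaction_progress(162018591, 119913592)     -> 135.11
```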

I hadn't experienced this before 1.1.3. Is it due to changes in 1.1.4-1.1.6?



[jira] [Commented] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476200#comment-13476200
 ] 

Aleksey Yeschenko commented on CASSANDRA-4674:
--

[~marco.matarazzo] It's a new issue with a different cause. It affects cql3 
in trunk, not cql2.
I'm looking into it.

 cqlsh COPY TO and COPY FROM don't work with cql3
 

 Key: CASSANDRA-4674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: cqlsh

 cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.



[jira] [Reopened] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reopened CASSANDRA-4674:
--





[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-15 Thread debadatta das (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476211#comment-13476211
 ] 

debadatta das commented on CASSANDRA-4794:
--

I see that cqlsh is a query language. So we can't use cassandra APIs there.

 cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException
 ---

 Key: CASSANDRA-4794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4794
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0 beta 1
 Environment: C++
Reporter: debadatta das
 Attachments: sample_AtomicBatchMutate.cpp


 Hi,
 We have installed cassandra 1.2.0 beta with thrift 0.7.0. We are using the cpp 
 interface. When we use the batch_mutate API, it works fine. But when we use 
 the new atomic_batch_mutate API with the same parameters as batch_mutate, it 
 fails with org::apache::cassandra::TimedOutException, what(): Default 
 TException. We get the same TException error even after increasing the 
 Send/Recv timeout values of TSocket to 15 seconds or more.
 Details:
 cassandra ring:
 cassandra ring with a single node
 consistency level parameter to atomic_batch_mutate:
 ConsistencyLevel::ONE
 Thrift version:
 same results with thrift 0.5.0 and thrift 0.7.0.
 thrift 0.8.0 seems unsupported with cassandra 1.2.0. Gives a compilation error 
 for the cpp interface build.
 We are calling atomic_batch_mutate() with the same parameters as batch_mutate:
 cassclient.atomic_batch_mutate(outermap1, ConsistencyLevel::ONE);
 where outermap1 is
 map<string, map<string, vector<Mutation>>> outermap1;
 Please point out if anything is missing in our use of atomic_batch_mutate, or 
 the reason behind the failure.
 The logs in cassandra system.log we get during atomic_batch_mutate failure 
 are:
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,604 MessagingService.java (line 
 800) 1 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,606 StatusLogger.java (line 53) 
 Pool Name Active Pending Blocked
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,607 StatusLogger.java (line 68) 
 ReadStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 RequestResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReadRepairStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 MutationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReplicateOnWriteStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 GossipStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 AntiEntropyStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MigrationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 StreamStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MemtablePostFlusher 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 FlushWriter 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 MiscStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 commitlog_archiver 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 InternalResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 73) 
 CompactionManager 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 85) 
 MessagingService n/a 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 95) 
 Cache Type Size Capacity KeysToSave Provider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 96) 
 KeyCache 227 74448896 all
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 102) 
 RowCache 0 0 all org.apache.cassandra.cache.SerializingCacheProvider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 109) 
 ColumnFamily Memtable ops,data
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 KeyspaceTest.CF_Test 1,71
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.local 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.peers 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.batchlog 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.NodeIdInfo 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.LocationInfo 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 

[jira] [Commented] (CASSANDRA-4446) nodetool drain sometimes doesn't mark commitlog fully flushed

2012-10-15 Thread Omid Aladini (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476213#comment-13476213
 ] 

Omid Aladini commented on CASSANDRA-4446:
-

I also experience this every time I drain / restart (up to and including 1.1.6) 
and get this message in the log:

{quote}
2012-10-12_15:50:36.92191  INFO 15:50:36,921 Log replay complete, N replayed 
mutations   
{quote}

with N being non-zero. I wonder if this is a cause of double-counts for Counter 
mutations.

 nodetool drain sometimes doesn't mark commitlog fully flushed
 -

 Key: CASSANDRA-4446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4446
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.10
 Environment: ubuntu 10.04 64bit
 Linux HOSTNAME 2.6.32-345-ec2 #48-Ubuntu SMP Wed May 2 19:29:55 UTC 2012 
 x86_64 GNU/Linux
 sun JVM
 cassandra 1.0.10 installed from apache deb
Reporter: Robert Coli
 Attachments: 
 cassandra.1.0.10.replaying.log.after.exception.during.drain.txt


 I recently wiped a customer's QA cluster. I drained each node and verified 
 that they were drained. When I restarted the nodes, I saw the commitlog 
 replay create a memtable and then flush it. I have attached a sanitized log 
 snippet from a representative node at the time. 
 It appears to show the following :
 1) Drain begins
 2) Drain triggers flush
 3) Flush triggers compaction
 4) StorageService logs DRAINED message
 5) compaction thread excepts
 6) on restart, same CF creates a memtable
 7) and then flushes it [1]
 The columnfamily involved in the replay in 7) is the CF for which the 
 compaction thread excepted in 5). This seems to suggest a timing issue 
 whereby the exception in 5) prevents the flush in 3) from marking all the 
 segments flushed, causing them to replay after restart.
 In case it might be relevant, I did an online change of compaction strategy 
 from Leveled to SizeTiered during the uptime period preceding this drain.
 [1] Isn't commitlog replay not supposed to automatically trigger a flush in 
 modern cassandra?



[jira] [Comment Edited] (CASSANDRA-4734) Move CQL3 consistency to protocol

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476177#comment-13476177
 ] 

Jonathan Ellis edited comment on CASSANDRA-4734 at 10/15/12 3:50 PM:
-

The most convincing argument for 4448 was the problems that CQL-level CL posed 
to PreparedStatements.  That's not an issue anymore.

Fundamentally I don't think I buy that per-CF is the right way to think about 
CL.  I can see an application-level default, that gets modified based on the 
operation ("I want to make sure my user always gets read-my-writes behavior on 
this page") that doesn't necessarily correspond to a clean breakdown by CF.

I really think the right thing to do here is leave it out until we have a 
better understanding of the use cases involved, because the risk of getting 
stuck with a misfeature is real.

  was (Author: jbellis):
The most convincing argument for 4448 was the problems that CQL-level CL 
posed to PreparedStatements.  That's not an issue anymore.

Fundamentally I don't think I buy that per-CF is the right way to think about 
CL.  I can see an application-level default, that gets modified based on the 
operation (I want to make sure my user always gets read-my-writes behavior on 
this page) that isn't necessarily correlated with CL.

I really think the right thing to do here is leave it out until we have a 
better understanding of the use cases involved, because the risk of getting 
stuck with a misfeature is real.
  
 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448.txt, 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement an automatic retry policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.



[jira] [Updated] (CASSANDRA-4734) Move CQL3 consistency to protocol

2012-10-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4734:


Attachment: 0003-Thrift-generated-file-diffs-3.txt
0002-Remove-remains-of-4448-3.txt
0001-Move-consistency-level-to-the-protocol-level-3.txt

bq. I really think the right thing to do here is leave it out until we have a 
better understanding of the use cases involved

Ok, I buy that. Attaching v3 with 4448 removed.

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt





[jira] [Commented] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476228#comment-13476228
 ] 

Aleksey Yeschenko commented on CASSANDRA-4674:
--

Though COPY TO and COPY FROM still work in CQL3 as long as you don't provide 
any optional parameters to them.




Git Push Summary

2012-10-15 Thread slebresne
Updated Tags:  refs/tags/cassandra-1.1.6 [created] 94dc0169d


Git Push Summary

2012-10-15 Thread slebresne
Updated Tags:  refs/tags/1.1.6-tentative [deleted] a0900f3d3


[jira] [Commented] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Marco Matarazzo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476239#comment-13476239
 ] 

Marco Matarazzo commented on CASSANDRA-4674:


I realized I hijacked another ticket thinking that mine was a well-known 
problem, when it's obvious it's not. I'm very sorry about that.

I have some specific column families on which COPY FROM does not work from 
cqlsh -3, and works perfectly with cqlsh -2. 

If it's ok with you, I'll hunt down the conditions that make it not work and, 
as soon as I have them pinned down, I will open a new bug.

Again, I'm sorry about that.





[jira] [Commented] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476243#comment-13476243
 ] 

Aleksey Yeschenko commented on CASSANDRA-4674:
--

Well, you've accidentally uncovered a COPY TO/COPY FROM bug, so there is 
nothing to apologize about. Thank you (:
Turns out COPY TO/COPY FROM has had broken completion for a really long time, 
which in combination with a recent commit 
(2f979ed60fc4f9dab2db7ce9921ff2953acd714c for CASSANDRA-4488) broke COPY 
TO/COPY FROM with non-default parameters.




svn commit: r1398381 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2012-10-15 Thread slebresne
Author: slebresne
Date: Mon Oct 15 16:45:53 2012
New Revision: 1398381

URL: http://svn.apache.org/viewvc?rev=1398381view=rev
Log:
Update website for 1.1.6 release

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1398381&r1=1398380&r2=1398381&view=diff
==============================================================================
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Oct 15 16:45:53 2012
@@ -49,8 +49,8 @@
   Cassandra releases include the core server, the <a href="http://wiki.apache.org/cassandra/NodeTool">nodetool</a> administration command-line interface, and a development shell (<a href="http://cassandra.apache.org/doc/cql/CQL.html"><tt>cqlsh</tt></a> and the old <tt>cassandra-cli</tt>).
 
   <p>
-  The latest stable release of Apache Cassandra is 1.1.5
-  (released on 2012-09-10).  <i>If you're just
+  The latest stable release of Apache Cassandra is 1.1.6
+  (released on 2012-10-15).  <i>If you're just
   starting out, download this one.</i>
   </p>
 
@@ -59,13 +59,13 @@
   <ul>
 <li>
 <a class="filename" 
-   href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.1.5/apache-cassandra-1.1.5-bin.tar.gz"
+   href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.1.6/apache-cassandra-1.1.6-bin.tar.gz"
    onclick="javascript: pageTracker._trackPageview('/clicks/binary_download');">
-  apache-cassandra-1.1.5-bin.tar.gz
+  apache-cassandra-1.1.6-bin.tar.gz
 </a>
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-bin.tar.gz.asc">PGP</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-bin.tar.gz.md5">MD5</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-bin.tar.gz.sha1">SHA1</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-bin.tar.gz.asc">PGP</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-bin.tar.gz.md5">MD5</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-bin.tar.gz.sha1">SHA1</a>]
 </li>
 <li>
 <a href="http://wiki.apache.org/cassandra/DebianPackaging">Debian installation instructions</a>
@@ -169,13 +169,13 @@
   <ul>
 <li>
 <a class="filename" 
-   href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.1.5/apache-cassandra-1.1.5-src.tar.gz"
+   href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.1.6/apache-cassandra-1.1.6-src.tar.gz"
    onclick="javascript: pageTracker._trackPageview('/clicks/source_download');">
-  apache-cassandra-1.1.5-src.tar.gz
+  apache-cassandra-1.1.6-src.tar.gz
 </a>
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-src.tar.gz.asc">PGP</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-src.tar.gz.md5">MD5</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.1.5/apache-cassandra-1.1.5-src.tar.gz.sha1">SHA1</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-src.tar.gz.asc">PGP</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-src.tar.gz.md5">MD5</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.1.6/apache-cassandra-1.1.6-src.tar.gz.sha1">SHA1</a>]
 </li>
  
 <li>

Modified: cassandra/site/publish/index.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1398381&r1=1398380&r2=1398381&view=diff
==============================================================================
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Mon Oct 15 16:45:53 2012
@@ -75,8 +75,8 @@
   <h2>Download</h2>
   <div class="inner rc">
 <p>
-The latest release is <b>1.1.5</b>
-<span class="relnotes">(<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-1.1.5">Changes</a>)</span>
+The latest release is <b>1.1.6</b>
+<span class="relnotes">(<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-1.1.6">Changes</a>)</span>
 </p>
 
 <p><a class="filename" href="/download/">Download options</a></p>

Modified: cassandra/site/src/settings.py
URL: http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1398381&r1=1398380&r2=1398381&view=diff
==============================================================================
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Mon Oct 15 16:45:53 2012
@@ -98,8 +98,8 @@ class CassandraDef(object):
 veryoldstable_version = '0.8.10'
 veryoldstable_release_date = '2012-02-13'
 veryoldstable_exists = True
-stable_version = '1.1.5'
-stable_release_date = '2012-09-10'
+stable_version = '1.1.6'
+stable_release_date = 

[jira] [Updated] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4674:
-

Attachment: CASSANDRA-4674.txt

 cqlsh COPY TO and COPY FROM don't work with cql3
 

 Key: CASSANDRA-4674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: cqlsh
 Attachments: CASSANDRA-4674.txt


 cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.



[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-15 Thread debadatta das (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476261#comment-13476261
 ] 

debadatta das commented on CASSANDRA-4794:
--

Hi,
If someone can study the attached sample program in CPP and point out what is 
wrong in our use of atomic_batch_mutate, it would be very helpful. We are 
trying to test this API in our lab to measure its efficiency and submit test 
results to DataStax, so a quick resolution would be much appreciated.

Regards,
Debadatta
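For reference, the argument passed to both batch_mutate and atomic_batch_mutate is a nested map keyed by row key, then column family, holding a list of Thrift Mutation objects. A Python sketch of that shape, where a plain dict stands in for the real Thrift Mutation type:

```python
def make_batch(rows):
    """Build the batch_mutate/atomic_batch_mutate argument shape:
    row key -> column family -> list of mutations.

    Each (row_key, cf, name, value) tuple becomes one insertion; the
    inner dict is an illustrative stand-in for a Thrift Mutation.
    """
    batch = {}
    for row_key, cf, name, value in rows:
        batch.setdefault(row_key, {}).setdefault(cf, []).append(
            {"column": {"name": name, "value": value}}  # stand-in Mutation
        )
    return batch
```

With atomicity, all mutations in this structure either apply together or not at all, which is why the call goes through the batchlog and can time out where plain batch_mutate does not.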


[jira] [Commented] (CASSANDRA-4782) Commitlog not replayed after restart

2012-10-15 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476266#comment-13476266
 ] 

Robert Coli commented on CASSANDRA-4782:


If drain is required between versions to avoid this issue then CASSANDRA-4446, 
where drain sometimes doesn't actually drain, seems to have become more 
significant.

 Commitlog not replayed after restart
 

 Key: CASSANDRA-4782
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4782
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Fabien Rousseau
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.1.6

 Attachments: 4782.txt


 It seems that there are two corner cases where the commitlog is not replayed 
 after a restart:
  - After a reboot of a server + restart of Cassandra (1.1.0 to 1.1.4)
  - After doing an upgrade from Cassandra 1.1.x to Cassandra 1.1.5
 This is due to the fact that the commitlog segment id should always be an 
 increasing number (see this condition: 
 https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L247
  )
 But this assumption can be broken:
 In the first case, the id is generated by System.nanoTime(), but 
 System.nanoTime() seems to use the boot time as its base/reference (at least 
 on Java 6 on Linux); thus after a reboot, System.nanoTime() can return a 
 lower number than before the reboot (and the javadoc says the reference is a 
 relative point in time...)
 In the second case, this was introduced by #4601 (which replaced 
 System.nanoTime() with System.currentTimeMillis(); thus people starting with 
 1.1.5 are safe)
 This could explain the following tickets: #4741 and #4481
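 The failure mode can be sketched as follows. This is a minimal illustration 
 with hypothetical class and method names, not the actual Cassandra code: 
 nanoTime()'s origin is an arbitrary point, often the boot time, so a value 
 taken before a reboot can be larger than one taken after it, which breaks any 
 scheme that needs ids to keep increasing across restarts.

```java
public class SegmentIdSketch {
    private static long lastId = 0;

    // Unsafe as a persistent id: nanoTime() values are only
    // comparable within a single JVM run.
    static long nanoBasedId() {
        return System.nanoTime();
    }

    // Epoch-based, so it keeps increasing across reboots (assuming
    // the wall clock is not set backwards); the guard also makes ids
    // strictly increasing for calls within the same millisecond.
    static synchronized long millisBasedId() {
        long id = System.currentTimeMillis();
        if (id <= lastId)
            id = lastId + 1;
        lastId = id;
        return id;
    }

    public static void main(String[] args) {
        long a = millisBasedId();
        long b = millisBasedId();
        System.out.println(b > a); // prints true: ids strictly increase
    }
}
```

 The fix shipped in 1.1.5 follows the second pattern.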

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4808) nodetool doesnt work well with -ve tokens

2012-10-15 Thread Vijay (JIRA)
Vijay created CASSANDRA-4808:


 Summary: nodetool doesnt work well with -ve tokens
 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1


./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move \-2253536297082652573
Unrecognized option: -2253536297082652573
usage: java org.apache.cassandra.tools.NodeCmd --host <arg> <command>

 -cf,--column-family <arg>   only take a snapshot of the specified column
 family




[jira] [Commented] (CASSANDRA-4571) Strange permanent socket descriptors increasing leads to Too many open files

2012-10-15 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476278#comment-13476278
 ] 

Joaquin Casares commented on CASSANDRA-4571:


This can still be seen in 1.1.5 if the user is running Java 1.6.0_29. The 
current solution is to upgrade to 1.6.0_35.

 Strange permanent socket descriptors increasing leads to Too many open files
 --

 Key: CASSANDRA-4571
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4571
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: CentOS 5.8 Linux 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 
 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux. 
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)
Reporter: Serg Shnerson
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.1.5

 Attachments: 4571.txt


 On the two-node cluster there was found strange socket descriptors 
 increasing. lsof -n | grep java shows many rows like
 java   8380 cassandra  113r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  114r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  115r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  116r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  117r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  118r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  119r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  120r unix 0x8101a374a080
 938348482 socket
  And the number of these rows is constantly increasing. After about 24 hours 
 this situation leads to the error.
 We use the PHPCassa client. Load is not so high (around ~50kb/s on write). 



[1/6] git commit: Merge branch 'cassandra-1.1' into trunk

2012-10-15 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.1 a0900f3d3 - 4d2e5e73b
  refs/heads/trunk fe5e4aef2 - 7e937b3d1


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e937b3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e937b3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e937b3d

Branch: refs/heads/trunk
Commit: 7e937b3d1308c0774e4b0366b6e66b14af1dd5f6
Parents: 2d83cfc 4d2e5e7
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Oct 15 12:32:11 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Oct 15 12:32:11 2012 -0500

--

--




[5/6] git commit: cqlsh: fix copy to/from for cql3 Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-4674

2012-10-15 Thread brandonwilliams
cqlsh: fix copy to/from for cql3
Patch by Aleksey Yeschenko, reviewed by brandonwilliams for
CASSANDRA-4674


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/596d54c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/596d54c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/596d54c2

Branch: refs/heads/trunk
Commit: 596d54c27eef4693915790320b8a5bdcf1662028
Parents: fe5e4ae
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Oct 15 12:26:20 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Oct 15 12:26:20 2012 -0500

--
 bin/cqlsh |6 +-
 1 files changed, 5 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/596d54c2/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index a848d05..0a0f14c 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -234,9 +234,13 @@ cqlsh_extra_syntax_rules = r'''
  ( "WITH" <copyOption> ( "AND" <copyOption> )* )?
 ;
 
-<copyOption> ::= [optnames]=<cfOptionName> "=" [optvals]=<cfOptionVal>
+<copyOption> ::= [optnames]=<identifier> "=" [optvals]=<copyOptionVal>
;
 
+<copyOptionVal> ::= <identifier>
+  | <stringLiteral>
+  ;
+
 # avoiding just "DEBUG" so that this rule doesn't get treated as a terminal
 <debugCommand> ::= "DEBUG" THINGS?
  ;



[2/6] git commit: cqlsh: use libedit when readline is not available, if possible Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-3597

2012-10-15 Thread brandonwilliams
cqlsh: use libedit when readline is not available, if possible
Patch by Aleksey Yeschenko, reviewed by brandonwilliams for
CASSANDRA-3597


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d83cfc2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d83cfc2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d83cfc2

Branch: refs/heads/trunk
Commit: 2d83cfc2bb51660d484ca02cb4343d0a3e8f2daa
Parents: 596d54c
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Oct 15 12:31:51 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Oct 15 12:31:51 2012 -0500

--
 bin/cqlsh |   18 ++
 1 files changed, 14 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d83cfc2/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 0a0f14c..bb440e0 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -121,6 +121,11 @@ DEFAULT_TRANSPORT_FACTORY = 
'cqlshlib.tfactory.regular_transport_factory'
 DEFAULT_TIME_FORMAT = '%Y-%m-%d %H:%M:%S%z'
 DEFAULT_FLOAT_PRECISION = 3
 
+if readline is not None and 'libedit' in readline.__doc__:
+DEFAULT_COMPLETEKEY = '\t'
+else:
+DEFAULT_COMPLETEKEY = 'tab'
+
 epilog = """Connects to %(DEFAULT_HOST)s:%(DEFAULT_PORT)d by default. These
 defaults can be changed by setting $CQLSH_HOST and/or $CQLSH_PORT. When a
 host (and optional port number) are given on the command line, they take
@@ -428,8 +433,8 @@ class Shell(cmd.Cmd):
 
 def __init__(self, hostname, port, transport_factory, color=False,
  username=None, password=None, encoding=None, stdin=None, 
tty=True,
- completekey='tab', use_conn=None, cqlver=None, keyspace=None,
- tracing_enabled=False,
+ completekey=DEFAULT_COMPLETEKEY, use_conn=None,
+ cqlver=None, keyspace=None, tracing_enabled=False,
  display_time_format=DEFAULT_TIME_FORMAT,
  display_float_precision=DEFAULT_FLOAT_PRECISION):
 cmd.Cmd.__init__(self, completekey=completekey)
@@ -780,7 +785,11 @@ class Shell(cmd.Cmd):
 else:
 old_completer = readline.get_completer()
 readline.set_completer(self.complete)
-readline.parse_and_bind(self.completekey+": complete")
+if 'libedit' in readline.__doc__:
+readline.parse_and_bind("bind -e")
+readline.parse_and_bind("bind '" + self.completekey + "' 
rl_complete")
+else:
+readline.parse_and_bind(self.completekey + ": complete")
 try:
 yield
 finally:
@@ -2692,7 +2701,8 @@ def read_options(cmdlineargs, environment):
 optvalues.keyspace = option_with_default(configs.get, 'authentication', 
'keyspace')
 optvalues.transport_factory = option_with_default(configs.get, 
'connection', 'factory',
   
DEFAULT_TRANSPORT_FACTORY)
-optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey', 'tab')
+optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey',
+DEFAULT_COMPLETEKEY)
 optvalues.color = option_with_default(configs.getboolean, 'ui', 'color')
 optvalues.time_format = raw_option_with_default(configs, 'ui', 
'time_format',
 DEFAULT_TIME_FORMAT)



[6/6] git commit: Pig: fix widerow/secondary env toggles Patch by brandonwilliams reviewed by Jeremy Hanna for CASSANDRA-4749

2012-10-15 Thread brandonwilliams
Pig: fix widerow/secondary env toggles
Patch by brandonwilliams reviewed by Jeremy Hanna for CASSANDRA-4749


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0900f3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0900f3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0900f3d

Branch: refs/heads/trunk
Commit: a0900f3d3b9fadc3608bc6c4960ed1858d581e13
Parents: 936302c
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Oct 11 22:00:08 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Oct 11 22:03:27 2012 -0500

--
 .../cassandra/hadoop/pig/CassandraStorage.java |   15 +--
 1 files changed, 5 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0900f3d/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
index 49d8eac..8f539a9 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CassandraStorage.java
@@ -83,8 +83,6 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 
 private final static String DEFAULT_INPUT_FORMAT = 
"org.apache.cassandra.hadoop.ColumnFamilyInputFormat";
 private final static String DEFAULT_OUTPUT_FORMAT = 
"org.apache.cassandra.hadoop.ColumnFamilyOutputFormat";
-private final static boolean DEFAULT_WIDEROW_INPUT = false;
-private final static boolean DEFAULT_USE_SECONDARY = false;
 
 private final static String PARTITION_FILTER_SIGNATURE = 
"cassandra.partition.filter";
 
@@ -106,8 +104,8 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 private String inputFormatClass;
 private String outputFormatClass;
 private int limit;
-private boolean widerows;
-private boolean usePartitionFilter;
+private boolean widerows = false;
+private boolean usePartitionFilter = false;
 // wide row hacks
 private ByteBuffer lastKey;
 private Map<ByteBuffer,IColumn> lastRow;
@@ -567,11 +565,9 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 SlicePredicate predicate = new 
SlicePredicate().setSlice_range(range);
 ConfigHelper.setInputSlicePredicate(conf, predicate);
 }
-widerows = DEFAULT_WIDEROW_INPUT;
 if (System.getenv(PIG_WIDEROW_INPUT) != null)
-widerows = Boolean.valueOf(System.getProperty(PIG_WIDEROW_INPUT));
-usePartitionFilter = DEFAULT_USE_SECONDARY;
-if (System.getenv() != null)
+widerows = Boolean.valueOf(System.getenv(PIG_WIDEROW_INPUT));
+if (System.getenv(PIG_USE_SECONDARY) != null)
 usePartitionFilter = 
Boolean.valueOf(System.getenv(PIG_USE_SECONDARY));
 
 if (usePartitionFilter && getIndexExpressions() != null)
@@ -815,8 +811,7 @@ public class CassandraStorage extends LoadFunc implements 
StoreFuncInterface, Lo
 throw new IOException("PIG_OUTPUT_PARTITIONER or PIG_PARTITIONER 
environment variable not set");
 
 // we have to do this again here for the check in writeColumnsFromTuple
-usePartitionFilter = DEFAULT_USE_SECONDARY;
-if (System.getenv() != null)
+if (System.getenv(PIG_USE_SECONDARY) != null)
 usePartitionFilter = 
Boolean.valueOf(System.getenv(PIG_USE_SECONDARY));
 
 initSchema(storeSignature);



[3/6] git commit: cqlsh: use libedit when readline is not available, if possible Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-3597

2012-10-15 Thread brandonwilliams
cqlsh: use libedit when readline is not available, if possible
Patch by Aleksey Yeschenko, reviewed by brandonwilliams for
CASSANDRA-3597


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d2e5e73
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d2e5e73
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d2e5e73

Branch: refs/heads/trunk
Commit: 4d2e5e73b127dc0b335176ddc1dec1f0244e7f6d
Parents: a0900f3
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Oct 15 12:29:40 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Oct 15 12:29:40 2012 -0500

--
 bin/cqlsh |   17 ++---
 1 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d2e5e73/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 02acd47..1b282bd 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -116,6 +116,11 @@ DEFAULT_CQLVER = '2'
 DEFAULT_TIME_FORMAT = '%Y-%m-%d %H:%M:%S%z'
 DEFAULT_FLOAT_PRECISION = 3
 
+if readline is not None and 'libedit' in readline.__doc__:
+DEFAULT_COMPLETEKEY = '\t'
+else:
+DEFAULT_COMPLETEKEY = 'tab'
+
 epilog = """Connects to %(DEFAULT_HOST)s:%(DEFAULT_PORT)d by default. These
 defaults can be changed by setting $CQLSH_HOST and/or $CQLSH_PORT. When a
 host (and optional port number) are given on the command line, they take
@@ -560,7 +565,8 @@ class Shell(cmd.Cmd):
 
 def __init__(self, hostname, port, color=False, username=None,
  password=None, encoding=None, stdin=None, tty=True,
- completekey='tab', use_conn=None, cqlver=None, keyspace=None,
+ completekey=DEFAULT_COMPLETEKEY, use_conn=None,
+ cqlver=None, keyspace=None,
  display_time_format=DEFAULT_TIME_FORMAT,
  display_float_precision=DEFAULT_FLOAT_PRECISION):
 cmd.Cmd.__init__(self, completekey=completekey)
@@ -851,7 +857,11 @@ class Shell(cmd.Cmd):
 else:
 old_completer = readline.get_completer()
 readline.set_completer(self.complete)
-readline.parse_and_bind(self.completekey+": complete")
+if 'libedit' in readline.__doc__:
+readline.parse_and_bind("bind -e")
+readline.parse_and_bind("bind '" + self.completekey + "' 
rl_complete")
+else:
+readline.parse_and_bind(self.completekey + ": complete")
 try:
 yield
 finally:
@@ -2652,7 +2662,8 @@ def read_options(cmdlineargs, environment):
 optvalues.username = option_with_default(configs.get, 'authentication', 
'username')
 optvalues.password = option_with_default(configs.get, 'authentication', 
'password')
 optvalues.keyspace = option_with_default(configs.get, 'authentication', 
'keyspace')
-optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey', 'tab')
+optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey',
+DEFAULT_COMPLETEKEY)
 optvalues.color = option_with_default(configs.getboolean, 'ui', 'color')
 optvalues.time_format = raw_option_with_default(configs, 'ui', 
'time_format',
 DEFAULT_TIME_FORMAT)



[4/6] git commit: cqlsh: use libedit when readline is not available, if possible Patch by Aleksey Yeschenko, reviewed by brandonwilliams for CASSANDRA-3597

2012-10-15 Thread brandonwilliams
cqlsh: use libedit when readline is not available, if possible
Patch by Aleksey Yeschenko, reviewed by brandonwilliams for
CASSANDRA-3597


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d2e5e73
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d2e5e73
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d2e5e73

Branch: refs/heads/cassandra-1.1
Commit: 4d2e5e73b127dc0b335176ddc1dec1f0244e7f6d
Parents: a0900f3
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Oct 15 12:29:40 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Oct 15 12:29:40 2012 -0500

--
 bin/cqlsh |   17 ++---
 1 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d2e5e73/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 02acd47..1b282bd 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -116,6 +116,11 @@ DEFAULT_CQLVER = '2'
 DEFAULT_TIME_FORMAT = '%Y-%m-%d %H:%M:%S%z'
 DEFAULT_FLOAT_PRECISION = 3
 
+if readline is not None and 'libedit' in readline.__doc__:
+DEFAULT_COMPLETEKEY = '\t'
+else:
+DEFAULT_COMPLETEKEY = 'tab'
+
 epilog = """Connects to %(DEFAULT_HOST)s:%(DEFAULT_PORT)d by default. These
 defaults can be changed by setting $CQLSH_HOST and/or $CQLSH_PORT. When a
 host (and optional port number) are given on the command line, they take
@@ -560,7 +565,8 @@ class Shell(cmd.Cmd):
 
 def __init__(self, hostname, port, color=False, username=None,
  password=None, encoding=None, stdin=None, tty=True,
- completekey='tab', use_conn=None, cqlver=None, keyspace=None,
+ completekey=DEFAULT_COMPLETEKEY, use_conn=None,
+ cqlver=None, keyspace=None,
  display_time_format=DEFAULT_TIME_FORMAT,
  display_float_precision=DEFAULT_FLOAT_PRECISION):
 cmd.Cmd.__init__(self, completekey=completekey)
@@ -851,7 +857,11 @@ class Shell(cmd.Cmd):
 else:
 old_completer = readline.get_completer()
 readline.set_completer(self.complete)
-readline.parse_and_bind(self.completekey+": complete")
+if 'libedit' in readline.__doc__:
+readline.parse_and_bind("bind -e")
+readline.parse_and_bind("bind '" + self.completekey + "' 
rl_complete")
+else:
+readline.parse_and_bind(self.completekey + ": complete")
 try:
 yield
 finally:
@@ -2652,7 +2662,8 @@ def read_options(cmdlineargs, environment):
 optvalues.username = option_with_default(configs.get, 'authentication', 
'username')
 optvalues.password = option_with_default(configs.get, 'authentication', 
'password')
 optvalues.keyspace = option_with_default(configs.get, 'authentication', 
'keyspace')
-optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey', 'tab')
+optvalues.completekey = option_with_default(configs.get, 'ui', 
'completekey',
+DEFAULT_COMPLETEKEY)
 optvalues.color = option_with_default(configs.getboolean, 'ui', 'color')
 optvalues.time_format = raw_option_with_default(configs, 'ui', 
'time_format',
 DEFAULT_TIME_FORMAT)



[jira] [Updated] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4674:


Affects Version/s: (was: 1.2.0)
   1.2.0 beta 1

 cqlsh COPY TO and COPY FROM don't work with cql3
 

 Key: CASSANDRA-4674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: cqlsh
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4674.txt


 cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.



[jira] [Comment Edited] (CASSANDRA-4674) cqlsh COPY TO and COPY FROM don't work with cql3

2012-10-15 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476243#comment-13476243
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-4674 at 10/15/12 5:35 PM:


Well, you've accidentally uncovered a COPY TO/COPY FROM bug, so there is 
nothing to apologize for. Thank you (:
Turns out COPY TO/COPY FROM has had broken completion for a really long time, 
which in combination with a recent commit 
(2f979ed60fc4f9dab2db7ce9921ff2953acd714c for CASSANDRA-4488) broke COPY 
TO/COPY FROM with non-default parameters.

  was (Author: iamaleksey):
Well, you've accidentally uncovered a COPY TO/COPY FROM bug, so there is 
nothing to apologize about. Thank you (:
Turns out COPY TO/COPY FROM has had broken completion for a really long time, 
which in combination with a recent commit 
(2f979ed60fc4f9dab2db7ce9921ff2953acd714c for CASSANDRA-4488) broke COPY 
TO/COPY FROM with non-default parameters.
  
 cqlsh COPY TO and COPY FROM don't work with cql3
 

 Key: CASSANDRA-4674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4674
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
  Labels: cqlsh
 Fix For: 1.2.0 beta 2

 Attachments: CASSANDRA-4674.txt


 cqlsh COPY TO and COPY FROM don't work with cql3 due to previous cql3 changes.



buildbot failure in ASF Buildbot on cassandra-1.1

2012-10-15 Thread buildbot
The Buildbot has detected a new failure on builder cassandra-1.1 while building 
cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-1.1/builds/24

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch cassandra-1.1] 
4d2e5e73b127dc0b335176ddc1dec1f0244e7f6d
Blamelist: Brandon Williams brandonwilli...@apache.org

BUILD FAILED: failed shell

sincerely,
 -The Buildbot





buildbot success in ASF Buildbot on cassandra-trunk

2012-10-15 Thread buildbot
The Buildbot has detected a restored build on builder cassandra-trunk while 
building cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-trunk/builds/1946

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch trunk] 7e937b3d1308c0774e4b0366b6e66b14af1dd5f6
Blamelist: Brandon Williams brandonwilli...@apache.org

Build succeeded!

sincerely,
 -The Buildbot





buildbot success in ASF Buildbot on cassandra-1.1

2012-10-15 Thread buildbot
The Buildbot has detected a restored build on builder cassandra-1.1 while 
building ASF Buildbot.
Full details are available at:
 http://ci.apache.org/builders/cassandra-1.1/builds/25

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: forced: by IRC user driftx on channel #cassandra-dev: too
Build Source Stamp: HEAD
Blamelist: 

Build succeeded!

sincerely,
 -The Buildbot





[jira] [Updated] (CASSANDRA-4239) Support Thrift SSL socket

2012-10-15 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-4239:
---

Attachment: 0001-CASSANDRA-4239-Support-Thrift-SSL-socket-both-to-the.patch

 Support Thrift SSL socket
 -

 Key: CASSANDRA-4239
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4239
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.1

 Attachments: 
 0001-CASSANDRA-4239-Support-Thrift-SSL-socket-both-to-the.patch


 Thrift has supported SSL encryption for a while now (THRIFT-106); we should 
 allow configuring that in cassandra.yaml



[jira] [Updated] (CASSANDRA-4239) Support Thrift SSL socket

2012-10-15 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-4239:
---

Reviewer: vijay2...@yahoo.com  (was: brandon.williams)

 Support Thrift SSL socket
 -

 Key: CASSANDRA-4239
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4239
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.1

 Attachments: 
 0001-CASSANDRA-4239-Support-Thrift-SSL-socket-both-to-the.patch


 Thrift has supported SSL encryption for a while now (THRIFT-106); we should 
 allow configuring that in cassandra.yaml



[jira] [Updated] (CASSANDRA-4808) nodetool doesnt work well with -ve tokens

2012-10-15 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-4808:
-

Attachment: 0001-CASSANDRA-4808.patch

There are 2 options:

One is to make nodetool accept '-' by making Options ignore any '-' values 
(don't verify whether they are valid options); this can cause confusion with 
other commands.

The other option (attached patch) is to support an escape character for '-'. 
Example: ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
\\-2253536297082652571
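The failure happens because the CLI option parser treats any token beginning 
with '-' as an option name. A minimal sketch of that behavior and of the 
escape convention, with a hypothetical parseToken helper rather than the real 
NodeCmd/commons-cli code:

```java
public class TokenArgSketch {
    // A parser that treats every "-"-prefixed token as an option
    // rejects negative tokens; an escape prefix ("\-token") lets
    // them through.
    static String parseToken(String arg) {
        if (arg.startsWith("\\-"))
            return arg.substring(1);  // drop the escape, keep the minus
        if (arg.startsWith("-"))
            throw new IllegalArgumentException("Unrecognized option: " + arg);
        return arg;
    }

    public static void main(String[] args) {
        // The shell turns \\- into \-, which the parser unescapes.
        System.out.println(parseToken("\\-2253536297082652573"));
        // prints -2253536297082652573
    }
}
```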

 nodetool doesnt work well with -ve tokens
 -

 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1

 Attachments: 0001-CASSANDRA-4808.patch


 ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
 \-2253536297082652573
 Unrecognized option: -2253536297082652573
 usage: java org.apache.cassandra.tools.NodeCmd --host <arg> <command>
 
  -cf,--column-family <arg>   only take a snapshot of the specified column
  family



[jira] [Commented] (CASSANDRA-4239) Support Thrift SSL socket

2012-10-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476324#comment-13476324
 ] 

Jason Brown commented on CASSANDRA-4239:


Followed the advice from the comments of both this ticket and CASSANDRA-4662. 
Added SSL thrift support via ThriftSSLFactory for both client and server 
sockets. As the thrift library (0.7.0) only supports SSL on blocking sockets, 
did not modify the HSHA TServer implementation. Added TTransportFactory 
implementations for both cli and stress.

 Support Thrift SSL socket
 -

 Key: CASSANDRA-4239
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4239
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.1

 Attachments: 
 0001-CASSANDRA-4239-Support-Thrift-SSL-socket-both-to-the.patch


 Thrift has supported SSL encryption for a while now (THRIFT-106); we should 
 allow configuring that in cassandra.yaml



[jira] [Commented] (CASSANDRA-4799) assertion failure in leveled compaction test

2012-10-15 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476329#comment-13476329
 ] 

Yuki Morishita commented on CASSANDRA-4799:
---

I found two problems here.

1) The test does not completely set up the necessary SSTables, causing the 
above AssertionError.

The LCS test uses CFS#forceFlush to generate enough SSTables at the beginning 
of the test, but since that method is an asynchronous call, there is a chance 
that the test proceeds without sufficient SSTables to fill up L2, causing the 
AE at strat.getLevelSize(2) > 0.

Fix for this is to change forceFlush to forceBlockingFlush.
(diff: 
https://github.com/yukim/cassandra/commit/0d16efc6d7592e61f15598d5a4e3cc81d2760007)
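The race can be sketched as follows, with hypothetical simplified types; the 
point is that the real CFS#forceFlush only submits the flush and returns a 
future, so a test that asserts on SSTables immediately afterwards may run 
before the flush completes, while forceBlockingFlush waits on that future:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

public class FlushSketch {
    static final ExecutorService flusher = Executors.newSingleThreadExecutor();
    static final AtomicBoolean flushed = new AtomicBoolean(false);

    // Submits the flush and returns immediately, like CFS#forceFlush.
    static Future<?> forceFlush() {
        return flusher.submit(() -> flushed.set(true));
    }

    // Waits for the flush future, like CFS#forceBlockingFlush.
    static void forceBlockingFlush() throws Exception {
        forceFlush().get();
    }

    public static void main(String[] args) throws Exception {
        forceBlockingFlush();
        System.out.println(flushed.get()); // prints true: flush has completed
        flusher.shutdown();
    }
}
```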

2) Repair sometimes causes AssertionError with LCS

During the test, I encountered the below error several times.

{code}
ERROR [ValidationExecutor:1] 2012-10-12 14:39:18,660 SchemaLoader.java (line 
73) Fatal exception in thread Thread[ValidationExecutor:1,1,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:183)
at 
org.apache.cassandra.db.compaction.CompactionManager$ValidationCompactionIterable.<init>(CompactionManager.java:879)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:743)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:71)
at 
org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:481)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

The AssertionError comes from the following assert in LCS#getScanners:

{code}
for (SSTableReader sstable : sstables)
{
int level = manifest.levelOf(sstable);
assert level >= 0;
byLevel.get(level).add(sstable);
}
{code}

The LeveledManifest#levelOf method returns the level of an SSTable, or -1 if the 
SSTable has not yet been added to the LeveledManifest. Here, each SSTable comes 
from the CF's DataTracker. Every time an SSTable is written by flush/compaction, 
it is added to the DataTracker first and to the LeveledManifest (when using LCS) 
only after that. If the repair code above executes between those two steps, you 
get the AssertionError.

My proposed fix is to remove the assertion and treat SSTables that do not yet 
belong to the LeveledManifest as L0 SSTables.
(diff: 
https://github.com/yukim/cassandra/commit/c8a0fb9a9128e47ec3d07926eb26f6fd93664f52)
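A standalone sketch of the proposed fallback (the Manifest here is a hypothetical 
stand-in for LeveledManifest; only the -1-means-unregistered contract is taken 
from the source):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LevelFallback {
    // Hypothetical stand-in for LeveledManifest: sstable name -> level.
    static final Map<String, Integer> manifest = new HashMap<>();

    static int levelOf(String sstable) {
        // -1 means "not added to the manifest yet", like LeveledManifest#levelOf.
        return manifest.getOrDefault(sstable, -1);
    }

    static Map<Integer, List<String>> byLevel(List<String> sstables) {
        Map<Integer, List<String>> grouped = new HashMap<>();
        for (String sstable : sstables) {
            int level = levelOf(sstable);
            // Proposed fix: instead of "assert level >= 0", treat
            // not-yet-registered sstables as L0.
            if (level < 0)
                level = 0;
            grouped.computeIfAbsent(level, k -> new ArrayList<>()).add(sstable);
        }
        return grouped;
    }

    public static void main(String[] args) {
        manifest.put("a-Data.db", 2);
        // "b-Data.db" is already in the DataTracker but not yet in the manifest.
        Map<Integer, List<String>> g = byLevel(List.of("a-Data.db", "b-Data.db"));
        System.out.println(g.get(0)); // [b-Data.db]
        System.out.println(g.get(2)); // [a-Data.db]
    }
}
```

Treating the unregistered sstable as L0 is safe for validation because L0 
sstables may overlap arbitrarily anyway.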

Problem (2) also exists in the 1.1 branch, but without the assertion. We may want 
to fix it in 1.1 too.

 assertion failure in leveled compaction test
 

 Key: CASSANDRA-4799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4799
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0


 It's somewhat rare, but I'm regularly seeing this failure on trunk:
 {noformat}
 [junit] Testcase: 
 testValidationMultipleSSTablePerLevel(org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest):
 FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError
 [junit]   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest.testValidationMultipleSSTablePerLevel(LeveledCompactionStrategyTest.java:78)
 [junit] 
 [junit] 
 [junit] Test 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest FAILED
 {noformat}
 I suspect there's a deeper problem, since this is a pretty fundamental 
 assertion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4571) Strange permament socket descriptors increasing leads to Too many open files

2012-10-15 Thread Chris Herron (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476344#comment-13476344
 ] 

Chris Herron commented on CASSANDRA-4571:
-

For anybody else encountering this unbounded socket growth problem on 1.1.5: 
note that while upgrading to 1.6.0_35 seemed to help, a longer load test still 
reproduced the symptom. FWIW, upgradesstables ran for a period during this 
particular test; it is unclear whether the increased compaction activity contributed.

 Strange permament socket descriptors increasing leads to Too many open files
 --

 Key: CASSANDRA-4571
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4571
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: CentOS 5.8 Linux 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 
 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux. 
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)
Reporter: Serg Shnerson
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.1.5

 Attachments: 4571.txt


 On the two-node cluster we found a strange increase in socket descriptors. 
 lsof -n | grep java shows many rows like
 java   8380 cassandra  113r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  114r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  115r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  116r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  117r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  118r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  119r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  120r unix 0x8101a374a080
 938348482 socket
  And the number of these rows is constantly increasing. After about 24 hours 
 this situation leads to the error.
 We use the PHPCassa client. Load is not so high (around ~50 kB/s on write). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476366#comment-13476366
 ] 

Vijay commented on CASSANDRA-4802:
--

How about just saying:
Bootstrap completed! Now serving reads.

? Do we need any additional information?

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial

 A regular startup completes successfully, but it has a confusing message at 
 the end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like Startup completed! 
 Now serving reads unless it actually has done one of the actions in the 
 error message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476367#comment-13476367
 ] 

Brandon Williams commented on CASSANDRA-4802:
-

I think the point is that we should not print it if we didn't actually 
bootstrap, and we should be able to distinguish between bootstrap/replace/move.

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial

 A regular startup completes successfully, but it has a confusing message at 
 the end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like Startup completed! 
 Now serving reads unless it actually has done one of the actions in the 
 error message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4808) nodetool doesnt work well with negative tokens

2012-10-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4808:


Summary: nodetool doesnt work well with negative tokens  (was: nodetool 
doesnt work well with -ve tokens)

 nodetool doesnt work well with negative tokens
 --

 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1

 Attachments: 0001-CASSANDRA-4808.patch


 ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
 \-2253536297082652573
 Unrecognized option: -2253536297082652573
 usage: java org.apache.cassandra.tools.NodeCmd --host &lt;arg&gt; &lt;command&gt;
 
  -cf,--column-family &lt;arg&gt;   only take a snapshot of the specified column
  family

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Nikolay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay updated CASSANDRA-4804:
---

Attachment: cassa.1.2.x.diff.txt
cassa.1.1.6.diff.txt

 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.1.6, 1.2.0 beta 1
Reporter: Nikolay
Priority: Minor
 Fix For: 1.1.6, 1.2.0 beta 1

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 In get_range_slices() there is a KeyRange parameter, range.
 There you can pass start_key - end_key, start_token - end_token, or start_key 
 - end_token.
 This is described in the documentation.
 In thrift/ThriftValidation.java there is a validation function, 
 validateKeyRange() (line 489), that correctly validates the KeyRange, 
 including the start_key - end_token case.
 However, in thrift/CassandraServer.java, in get_range_slices() on 
 line 686, a wrong assumption is made:
if (range.start_key == null)
{
   ... // populate tokens
}
else
{
   bounds = new Bounds&lt;RowPosition&gt;(RowPosition.forKey(range.start_key, 
 p), RowPosition.forKey(range.end_key, p));
}
 This means that if there is a start key, the end token is never checked; 
 instead, null is inserted as the end_key.
 Solution:
 in the same file, thrift/CassandraServer.java, the next function, 
 get_paged_slice(), on line 741 has the same code written correctly:
if (range.start_key == null)
{
   ... // populate tokens
}
else
{
   RowPosition end = range.end_key == null ? 
 p.getTokenFactory().fromString(range.end_token).maxKeyBound(p)
: RowPosition.forKey(range.end_key, p);
   bounds = new Bounds&lt;RowPosition&gt;(RowPosition.forKey(range.start_key, 
 p), end);
}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4808) nodetool doesnt work well with negative tokens

2012-10-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476369#comment-13476369
 ] 

Brandon Williams commented on CASSANDRA-4808:
-

I'm confused as to why we need negative tokens.

 nodetool doesnt work well with negative tokens
 --

 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1

 Attachments: 0001-CASSANDRA-4808.patch


 ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
 \-2253536297082652573
 Unrecognized option: -2253536297082652573
 usage: java org.apache.cassandra.tools.NodeCmd --host &lt;arg&gt; &lt;command&gt;
 
  -cf,--column-family &lt;arg&gt;   only take a snapshot of the specified column
  family

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476373#comment-13476373
 ] 

Vijay commented on CASSANDRA-4802:
--

Move doesn't use the same code anymore; replace does use this, but there are 
other log messages explaining that.

If Bootstrap is the wrong word, then how about: Startup completed?
(I am still looking for an abstract word :))

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial

 A regular startup completes successfully, but it has a confusing message at 
 the end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like Startup completed! 
 Now serving reads unless it actually has done one of the actions in the 
 error message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4808) nodetool doesnt work well with negative tokens

2012-10-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476414#comment-13476414
 ] 

Vijay commented on CASSANDRA-4808:
--

M3P supports negative tokens; the range is from Long.MIN_VALUE to Long.MAX_VALUE.

The explanation is in 
https://issues.apache.org/jira/browse/CASSANDRA-4621?focusedCommentId=13452829page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452829


 nodetool doesnt work well with negative tokens
 --

 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1

 Attachments: 0001-CASSANDRA-4808.patch


 ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
 \-2253536297082652573
 Unrecognized option: -2253536297082652573
 usage: java org.apache.cassandra.tools.NodeCmd --host &lt;arg&gt; &lt;command&gt;
 
  -cf,--column-family &lt;arg&gt;   only take a snapshot of the specified column
  family

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4808) nodetool doesnt work well with negative tokens

2012-10-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476414#comment-13476414
 ] 

Vijay edited comment on CASSANDRA-4808 at 10/15/12 8:33 PM:


M3P supports negative tokens; the range is from Long.MIN_VALUE to Long.MAX_VALUE.

The explanation is in 
https://issues.apache.org/jira/browse/CASSANDRA-4621?focusedCommentId=13452829page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452829

Example for generating tokens:

For RandomPartitioner

Node 0  : 0
Node 1  : 56713727820156410577229101238628035242
Node 2  : 113427455640312821154458202477256070484

For Murmur3Partitioner

Node 0  : 0
Node 1  : 6148914691236517204
Node 2  : -6148914691236517208
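For illustration, evenly spaced tokens can be computed exactly with BigInteger. 
A sketch (rounding conventions differ between token generators, so the Murmur3 
values may be off by a few units from the ones listed above):

```java
import java.math.BigInteger;

public class TokenGen {
    // Evenly spaced initial tokens for Murmur3Partitioner: the ring is the
    // full signed 64-bit range, so token i is i * 2^64 / nodes wrapped
    // into a signed long.
    static long[] murmur3Tokens(int nodes) {
        BigInteger ring = BigInteger.ONE.shiftLeft(64);
        long[] tokens = new long[nodes];
        for (int i = 0; i < nodes; i++)
            tokens[i] = ring.multiply(BigInteger.valueOf(i))
                            .divide(BigInteger.valueOf(nodes))
                            .longValue(); // low 64 bits, wraps into signed range
        return tokens;
    }

    public static void main(String[] args) {
        for (long t : murmur3Tokens(3))
            System.out.println(t);
        // prints 0, 6148914691236517205, -6148914691236517206
    }
}
```

The wraparound into negative longs for i > nodes/2 is exactly why nodetool has 
to accept negative token arguments.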

  was (Author: vijay2...@yahoo.com):
M3P supports -ve tokens the range is from Long.MIN_VALUE to Long.MAX_VALUE, 

explanation is in 
https://issues.apache.org/jira/browse/CASSANDRA-4621?focusedCommentId=13452829page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13452829

  
 nodetool doesnt work well with negative tokens
 --

 Key: CASSANDRA-4808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4808
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Vijay
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 1

 Attachments: 0001-CASSANDRA-4808.patch


 ./apache-cassandra-1.2.0-beta1-SNAPSHOT/bin/nodetool move 
 \-2253536297082652573
 Unrecognized option: -2253536297082652573
 usage: java org.apache.cassandra.tools.NodeCmd --host &lt;arg&gt; &lt;command&gt;
 
  -cf,--column-family &lt;arg&gt;   only take a snapshot of the specified column
  family

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4809) Allow restoring specific column families from archived commitlog

2012-10-15 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-4809:
--

 Summary: Allow restoring specific column families from archived 
commitlog
 Key: CASSANDRA-4809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4809
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Nick Bailey
 Fix For: 1.3


Currently you can only restore the entire contents of a commit log archive. It 
would be useful to specify the keyspaces/column families you want to restore 
from an archived commitlog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4596) thrift call to get_paged_slice() hangs if end token is 0

2012-10-15 Thread Nikolay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476439#comment-13476439
 ] 

Nikolay commented on CASSANDRA-4596:


I have a similar issue with my patch for 
https://issues.apache.org/jira/browse/CASSANDRA-4804

If you set 0, it times out. However, 2^127 - 1 is equal to 
170141183460469231731687303715884105727

Setting it to a bigger number, for example
200
works correctly too, even though it is outside the ring.

 thrift call to get_paged_slice() hangs if end token is 0
 --

 Key: CASSANDRA-4596
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4596
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Hadoop
Affects Versions: 1.1.3
 Environment: linux
Reporter: Normen Seemann

 I am using get_paged_slice() for range scans driven from within hadoop 
 mappers. The mapper that scans the last segment with get_paged_slice(), 
 where the start key is set *and* end_token is set to the 0 token, hangs within 
 the thrift call. The client shows the following jstack:
  - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], 
 int, int, int) @bci=0 (Interpreted frame)
  - java.net.SocketInputStream.read(byte[], int, int) @bci=84, line=129 
 (Interpreted frame)
  - org.apache.thrift.transport.TIOStreamTransport.read(byte[], int, int) 
 @bci=25, line=127 (Interpreted frame)
  - org.apache.thrift.transport.TTransport.readAll(byte[], int, int) @bci=22, 
 line=84 (Interpreted frame)
  - org.apache.thrift.transport.TFramedTransport.readFrame() @bci=10, line=129 
 (Interpreted frame)
  - org.apache.thrift.transport.TFramedTransport.read(byte[], int, int) 
 @bci=28, line=101 (Interpreted frame)
  - org.apache.thrift.transport.TTransport.readAll(byte[], int, int) @bci=22, 
 line=84 (Interpreted frame)
  - org.apache.thrift.protocol.TBinaryProtocol.readAll(byte[], int, int) 
 @bci=12, line=378 (Interpreted frame)
  - org.apache.thrift.protocol.TBinaryProtocol.readI32() @bci=52, line=297 
 (Interpreted frame)
  - org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin() @bci=1, 
 line=204 (Interpreted frame)
  - org.apache.thrift.TServiceClient.receiveBase(org.apache.thrift.TBase, 
 java.lang.String) @bci=4, line=69 (Interpreted frame)
  - org.apache.cassandra.thrift.Cassandra$Client.recv_get_paged_slice() 
 @bci=12, line=727 (Interpreted frame)
  - 
 org.apache.cassandra.thrift.Cassandra$Client.get_paged_slice(java.lang.String,
  org.apache.cassandra.thrift.KeyRange, java.nio.ByteBuffer, 
 org.apache.cassandra.thrift.ConsistencyLevel) @bci=10, line=711 (Interpreted 
 frame)
 Changing the end_token from 0 to 2**127-1 fixes the problem; however, I 
 would only consider this a workaround. Now, there are actually two issues:
 1.) Is the call to get_paged_slice() I described supported at all?
 2.) If it is not supported, please fail with a reasonable error instead of 
 just hanging.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4810) unit test failing under long-test

2012-10-15 Thread Bill Bucher (JIRA)
Bill Bucher created CASSANDRA-4810:
--

 Summary: unit test failing under long-test
 Key: CASSANDRA-4810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4810
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Bill Bucher
Priority: Minor


the following failure occurs when running ant long-test

[junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
[junit] Tests run: 5, Failures: 1, Errors: 0, Time elapsed: 31.28 sec
[junit] 
[junit] - Standard Output ---
[junit] org.apache.cassandra.db.compaction.LongCompactionsTest: sstables=2 
rowsper=1 colsper=20: 2173 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionsTest: sstables=2 
rowsper=20 colsper=1: 4531 ms
[junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
sstables=100 rowsper=800 colsper=5: 1864 ms
[junit] -  ---
[junit] Testcase: 
testStandardColumnCompactions(org.apache.cassandra.db.compaction.LongCompactionsTest):
FAILED
[junit] expected:&lt;9&gt; but was:&lt;99&gt;
[junit] junit.framework.AssertionFailedError: expected:&lt;9&gt; but was:&lt;99&gt;
[junit] at 
org.apache.cassandra.db.compaction.CompactionsTest.assertMaxTimestamp(CompactionsTest.java:207)
[junit] at 
org.apache.cassandra.db.compaction.LongCompactionsTest.testStandardColumnCompactions(LongCompactionsTest.java:141)
[junit] 
[junit] 
[junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4786) NPE in migration stage after creating an index

2012-10-15 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476466#comment-13476466
 ] 

Pavel Yaskevich commented on CASSANDRA-4786:


+1

 NPE in migration stage after creating an index
 --

 Key: CASSANDRA-4786
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4786
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 4786.txt


 The dtests are generating this error after trying to create an index in cql2:
 {noformat}
 ERROR [MigrationStage:1] 2012-10-09 20:54:12,796 CassandraDaemon.java (line 
 132) Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.reload(ColumnFamilyStore.java:162)
 at 
 org.apache.cassandra.db.DefsTable.updateColumnFamily(DefsTable.java:549)
 at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:479)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:344)
 at 
 org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:256)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 ERROR [Thrift:1] 2012-10-09 20:54:12,797 CustomTThreadPoolServer.java (line 
 214) Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:348)
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:238)
 at 
 org.apache.cassandra.service.MigrationManager.announceColumnFamilyUpdate(MigrationManager.java:209)
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:714)
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:816)
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1656)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3721)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3709)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:196)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:344)
 ... 13 more
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.reload(ColumnFamilyStore.java:162)
 at 
 org.apache.cassandra.db.DefsTable.updateColumnFamily(DefsTable.java:549)
 at 
 org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:479)
 at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:344)
 at 
 org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:256)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 ... 3 more
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Karl Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476509#comment-13476509
 ] 

Karl Mueller commented on CASSANDRA-4802:
-

Bootstrap means something specific in Cassandra, in that it implies some 
data has streamed in.

I think Startup completed would be great.

If there IS a bootstrap/replace/move then I think the message ought to specify 
which has happened and that it's ready now (if it's easy to do) :)

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial

 A regular startup completes successfully, but it has a confusing message at 
 the end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like "Startup completed! 
 Now serving reads" unless it actually has done one of the actions in the 
 message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath



[jira] [Commented] (CASSANDRA-4676) Can't delete/create a keyspace

2012-10-15 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476594#comment-13476594
 ] 

Pavel Yaskevich commented on CASSANDRA-4676:


[~soboleiv] Can you try with 1.1.6 version (latest cassandra-1.1 branch)?

 Can't delete/create a keyspace
 --

 Key: CASSANDRA-4676
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4676
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOs 5.5.
 Cassandra 1.0.6--1.1.5
Reporter: Ivan Sobolev
Assignee: Pavel Yaskevich
Priority: Trivial
 Attachments: cassandra-cli.txt, cassandra-ex.txt


 Deletion/recreation of the keyspace was not possible.
 *Workaround:*
 use system;
 set schema_keyspaces['OpsCenter']['durable_writes']=true;
 set schema_keyspaces['OpsCenter']['strategy_options']='{datacenter1:1}';
 set schema_keyspaces['OpsCenter']['name']='OpsCenter';
 set 
 schema_keyspaces['OpsCenter']['strategy_class']='org.apache.cassandra.locator.NetworkTopologyStrategy';
 drop keyspace OpsCenter;



[jira] [Updated] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4804:
--

 Reviewer: dbrosius
Affects Version/s: (was: 1.1.6)
   (was: 1.2.0 beta 1)
Fix Version/s: (was: 1.1.6)
   (was: 1.2.0 beta 1)
   1.2.0 beta 2
   1.1.7
 Assignee: Nikolay

Can you review, Dave?

 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nikolay
Assignee: Nikolay
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 In get_range_slices() there is a KeyRange parameter, range.
 There you can pass start_key - end_key, start_token - end_token, or 
 start_key - end_token. This is described in the documentation.
 In thrift/ThriftValidation.java the validation function validateKeyRange() 
 (line 489) correctly validates the KeyRange, including the start_key - 
 end_token case.
 However, in thrift/CassandraServer.java, get_range_slices() (line 686) makes 
 a wrong assumption:
if (range.start_key == null)
{
    ... // populate tokens
}
else
{
    bounds = new Bounds<RowPosition>(RowPosition.forKey(range.start_key, p),
                                     RowPosition.forKey(range.end_key, p));
}
 This means that if a start key is present, the end token is never checked; 
 instead, null is used as the end_key.
 Solution: in the same file, the next function, get_paged_slice() (line 741), 
 implements the same logic correctly:
if (range.start_key == null)
{
    ... // populate tokens
}
else
{
    RowPosition end = range.end_key == null
                    ? p.getTokenFactory().fromString(range.end_token).maxKeyBound(p)
                    : RowPosition.forKey(range.end_key, p);
    bounds = new Bounds<RowPosition>(RowPosition.forKey(range.start_key, p), end);
}



[jira] [Updated] (CASSANDRA-4806) Consistency of Append/Prepend on Lists need to be improved or clarified

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4806:
--

 Priority: Minor  (was: Major)
Affects Version/s: 1.2.0 beta 1
Fix Version/s: 1.2.0 beta 2
 Assignee: Sylvain Lebresne

I thought we were going with (3) but open to solutions.

 Consistency of Append/Prepend on Lists need to be improved or clarified
 ---

 Key: CASSANDRA-4806
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4806
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.0 beta 2


 Updates are idempotent in Cassandra; this rule makes it simple for developers 
 or client libraries to deal with retries on error. So far the only exception 
 was counters, and we worked around it by saying they were meant to be used 
 for analytics use cases.
 Now with the List datatype to be added in Cassandra 1.2 we have a similar 
 issue, as the Append and Prepend operations that can be applied to lists are 
 not idempotent. The state of the list is unknown whenever a timeout is 
 received from the coordinator node saying that no acknowledgement could be 
 received in time from replicas, or when the connection with the coordinator 
 node is broken while a client waits for an update request to be acknowledged.
 Of course the client can issue a read request on the List in the rare cases 
 when such an unknown state appears, but this is not really elegant, and such 
 a check doesn't come with any visibility or atomicity guarantees.
 I can see 3 options:
 * Remove the Append and Prepend operations. But this is a pity, as they're 
 really useful.
 * Make the behavior of these commands quasi-idempotent. I imagine that if we 
 attach the list of timestamps and/or hashes of recent update requests to each 
 List column stored in Cassandra, we would be able to avoid applying duplicate 
 updates.
 * Explicitly document these operations as potentially inconsistent under 
 these particular conditions.
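 The dedup idea in the second option can be sketched in a few lines. The class 
 below is purely illustrative (DedupList, append, and the WINDOW size are 
 invented names, not Cassandra internals), assuming each update carries a 
 client-generated id that is resent unchanged on retry:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hedged sketch of the quasi-idempotent option only: a list that remembers
// the ids of recently applied updates and drops duplicate retries.
public class DedupList<T> {
    private final List<T> items = new ArrayList<>();
    private final Deque<Long> recentIds = new ArrayDeque<>();
    private static final int WINDOW = 128; // how many recent update ids to keep

    // Returns true if the value was appended, false if the id was a duplicate.
    public boolean append(long updateId, T value) {
        if (recentIds.contains(updateId))
            return false; // retry of an already-applied update: ignore
        items.add(value);
        recentIds.addLast(updateId);
        if (recentIds.size() > WINDOW)
            recentIds.removeFirst();
        return true;
    }

    public List<T> snapshot() { return new ArrayList<>(items); }
}
```

 A bounded window keeps the per-column bookkeeping small, which is the cost the 
 "quasi" in quasi-idempotent refers to: retries older than the window would 
 still be applied twice.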



[jira] [Resolved] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4805.
---

Resolution: Invalid

Don't update CQL3 columnfamilies from the cli, it will rip out the column 
definitions as you see here.  (1.2 prevents this, in 1.1 we are leaving it 
as-is.)

 live update compaction strategy destroy counter column family 
 --

 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian

 1. in a running cassandra cluster with 5 nodes
 2. CLI : update column family {user_stats (a counter column family)} with 
 compaction_strategy='LeveledCompactionStrategy'
 3. nodetool -h host_ip compact
 result : 
 can't INCR/DECR the counter column any more, but it's OK to read.
 
 counter column family definition:
 String sql = "CREATE TABLE user_stats (" +
 " user_id bigint," +
 " counter_type text," +
 " counter_for_what text," +
 " counter_value counter," +
 " PRIMARY KEY(" +
 " user_id" +
 " ," +
 " counter_type" +
 " ," +
 " counter_for_what" +
 " )) WITH read_repair_chance = 1.0 AND replicate_on_write=true";
 [exception]
 java.sql.SQLSyntaxErrorException: InvalidRequestException(why:Unknown 
 identifier counter_value) 
   at 
 org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.init(CassandraPreparedStatement.java:92)
  
   at 
 org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)
  



[jira] [Commented] (CASSANDRA-4796) composite indexes don't always return results they should

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476639#comment-13476639
 ] 

Jonathan Ellis commented on CASSANDRA-4796:
---

+1

 composite indexes don't always return results they should
 -

 Key: CASSANDRA-4796
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4796
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 4726.txt


 composite_index_with_pk_test in the dtests is failing and it reproduces 
 manually.
 {noformat}
 cqlsh:foo> CREATE TABLE blogs ( blog_id int, time1 int, time2 int, 
 author text, content text, PRIMARY KEY (blog_id, time1, time2) );
 cqlsh:foo> create index on blogs(author);
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (1, 0, 0, 'foo', 'bar1');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (1, 0, 1, 'foo', 'bar2');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (2, 1, 0, 'foo', 'baz');
 cqlsh:foo> INSERT INTO blogs (blog_id, time1, time2, author, content) VALUES 
 (3, 0, 1, 'gux', 'qux');
 cqlsh:foo> SELECT blog_id, content FROM blogs WHERE time1 = 1 AND 
 author='foo';
 cqlsh:foo>
 {noformat}
 The expected result is:
 {noformat}
  blog_id | time1 | time2 | author | content
 ---------+-------+-------+--------+---------
        2 |     1 |     0 |    foo |     baz
 {noformat}



[jira] [Updated] (CASSANDRA-4807) Compaction progress counts more than 100%

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4807:
--

Assignee: Yuki Morishita

 Compaction progress counts more than 100%
 -

 Key: CASSANDRA-4807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.6
Reporter: Omid Aladini
Assignee: Yuki Morishita
Priority: Minor

 'nodetool compactionstats' compaction progress counts more than 100%:
 {code}
 pending tasks: 74
   compaction type   keyspace   column family   bytes compacted   bytes total   progress
        Validation        KSP             CF1       56192578305   84652768917     66.38%
        Compaction        KSP             CF2         162018591     119913592    135.11%
 {code}
 Hadn't experienced this before 1.1.3. Is it due to changes in 1.1.4-1.1.6 ?
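 For what it's worth, the progress column is just bytes compacted divided by 
 bytes total, with nothing clamping the ratio at 100%, so an underestimated 
 total yields figures like those above. A small sketch (the class and method 
 names are hypothetical, not nodetool code) reproduces the numbers:

```java
import java.util.Locale;

// Hypothetical helper reproducing how the progress column is derived:
// bytes compacted over bytes total. Nothing caps the ratio, so a total
// that was underestimated produces values above 100%.
public class CompactionProgress {
    public static String progress(long bytesCompacted, long bytesTotal) {
        return String.format(Locale.ROOT, "%.2f%%", 100.0 * bytesCompacted / bytesTotal);
    }

    public static void main(String[] args) {
        System.out.println(progress(56192578305L, 84652768917L)); // 66.38%
        System.out.println(progress(162018591L, 119913592L));     // 135.11%
    }
}
```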



[jira] [Commented] (CASSANDRA-3868) Remove or nullify replicate_on_write option

2012-10-15 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476645#comment-13476645
 ] 

Edward Capriolo commented on CASSANDRA-3868:


We should leave this feature in and rename it "mongo-mode"; then everyone will 
understand that it's dangerous but web-scale.

 Remove or nullify replicate_on_write option
 ---

 Key: CASSANDRA-3868
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3868
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.8.0
Reporter: Brandon Williams
 Fix For: 1.3

 Attachments: 3868.txt


 My understanding from Sylvain is that setting this option to false is rather 
 dangerous/stupid, and you should basically never do it.  So 1.1 is a good 
 time to get rid of it, or make it a no-op.



[jira] [Commented] (CASSANDRA-4794) cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476649#comment-13476649
 ] 

Jonathan Ellis commented on CASSANDRA-4794:
---

bq. can you reproduce in cqlsh? a cpp environment isn't super easy for me to 
build right now.

 cassandra 1.2.0 beta: atomic_batch_mutate fails with Default TException
 ---

 Key: CASSANDRA-4794
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4794
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.0 beta 1
 Environment: C++
Reporter: debadatta das
 Attachments: sample_AtomicBatchMutate.cpp


 Hi,
 We have installed cassandra 1.2.0 beta with thrift 0.7.0, using the cpp 
 interface. The batch_mutate API works fine, but the new atomic_batch_mutate 
 API, called with the same parameters as batch_mutate, fails with 
 org::apache::cassandra::TimedOutException, what(): Default TException. We get 
 the same TException error even after increasing the send/receive timeout 
 values of TSocket to 15 seconds or more.
 Details:
 cassandra ring:
 cassandra ring with single node
 consistency level parameter to atomic_batch_mutate
 ConsistencyLevel::ONE
 Thrift version:
 same results with thrift 0.5.0 and thrift 0.7.0.
 Thrift 0.8.0 seems unsupported with cassandra 1.2.0: it gives a compilation 
 error when building the cpp interface.
 We are calling atomic_batch_mutate() with same parameters as batch_mutate.
 cassclient.atomic_batch_mutate(outermap1, ConsistencyLevel::ONE);
 where outermap1 is
 map<string, map<string, vector<Mutation>>> outermap1;
 Please point out if anything is missing while using atomic_batch_mutate or 
 the reason behind the failure.
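 For comparison, here is a hedged Java sketch of the nested map shape both 
 batch_mutate and atomic_batch_mutate take (row key -> column family -> list 
 of mutations); the class name is invented and a plain String stands in for 
 the Thrift-generated Mutation struct:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration only: the outer map keys rows, the inner map keys column
// families, and each inner value is the list of mutations for that row/CF.
public class MutationMapExample {
    public static Map<String, Map<String, List<String>>> buildMutationMap(
            String rowKey, String columnFamily, List<String> mutations) {
        Map<String, List<String>> perCf = new HashMap<>();
        perCf.put(columnFamily, new ArrayList<>(mutations));
        Map<String, Map<String, List<String>>> outer = new HashMap<>();
        outer.put(rowKey, perCf);
        return outer;
    }

    public static void main(String[] args) {
        Map<String, Map<String, List<String>>> m =
                buildMutationMap("row1", "CF_Test", List.of("m1", "m2"));
        System.out.println(m.get("row1").get("CF_Test").size()); // 2
    }
}
```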
 The logs in cassandra system.log we get during atomic_batch_mutate failure 
 are:
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,604 MessagingService.java (line 
 800) 1 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,606 StatusLogger.java (line 53) 
 Pool Name Active Pending Blocked
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,607 StatusLogger.java (line 68) 
 ReadStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 RequestResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReadRepairStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 MutationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,608 StatusLogger.java (line 68) 
 ReplicateOnWriteStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 GossipStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 AntiEntropyStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MigrationStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 StreamStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,609 StatusLogger.java (line 68) 
 MemtablePostFlusher 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 FlushWriter 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 MiscStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 commitlog_archiver 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 68) 
 InternalResponseStage 0 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,610 StatusLogger.java (line 73) 
 CompactionManager 0 0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 85) 
 MessagingService n/a 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 95) 
 Cache Type Size Capacity KeysToSave Provider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 96) 
 KeyCache 227 74448896 all
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,611 StatusLogger.java (line 102) 
 RowCache 0 0 all org.apache.cassandra.cache.SerializingCacheProvider
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 109) 
 ColumnFamily Memtable ops,data
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 KeyspaceTest.CF_Test 1,71
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.local 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.peers 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.batchlog 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.NodeIdInfo 0,0
 INFO [ScheduledTasks:1] 2012-10-10 04:47:30,612 StatusLogger.java (line 112) 
 system.LocationInfo 0,0
 INFO [ScheduledTasks:1] 

[jira] [Commented] (CASSANDRA-4239) Support Thrift SSL socket

2012-10-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476652#comment-13476652
 ] 

Vijay commented on CASSANDRA-4239:
--

Couple of concerns:

1) DatabaseDescriptor.getClientEncryptionOptions(): the cli will now require 
the user to load cassandra.yaml; I guess we can just do new EncryptionOptions().
   The keystore is not needed in the client, so we might not need it.
2) If encryption is enabled on HSHA, currently we are not alerting the 
user/throwing an exception.

 Support Thrift SSL socket
 -

 Key: CASSANDRA-4239
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4239
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.1

 Attachments: 
 0001-CASSANDRA-4239-Support-Thrift-SSL-socket-both-to-the.patch


 Thrift has supported SSL encryption for a while now (THRIFT-106); we should 
 allow configuring that in cassandra.yaml



[jira] [Commented] (CASSANDRA-4571) Strange permament socket descriptors increasing leads to Too many open files

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476654#comment-13476654
 ] 

Jonathan Ellis commented on CASSANDRA-4571:
---

Related to CASSANDRA-4740?

 Strange permament socket descriptors increasing leads to Too many open files
 --

 Key: CASSANDRA-4571
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4571
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: CentOS 5.8 Linux 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 
 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux. 
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)
Reporter: Serg Shnerson
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.1.5

 Attachments: 4571.txt


 On the two-node cluster a strange increase in socket descriptors was found. 
 lsof -n | grep java shows many rows like
 java   8380 cassandra  113r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  114r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  115r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  116r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  117r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  118r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  119r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  120r unix 0x8101a374a080
 938348482 socket
  The number of these rows constantly increases. After about 24 hours this 
 situation leads to the error.
 We use the PHPCassa client. Load is not so high (around ~50kb/s on write).



[jira] [Updated] (CASSANDRA-4810) unit test failing under long-test

2012-10-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4810:
--

Assignee: Yuki Morishita

is this 1.1 or trunk?

 unit test failing under long-test
 -

 Key: CASSANDRA-4810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4810
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Bill Bucher
Assignee: Yuki Morishita
Priority: Minor

 the following failure occurs when running ant long-test
 [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
 [junit] Tests run: 5, Failures: 1, Errors: 0, Time elapsed: 31.28 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=2 rowsper=1 colsper=20: 2173 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=2 rowsper=20 colsper=1: 4531 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=100 rowsper=800 colsper=5: 1864 ms
 [junit] -  ---
 [junit] Testcase: 
 testStandardColumnCompactions(org.apache.cassandra.db.compaction.LongCompactionsTest):
   FAILED
 [junit] expected:<9> but was:<99>
 [junit] junit.framework.AssertionFailedError: expected:<9> but was:<99>
 [junit]   at 
 org.apache.cassandra.db.compaction.CompactionsTest.assertMaxTimestamp(CompactionsTest.java:207)
 [junit]   at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testStandardColumnCompactions(LongCompactionsTest.java:141)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED



[jira] [Commented] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476661#comment-13476661
 ] 

Dave Brosius commented on CASSANDRA-4804:
-

it seems to me you should check 


 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nikolay
Assignee: Nikolay
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt




[jira] [Comment Edited] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476661#comment-13476661
 ] 

Dave Brosius edited comment on CASSANDRA-4804 at 10/16/12 2:14 AM:
---

it seems to me you should check range.isSetEnd_key() and range.isSetEnd_token() 
to see what option you should use as i believe it's valid for the value to be 
null, meaning end of range.
+


  was (Author: dbrosius):
it seems to me you should check 

  
 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nikolay
Assignee: Nikolay
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt




[jira] [Commented] (CASSANDRA-4799) assertion failure in leveled compaction test

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476663#comment-13476663
 ] 

Jonathan Ellis commented on CASSANDRA-4799:
---

+1

(I wish we could keep that assert but I don't have a better solution.)

 assertion failure in leveled compaction test
 

 Key: CASSANDRA-4799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4799
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0


 It's somewhat rare, but I'm regularly seeing this failure on trunk:
 {noformat}
 [junit] Testcase: 
 testValidationMultipleSSTablePerLevel(org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest):
 FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError
 [junit]   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest.testValidationMultipleSSTablePerLevel(LeveledCompactionStrategyTest.java:78)
 [junit] 
 [junit] 
 [junit] Test 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest FAILED
 {noformat}
 I suspect there's a deeper problem, since this is a pretty fundamental 
 assertion.



[jira] [Comment Edited] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476661#comment-13476661
 ] 

Dave Brosius edited comment on CASSANDRA-4804 at 10/16/12 2:21 AM:
---

it seems to me you should check range.isSetEnd_key() and range.isSetEnd_token() 
to see what option you should use as i believe it's valid for the value to be 
null, meaning end of range.


bah... ignore this comment. new byte[0] is the way to specify end of range.


  was (Author: dbrosius):
it seems to me you should check range.isSetEnd_key() and 
range.isSetEnd_token() to see what option you should use as i believe it's 
valid for the value to be null, meaning end of range.
+

  
 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nikolay
Assignee: Nikolay
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 In get_range_slices() there is parameter KeyRange range.
 There you can pass start_key - end_key, start_token - end_token, or start_key 
 - end_token.
 This is described in the documentation.
 in thrift/ThriftValidation.java there is validation function 
 validateKeyRange() (line:489) that validates correctly the KeyRange, 
 including the case start_key - end_token.
 However in thrift/CassandraServer.java in function get_range_slices() on 
 line: 686 wrong assumption is made:
if (range.start_key == null)
{
   ... // populate tokens
}
else
{
   bounds = new Bounds<RowPosition>(RowPosition.forKey(range.start_key, p),
                                    RowPosition.forKey(range.end_key, p));
}
 This means that if a start key is present, the end token is never checked;
 instead, null is passed as the end_key.
 Solution:
 In the same file (thrift/CassandraServer.java), the next function, 
 get_paged_slice(), at line 741, handles the same case correctly:
if (range.start_key == null)
{
   ... // populate tokens
}
else
{
   RowPosition end = range.end_key == null
                   ? p.getTokenFactory().fromString(range.end_token).maxKeyBound(p)
                   : RowPosition.forKey(range.end_key, p);
   bounds = new Bounds<RowPosition>(RowPosition.forKey(range.start_key, p), end);
}



[jira] [Commented] (CASSANDRA-4804) Wrong assumption for KeyRange about range.end_token in get_range_slices().

2012-10-15 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476670#comment-13476670
 ] 

Dave Brosius commented on CASSANDRA-4804:
-

The 1.2 patch doesn't apply cleanly.

Remove the commented-out code.

Otherwise the patch works as expected.

 Wrong assumption for KeyRange about range.end_token in get_range_slices(). 
 ---

 Key: CASSANDRA-4804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4804
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Nikolay
Assignee: Nikolay
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2

 Attachments: cassa.1.1.6.diff.txt, cassa.1.2.x.diff.txt

   Original Estimate: 1h
  Remaining Estimate: 1h




[jira] [Commented] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread sunjian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476674#comment-13476674
 ] 

sunjian commented on CASSANDRA-4805:


[~jbellis] We currently use Cassandra 1.1.5; shall we update the cluster to 
1.2 after it is released?

 live update compaction strategy destroy counter column family 
 --

 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian

 1. in a running cassandra cluster with 5 nodes
 2. CLI : update column family {user_stats (a counter column family)} with 
 compaction_strategy='LeveledCompactionStrategy'
 3. nodetool -h host_ip compact
 result : 
 can't INCR/DECR the counter column any more , but it's OK to read .
 
 counter column family definition :
   String sql = "CREATE TABLE user_stats (" +
                "user_id bigint," +
                "counter_type text," +
                "counter_for_what text," +
                "counter_value counter," +
                "PRIMARY KEY(" +
                "user_id" +
                "," +
                "counter_type" +
                "," +
                "counter_for_what" +
                ")) WITH read_repair_chance = 1.0 AND replicate_on_write=true";
 [exception]
 java.sql.SQLSyntaxErrorException: InvalidRequestException(why:Unknown 
 identifier counter_value)
   at org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:92)
   at org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:303)
  



[jira] [Created] (CASSANDRA-4811) Some cqlsh help topics don't work (select, create, insert and anything else that is a cql statement)

2012-10-15 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-4811:


 Summary: Some cqlsh help topics don't work (select, create, insert 
and anything else that is a cql statement)
 Key: CASSANDRA-4811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4811
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1, 1.1.6
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 1.1.7, 1.2.0 beta 2


cqlsh> help select
Improper help command.

Same will happen if you look up a help topic for any other cql statement.
38748b43d8de17375c7cc16e7a4969ca4c1a2aa1 broke it (#4198) 5 months ago.




[jira] [Commented] (CASSANDRA-4805) live update compaction strategy destroy counter column family

2012-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476676#comment-13476676
 ] 

Jonathan Ellis commented on CASSANDRA-4805:
---

If you are using CQL3, then I would definitely consider that.  beta2 will be 
released shortly and that will give you a better idea what to expect.

 live update compaction strategy destroy counter column family 
 --

 Key: CASSANDRA-4805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5
 Environment: centos 64 , cassandra 1.1.5
Reporter: sunjian




[jira] [Commented] (CASSANDRA-4810) unit test failing under long-test

2012-10-15 Thread Bill Bucher (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476682#comment-13476682
 ] 

Bill Bucher commented on CASSANDRA-4810:


trunk.

 unit test failing under long-test
 -

 Key: CASSANDRA-4810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4810
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Bill Bucher
Assignee: Yuki Morishita
Priority: Minor

 the following failure occurs when running ant long-test
 [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
 [junit] Tests run: 5, Failures: 1, Errors: 0, Time elapsed: 31.28 sec
 [junit] 
 [junit] - Standard Output ---
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=2 rowsper=1 colsper=20: 2173 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=2 rowsper=20 colsper=1: 4531 ms
 [junit] org.apache.cassandra.db.compaction.LongCompactionsTest: 
 sstables=100 rowsper=800 colsper=5: 1864 ms
 [junit] -  ---
 [junit] Testcase: 
 testStandardColumnCompactions(org.apache.cassandra.db.compaction.LongCompactionsTest):
   FAILED
 [junit] expected:<9> but was:<99>
 [junit] junit.framework.AssertionFailedError: expected:<9> but was:<99>
 [junit]   at 
 org.apache.cassandra.db.compaction.CompactionsTest.assertMaxTimestamp(CompactionsTest.java:207)
 [junit]   at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testStandardColumnCompactions(LongCompactionsTest.java:141)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED



[jira] [Commented] (CASSANDRA-4571) Strange permament socket descriptors increasing leads to Too many open files

2012-10-15 Thread Chris Herron (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476710#comment-13476710
 ] 

Chris Herron commented on CASSANDRA-4571:
-

FYI was able to reproduce the symptom on Cassandra 1.1.6.
@[~jbellis] Re: CASSANDRA-4740 and whether it relates to this: 
* Haven't looked across all nodes for phantom connections yet
* Have searched across all logs - found a single instance of "Timed out 
replaying hints".
* Mina mentioned that nodes running earlier kernels (2.6.39, 3.0, 3.1) haven't 
exhibited this. We are seeing this on Linux kernel 2.6.35 with Java 1.6.0_35.


 Strange permament socket descriptors increasing leads to Too many open files
 --

 Key: CASSANDRA-4571
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4571
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
 Environment: CentOS 5.8 Linux 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 
 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux. 
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)
Reporter: Serg Shnerson
Assignee: Jonathan Ellis
Priority: Critical
 Fix For: 1.1.5

 Attachments: 4571.txt


 On a two-node cluster, a strange, steady increase in socket descriptors was 
 found. lsof -n | grep java shows many rows like
 java   8380 cassandra  113r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  114r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  115r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  116r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  117r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  118r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  119r unix 0x8101a374a080
 938348482 socket
 java   8380 cassandra  120r unix 0x8101a374a080
 938348482 socket
  And the number of these rows is constantly increasing. After about 24 hours 
 this situation leads to the error.
 We use the PHPCassa client. Load is not that high (around ~50 KB/s on writes). 



git commit: update startup log messages Patch by Vijay, reviewed by brandonwilliams for CASSANDRA-4802

2012-10-15 Thread vijay
Updated Branches:
  refs/heads/trunk 7e937b3d1 -> d525cf969


update startup log messages
Patch by Vijay, reviewed by brandonwilliams for CASSANDRA-4802


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d525cf96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d525cf96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d525cf96

Branch: refs/heads/trunk
Commit: d525cf969c042b21a9375446f5449ee82d7d1484
Parents: 7e937b3
Author: Vijay Parthasarathy vparthasara...@ipad.apple.com
Authored: Mon Oct 15 21:21:51 2012 -0700
Committer: Vijay Parthasarathy vparthasara...@ipad.apple.com
Committed: Mon Oct 15 21:21:51 2012 -0700

--
 .../apache/cassandra/service/StorageService.java   |7 ---
 1 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d525cf96/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 7d92fbe..8de0bd2 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -640,7 +640,7 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
             if (DatabaseDescriptor.getNumTokens() == 1)
                 logger.warn("Generated random token " + tokens + ". Random tokens will result in an unbalanced ring; see http://wiki.apache.org/cassandra/Operations");
             else
-                logger.info("Generated random tokens.");
+                logger.info("Generated random tokens. tokens are {}", tokens);
         }
         else
         {
@@ -716,12 +716,12 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
             if (!current.isEmpty())
                 for (InetAddress existing : current)
                     Gossiper.instance.replacedEndpoint(existing);
-            logger.info("Bootstrap/Replace/Move completed! Now serving reads.");
+            logger.info("Startup completed! Now serving reads.");
             assert tokenMetadata.sortedTokens().size() > 0;
         }
         else
         {
-            logger.info("Bootstrap complete, but write survey mode is active, not becoming an active ring member. Use JMX (StorageService->joinRing()) to finalize ring joining.");
+            logger.info("Startup complete, but write survey mode is active, not becoming an active ring member. Use JMX (StorageService->joinRing()) to finalize ring joining.");
         }
     }
 
@@ -837,6 +837,7 @@ public class StorageService implements 
IEndpointStateChangeSubscriber, StorageSe
         Tracing.instance();
         setMode(Mode.JOINING, "Starting to bootstrap...", true);
         new BootStrapper(FBUtilities.getBroadcastAddress(), tokens, tokenMetadata).bootstrap(); // handles token update
+        logger.info("Bootstrap completed! for the tokens {}", tokens);
 }
 
 public boolean isBootstrapMode()



[jira] [Resolved] (CASSANDRA-4802) Regular startup log has confusing Bootstrap/Replace/Move completed! without boostrap, replace, or move

2012-10-15 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay resolved CASSANDRA-4802.
--

   Resolution: Fixed
Fix Version/s: 1.2.0

Committed 
https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blobdiff;f=src/java/org/apache/cassandra/service/StorageService.java;h=8de0bd24632c89ea1b41c952ee6ec2db58808894;hp=7d92fbe0ff15c8c686a93425f4fccca49b921c0b;hb=d525cf969c042b21a9375446f5449ee82d7d1484;hpb=7e937b3d1308c0774e4b0366b6e66b14af1dd5f6

Let me know if you need more info, and I will reopen this ticket.

 Regular startup log has confusing Bootstrap/Replace/Move completed! without 
 boostrap, replace, or move
 

 Key: CASSANDRA-4802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.12
 Environment: RHEL6, JDK1.6
Reporter: Karl Mueller
Assignee: Vijay
Priority: Trivial
 Fix For: 1.2.0


 A regular startup completes successfully, but it has a confusing message the 
 end of the startup:
   INFO 15:19:29,137 Bootstrap/Replace/Move completed! Now serving reads.
 This happens despite no bootstrap, replace, or move.
 While purely cosmetic, this makes you wonder what the node just did - did it 
 just bootstrap?!  It should simply read something like "Startup completed! 
 Now serving reads" unless it actually has done one of the actions in the 
 error message.
 Complete log at the end:
 INFO 15:13:30,522 Log replay complete, 6274 replayed mutations
  INFO 15:13:30,527 Cassandra version: 1.0.12
  INFO 15:13:30,527 Thrift API version: 19.20.0
  INFO 15:13:30,527 Loading persisted ring state
  INFO 15:13:30,541 Starting up server gossip
  INFO 15:13:30,542 Enqueuing flush of Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,543 Writing Memtable-LocationInfo@1828864224(29/36 
 serialized/live bytes, 1 ops)
  INFO 15:13:30,550 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-274-Data.db (80 bytes)
  INFO 15:13:30,563 Starting Messaging Service on port 7000
  INFO 15:13:30,571 Using saved token 31901471898837980949691369446728269823
  INFO 15:13:30,572 Enqueuing flush of Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,573 Writing Memtable-LocationInfo@294410307(53/66 
 serialized/live bytes, 2 ops)
  INFO 15:13:30,579 Completed flushing 
 /data2/data-cassandra/system/LocationInfo-hd-275-Data.db (163 bytes)
  INFO 15:13:30,581 Node kaos-cass02.xxx/1.2.3.4 state jump to normal
  INFO 15:13:30,598 Bootstrap/Replace/Move completed! Now serving reads.
  INFO 15:13:30,600 Will not load MX4J, mx4j-tools.jar is not in the classpath



[jira] [Created] (CASSANDRA-4812) Require enabling cross-node timeouts

2012-10-15 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-4812:
-

 Summary: Require enabling cross-node timeouts
 Key: CASSANDRA-4812
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4812
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.0 beta 2


Deploying 1.2 against a cluster whose clocks are not synchronized will cause 
*every* request to timeout.  Suggest adding a {{cross_node_timeout}} option 
defaulting to false that users must explicitly enable after installing ntpd.  
Otherwise we fall back to the pessimistic case of assuming the request was 
forwarded to the replica instantly by the coordinator.
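A hypothetical sketch of how the proposed option might appear in cassandra.yaml (the option name follows the suggestion above; the final setting and wording may differ):

```yaml
# Enable operation timeout information exchange between nodes, so replicas can
# drop requests that have already exceeded the timeout at the coordinator.
# Requires synchronized clocks across the cluster (e.g. ntpd). When false,
# each replica pessimistically assumes the request was forwarded to it
# instantly by the coordinator.
cross_node_timeout: false
```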
