[jira] Commented: (CASSANDRA-1311) Support (asynchronous) triggers

2010-12-28 Thread Maxim Grinev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975462#action_12975462
 ] 

Maxim Grinev commented on CASSANDRA-1311:
-

The implementation guarantees that triggers will be executed at least once even 
if the update is only partially executed; the replicas that are updated take 
care of that. This means that if a write updates *some* replica and the write 
coordinator crashes before executing the triggers and acknowledging the client, 
the triggers will still be executed (as many times as the number of replicas 
that were updated). So missing an update is not a problem, and triggers are a 
good solution for indexing. The only case triggers are not good for is where 
the trigger procedure is not idempotent. For example, when a trigger increments 
a counter, the counter will be incremented with the same value more than once 
if the write coordinator fails. 
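
To make the idempotency point concrete, here is a minimal, hypothetical Java sketch (not the API of the attached patch): the same trigger logic is run twice, as it would be if two updated replicas both fired it after a coordinator crash. The index-style trigger converges to the same state either way, while the counter-style trigger double-counts.

{code}
import java.util.HashMap;
import java.util.Map;

public class TriggerIdempotencyDemo {
    // Idempotent trigger: (re)writes an index entry derived from the update.
    static void indexTrigger(Map<String, String> index, String rowKey, String value) {
        index.put(value, rowKey); // running this twice leaves the index unchanged
    }

    // Non-idempotent trigger: increments a counter.
    static void counterTrigger(Map<String, Integer> counters, String counterKey) {
        counters.merge(counterKey, 1, Integer::sum); // running this twice over-counts
    }

    public static void main(String[] args) {
        Map<String, String> index = new HashMap<>();
        Map<String, Integer> counters = new HashMap<>();

        // Simulate at-least-once execution: two replicas fire the same trigger.
        for (int replica = 0; replica < 2; replica++) {
            indexTrigger(index, "row42", "alice");
            counterTrigger(counters, "writes");
        }

        System.out.println("index    = " + index);    // {alice=row42} -- the retry is harmless
        System.out.println("counters = " + counters); // {writes=2}    -- one write counted twice
    }
}
{code}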

 Support (asynchronous) triggers
 ---

 Key: CASSANDRA-1311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311
 Project: Cassandra
  Issue Type: New Feature
  Components: Contrib
Reporter: Maxim Grinev
 Fix For: 0.8

 Attachments: HOWTO-PatchAndRunTriggerExample-update1.txt, 
 HOWTO-PatchAndRunTriggerExample.txt, ImplementationDetails-update1.pdf, 
 ImplementationDetails.pdf, trunk-967053.txt, trunk-984391-update1.txt, 
 trunk-984391-update2.txt


 Asynchronous triggers are a basic mechanism for implementing various use cases 
 of asynchronous execution of application code on the database side, for example 
 to support indexes and materialized views, online analytics, and push-based 
 data propagation.
 Please find the motivation, a description of triggers, and a list of applications here:
 http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/
 An example of using triggers for indexing:
 http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/
 Implementation details are attached.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1876) Allow minor Parallel Compaction

2010-12-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975502#action_12975502
 ] 

Germán Kondolf commented on CASSANDRA-1876:
---

Regarding the original issue, I've repatched the trunk version to sync with the 
new functionality of keeping sstable cached keys after compaction.

 Allow minor Parallel Compaction
 ---

 Key: CASSANDRA-1876
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1876
 Project: Cassandra
  Issue Type: Improvement
Reporter: Germán Kondolf
Priority: Minor
 Attachments: 1876-reformatted.txt, compactionPatch-V2.txt, 
 compactionPatch-V3.txt


 Hi,
 According to the dev list discussion (1), I've patched the CompactionManager 
 to allow parallel compaction.
 Mainly, it splits the sstables to compact into the desired buckets, configured 
 by a new parameter, compaction_parallelism, with a current default of 1.
 Then it just submits the units of work to a new executor and waits for them 
 to finish.
 The patch was created against trunk, so I don't know the exact affected 
 version; I assume it is 0.8.
 I'll also try to apply this patch to 0.6.X for my current production 
 installation, and then reattach it.
 (1) http://markmail.org/thread/cldnqfh3s3nufnke
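
As a rough illustration of the approach described above (bucket the compaction work, submit each bucket to an executor sized by a compaction_parallelism-style setting, then wait for completion), here is a self-contained Java sketch; the names are illustrative, not the patch's actual classes.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCompactionSketch {
    public static void main(String[] args) throws Exception {
        int compactionParallelism = 2; // stand-in for the new configuration parameter
        List<List<String>> buckets = List.of(
                List.of("sstable-1", "sstable-2"),
                List.of("sstable-3", "sstable-4", "sstable-5"));

        ExecutorService executor = Executors.newFixedThreadPool(compactionParallelism);
        List<Future<?>> pending = new ArrayList<>();

        // Submit one unit of work per bucket...
        for (List<String> bucket : buckets) {
            pending.add(executor.submit(() ->
                    System.out.println(Thread.currentThread().getName() + " compacting " + bucket)));
        }
        // ...then wait for all of them to finish before returning.
        for (Future<?> f : pending) {
            f.get();
        }
        executor.shutdown();
    }
}
{code}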

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-803) remove PropertyConfigurator from CassandraDaemon

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-803.
--

Resolution: Duplicate

bq. As far I can see, the PropertyConfigurator is already removed with rev 
934505 for CASSANDRA-971

Right. Closing this one as a duplicate.

bq. setLog4jLevel should be kept, but as some kind of optional operation, just 
in the case that log4j is present

This is what we're going with for now.
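
A minimal sketch of that "optional operation" idea, assuming a reflection-based check so that slf4j-only deployments simply skip the call when log4j is absent from the classpath (this is illustrative, not the committed implementation):

{code}
import java.lang.reflect.Method;

public class OptionalLog4jLevel {
    /** Best-effort setLog4jLevel: a no-op when log4j is not on the classpath. */
    public static void setLog4jLevel(String classQualifier, String rawLevel) {
        try {
            Class<?> levelClass  = Class.forName("org.apache.log4j.Level");
            Class<?> loggerClass = Class.forName("org.apache.log4j.Logger");
            Object level  = levelClass.getMethod("toLevel", String.class).invoke(null, rawLevel);
            Object logger = loggerClass.getMethod("getLogger", String.class).invoke(null, classQualifier);
            Method setLevel = loggerClass.getMethod("setLevel", levelClass);
            setLevel.invoke(logger, level);
        } catch (ClassNotFoundException e) {
            // log4j is not present: silently skip, as suggested above
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("unexpected failure invoking log4j", e);
        }
    }

    public static void main(String[] args) {
        setLog4jLevel("org.apache.cassandra", "DEBUG"); // harmless whether or not log4j is present
    }
}
{code}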

 remove PropertyConfigurator from CassandraDaemon
 

 Key: CASSANDRA-803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-803
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6
Reporter: Jesse McConnell

 In order for users to make use of the EmbeddedCassandraService for unit 
 testing they need to have a dependency declared on log4j.  
 It would be nice if we could use the log4j-over-slf4j artifact to bridge this 
 requirement for those of us using slf4j.  
 http://www.slf4j.org/legacy.html#log4j-over-slf4j
 Currently it errors with the direct usage of the PropertyConfigurator in 
 o.a.c.thrift.CassandraDaemon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1859) distributed test harness

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975506#action_12975506
 ] 

Jonathan Ellis commented on CASSANDRA-1859:
---

On the old CASSANDRA-874, Peter pointed out that Libvirt on Linux allows you 
to set up virtual network interfaces on a given host, so you could get multiple 
instances of Cassandra running on the same local hardware; one could probably 
also simulate failing nodes. See virsh --net-create for examples.

It would be nice to not require ec2/rax to run the test suite.

 distributed test harness
 

 Key: CASSANDRA-1859
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1859
 Project: Cassandra
  Issue Type: Test
  Components: Tools
Reporter: Kelvin Kakugawa
Assignee: Kelvin Kakugawa
 Fix For: 0.7.1


 Distributed Test Harness
 - deploys a cluster on a cloud provider
 - runs tests targeted at the cluster
 - tears down the cluster

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-918) Create a dazzling web ui for cassandra

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-918.
--

Resolution: Won't Fix

The jmx part is addressed by CASSANDRA-1068. More than that probably doesn't 
belong in-tree.

 Create a dazzling web ui for cassandra
 --

 Key: CASSANDRA-918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-918
 Project: Cassandra
  Issue Type: Wish
  Components: Contrib
Reporter: Gary Dusbabek

 This would need to pull in jmx attributes and be able to execute operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



buildbot success in ASF Buildbot on cassandra-0.6

2010-12-28 Thread buildbot
The Buildbot has detected a restored build of cassandra-0.6 on ASF Buildbot.
Full details are available at:
 http://ci.apache.org/builders/cassandra-0.6/builds/264

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: isis_ubuntu

Build Reason: 
Build Source Stamp: [branch cassandra/branches/cassandra-0.6] 1053362
Blamelist: eevans

Build succeeded!

sincerely,
 -The Buildbot



[jira] Commented: (CASSANDRA-1908) Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA

2010-12-28 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975512#action_12975512
 ] 

Eric Evans commented on CASSANDRA-1908:
---

It might be trivial to implement a JNI library, but I don't think it's going to 
be trivial to build architecture-dependent code at release time, field issues, 
debug, etc., for N platforms.

I'm also open to alternatives, but I'd need to be convinced that the cure 
wasn't worth the disease.  

 Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA
 ---

 Key: CASSANDRA-1908
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1908
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Hiram Chirino
 Fix For: 0.7.1

 Attachments: cassandra-jni.zip


 Cassandra can't ship JNA out of the box since it's LGPL licensed, so many of 
 the performance optimizing features in the CLibrary class are not available 
 in a simple install.  It should be trivial to implement a real JNI library 
 for the CLibrary class.
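
For orientation, the Java half of such a binding is small; a hypothetical sketch (not the attached cassandra-jni.zip) might look like the following. It compiles on its own, but the matching native library has to be built per platform, which is exactly the release and support burden raised in the comment above.

{code}
// Hypothetical Java side of a JNI-based CLibrary replacement.
public class NativeCLibrary {
    static {
        // Expects libcassandrajni.so / .dylib / cassandrajni.dll on java.library.path.
        System.loadLibrary("cassandrajni");
    }

    /** e.g. advise the kernel not to cache a file region (posix_fadvise on Linux). */
    public static native int trySkipCache(int fd, long offset, long length);

    /** e.g. attempt to lock the process's pages in memory (mlockall). */
    public static native int tryMlockall();
}
{code}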

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1905) count timeouts towards dynamicsnitch latencies

2010-12-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975513#action_12975513
 ] 

Hudson commented on CASSANDRA-1905:
---

Integrated in Cassandra-0.6 #38 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.6/38/])


 count timeouts towards dynamicsnitch latencies
 --

 Key: CASSANDRA-1905
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1905
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.6
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Fix For: 0.6.9, 0.7.1

 Attachments: 1905.txt


 receiveTiming is only called by ResponseVerbHandler; we need to add timing 
 information for timed-out requests as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-760) Optimize pending ranges.

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-760.
--

Resolution: Not A Problem

Discussed w/ Gary on irc -- since pending ranges only need to be updated on 
token movement, this is not an important thing to optimize.

 Optimize pending ranges.
 

 Key: CASSANDRA-760
 URL: https://issues.apache.org/jira/browse/CASSANDRA-760
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Gary Dusbabek
Priority: Minor

 After 620, pending ranges are calculated on a per-table basis.  This isn't 
 optimal, as the pending ranges of some tables will be subsets of pending 
 ranges for other tables with the same replication strategy but a greater 
 replication factor.
 This can be optimized.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1709) CQL keyspace and column family management

2010-12-28 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975516#action_12975516
 ] 

Eric Evans commented on CASSANDRA-1709:
---

The other thought I had was that {{ALTER}} in SQL is used for schema 
modification, and while we're technically schema-less, the assignment of column 
metadata is the closest thing we have to adding/removing/modifying column 
schema.

 CQL keyspace and column family management
 -

 Key: CASSANDRA-1709
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1709
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 0.8
Reporter: Eric Evans
Priority: Minor
 Fix For: 0.8

   Original Estimate: 0h
  Remaining Estimate: 0h

 CQL specification and implementation for schema management.
 This corresponds to the following RPC methods:
 * system_add_column_family()
 * system_add_keyspace()
 * system_drop_keyspace()
 * system_update_keyspace()
 * system_update_columnfamily()

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-939) Decommisioning does not update status

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-939.
--

Resolution: Cannot Reproduce

I believe this was fixed at some point in the 0.6 releases

 Decommisioning does not update status
 -

 Key: CASSANDRA-939
 URL: https://issues.apache.org/jira/browse/CASSANDRA-939
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.6
Reporter: gabriele renzi
Priority: Minor

 This happened using 0.6-beta3 on a test two-node cluster. 
 Steps that lead to problem:
 - launch node A
 - load data in A
 - launch node B connected to A
 - load data in both (replicationfactor is 1)
 - use nodetool to decommission A
 At this point something went wrong inside A, and the command seemingly failed:
 r...@a$ ./bin/nodetool --host localhost decommission
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
 at $Proxy0.decommission(Unknown Source)
 at 
 org.apache.cassandra.tools.NodeProbe.decommission(NodeProbe.java:324)
 at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:447)
 Caused by: java.rmi.UnmarshalException: Error unmarshaling return header; 
 nested exception is:
 java.io.EOFException
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:209)
 at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
 at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown 
 Source)
 at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:993)
 at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:288)
 ... 3 more
 Caused by: java.io.EOFException
 at java.io.DataInputStream.readByte(DataInputStream.java:250)
 at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:195)
 ... 8 more
  
 At this point, `nodetool streams` on A reported Mode: decommissioned but was 
 still sending streams. 
 Likewise, node B still reported Mode: normal and was still receiving streams.
 In both cases the streaming values were reported as 0/size-of-data for all 
 the files.
 Even ~24 hours after turning off node A, node B still reports the same 
 thing.
 The decommissioning seems to have actually worked, but the status never got 
 updated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1132) Add min/max counter support on top of the incr/decr counters..

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1132.
---

Resolution: Later
  Assignee: (was: Adam Samet)

Closing for lack of activity

 Add min/max counter support on top of the incr/decr counters..
 --

 Key: CASSANDRA-1132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1132
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Adam Samet
   Original Estimate: 10h
  Remaining Estimate: 10h

 I'd like to add support for min and max counters on top of Kelvin's incr/decr 
 counter implementation.  This will involve multiple resolution strategies for 
 clocks, and a bit of a refactoring to support multiple Reconciler / Context 
 classes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1124) Improve Cassandra to MapReduce locality sharing

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1124.
---

Resolution: Not A Problem

bq. run Hadoop JobTrackers on all of the Cassandra nodes

this is the approach we've been going with.

 Improve Cassandra to MapReduce locality sharing
 ---

 Key: CASSANDRA-1124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1124
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Reporter: Jeremy Hanna
Priority: Minor

 Currently, the Hadoop integration only passes the data's local node 
 information (ColumnFamilyRecordReader -> RowIterator -> getLocation).  Hadoop can 
 take advantage of full locality, and it's possible that we have full locality 
 configured in Cassandra.
 So this improvement is for adding the full locality of the data into the 
 String in a way that Hadoop can make use of it with its Job/Task Trackers.
 This will allow jobs to be placed on the same rack and/or datacenter 
 when possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1162) Put the pig loadfunc for Cassandra into core

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1162.
---

Resolution: Duplicate

part of CASSANDRA-1805

 Put the pig loadfunc for Cassandra into core
 

 Key: CASSANDRA-1162
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1162
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jeremy Hanna
Priority: Minor

 Currently the Pig loadfunc for Cassandra lives in a contrib module.  
 Currently it also doesn't work properly in all cases so it would need some 
 fixing there too.  For example, I haven't gotten it to work standalone with 
 just pig.  It seems to work better against a hadoop cluster.
 The other question for putting it into core is whether we keep an example of 
 its usage in contrib - kind of like what we're doing with WordCount - only 
 for Pig - the reason being that you need at least Pig somewhere for the 
 CassandraStore (the loadfunc) to actually be useful.  Seems very similar to 
 the WordCount case though - and it can be more stable with CassandraStore 
 being built with core (hopefully).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1175) Client/Server deadlock in aggressive bulk upload (maybe a thrift bug?) (2010-06-07_12-31-16)

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1175.
---

Resolution: Not A Problem

bq. Makes me wonder if the async commitlog writes are still just missing the 
backpressure changes?

binary writes are not going to get backpressure support.

the good news is CASSANDRA-1278 is open to add bulk loading via the streaming 
api which will be simpler and more robust.

 Client/Server deadlock in aggressive bulk upload (maybe a thrift bug?) 
 (2010-06-07_12-31-16)
 

 Key: CASSANDRA-1175
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1175
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: apache-cassandra-2010-06-07_12-31-16
Reporter: Jesse Hallio

 I was testing to see how long it takes to upload some 222M lines into 
 Cassandra. Using a single machine (4-core, 8G of mem, opensolaris, 1.6.0_10 
 64bit and 32bit) to run the server and client.
 The client creates a single keyspace with a single column family. The 
 inserted data is 8-byte key, with 26 NVs (3-5 ascii bytes per key, 8 bytes 
 per value) for each line. Using RackUnawareStrategy and replication factor 1. 
 The server install is pretty much out-of-the-box, with -d64 -server -Xmx6G 
 for the server end. (3G for the 32bit VM). The client writes the changes in 
 batches of 1000 lines with batch_mutate, and outputs a logging line every 50k 
 lines.
 The import hangs at random points - sometimes after 6900K mark (I think I saw 
 even 10M yesterday, but I lost the window and the backbuffer with it), 
 sometimes only after 1750K. kill -QUIT gives for the server:
 ---
 "pool-1-thread-1" prio=3 tid=0x00b68800 nid=0x5c runnable 
 [0xfd7e676f5000..0xfd7e676f5920]
java.lang.Thread.State: RUNNABLE
 at java.net.SocketInputStream.socketRead0(Native Method)
 at java.net.SocketInputStream.read(SocketInputStream.java:129)
 at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
 - locked <0xfd7e7ac80fb8> (a java.io.BufferedInputStream)
 at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
 at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:12840)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:1743)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:1317)
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 ---
 and for the client:
 ---
 "main" prio=3 tid=0x0807 nid=0x2 runnable [0xfe38e000..0xfe38ed38]
java.lang.Thread.State: RUNNABLE
 at java.net.SocketInputStream.socketRead0(Native Method)
 at java.net.SocketInputStream.read(SocketInputStream.java:129)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
 at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
 - locked <0xbad8a840> (a java.io.BufferedInputStream)
 at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:126)
 at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
 at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:745)
 at 
 org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:729)
 at mycode.MyClass.main(MyClass.java:169)
 ---
 It looks like both ends are trying to read simultaneously from each other, 
 which kind of looks like a thrift bug; but I don't have clear idea what 
 happens in org.apache.cassandra.thrift.Cassandra.
 I tried using thrift r948492, but it didn't help (I didn't recompile the 
 interface classes, I only switched the runtime jar).

-- 
This message is automatically generated by JIRA.
-
You can reply to this 

[jira] Resolved: (CASSANDRA-1215) Server Side Operations

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1215.
---

Resolution: Duplicate

 Server Side Operations
 --

 Key: CASSANDRA-1215
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1215
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Edward Capriolo

 Cassandra values are byte arrays. To operate on these byte arrays a client 
 will have to get the value from a server, modify it, and retransmit the 
 value back to be set. This is NOT a request for atomic operations; however, 
 some types of atomic operations may be possible after vector clocks are 
 implemented. Regardless of vector clocks or atomic operations, some common 
 string operations would still be useful.  
 These types of functions may include:
 {noformat}
 append
 substring
 increment
 indexof
 {noformat}
 Operations that work on lists would be more challenging again because 
 Cassandra does not know or care what the underlying column data is, but those 
 could be specified in the method call. 
 {noformat}
 pop(String delimiter)
 itemat(String delimiter, item i)
 {noformat}
 or possibly described in the schema (I do not like this idea but wanted to 
 mention it)
 {noformat}
 ColumnFamily valueDelimeter\t {noformat}
 Also theoretically a user could pass an object implementing an interface or a 
 string that is a little language that operates on the data to return some 
 result or change the data.
 I would like to discuss the merits of such features, and if we decide these 
 would be useful I would like to work on implementing them.  I can not assign 
 myself this ticket otherwise I would have.
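
To make the kind of operation concrete, here is a standalone Java sketch of append and increment over opaque byte[] values (the encodings are assumptions made for the example, since Cassandra itself does not interpret the bytes):

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ServerSideOpsSketch {
    /** append: new value = old value + suffix. */
    static byte[] append(byte[] value, byte[] suffix) {
        byte[] out = Arrays.copyOf(value, value.length + suffix.length);
        System.arraycopy(suffix, 0, out, value.length, suffix.length);
        return out;
    }

    /** increment: interpret the value as a big-endian 64-bit integer and add delta. */
    static byte[] increment(byte[] value, long delta) {
        long current = ByteBuffer.wrap(value).getLong();
        return ByteBuffer.allocate(Long.BYTES).putLong(current + delta).array();
    }

    public static void main(String[] args) {
        byte[] name = "cass".getBytes(StandardCharsets.UTF_8);
        byte[] appended = append(name, "andra".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(appended, StandardCharsets.UTF_8)); // cassandra

        byte[] counter = ByteBuffer.allocate(Long.BYTES).putLong(41).array();
        System.out.println(ByteBuffer.wrap(increment(counter, 1)).getLong()); // 42
    }
}
{code}

Doing this on the server avoids the read-modify-write round trip, but without atomicity it is still last-write-wins, as the description notes.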

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (CASSANDRA-1908) Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA

2010-12-28 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975512#action_12975512
 ] 

Eric Evans edited comment on CASSANDRA-1908 at 12/28/10 10:21 AM:
--

It might be trivial to implement a JNI library, but I don't think it's going to 
be trivial to build architecture-dependent code at release time, field issues, 
debug, etc., for N platforms.

I'm also open to alternatives, but I'd need to be convinced that the cure 
wasn't worse than the disease.  

  was (Author: urandom):
It might be trivial to implement a JNI library, but I don't think it's 
going to be trivial to build  architecture dependent code at release time, 
field issues, debug, etc, for N platforms.

I'm also open to alternatives, but I'd need to be convinced that the cure 
wasn't worth the disease.  
  
 Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA
 ---

 Key: CASSANDRA-1908
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1908
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Hiram Chirino
 Fix For: 0.7.1

 Attachments: cassandra-jni.zip


 Cassandra can't ship JNA out of the box since it's LGPL licensed, so many of 
 the performance optimizing features in the CLibrary class are not available 
 in a simple install.  It should be trivial to implement a real JNI library 
 for the CLibrary class.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1228) make Cassandra database engine to be a Independent subproject

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1228.
---

Resolution: Not A Problem

Patches welcome, but this isn't something that the existing devs are very 
interested in doing.

 make Cassandra database engine to be a Independent subproject 
 --

 Key: CASSANDRA-1228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1228
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: java
Reporter: jingfengtan

 Make the Cassandra database engine an independent sub-project.
 I think the Cassandra database engine is good for many application developers, 
 but not every company likes Cassandra's consistent hashing, gossip, and 
 asynchronous writes; they may have doubts about those implementations.
 My company wants to rework Cassandra: we use ZooKeeper, non-consistent-hash 
 partitioning, and an intelligent client implementation, with the Cassandra 
 database engine as the base.
 So if the Cassandra database engine were an independent sub-project, that 
 would be very nice: we wouldn't need to spend a lot of time syncing with the 
 latest Cassandra version, and the database engine could be used in many 
 other fields. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1224) Cassandra NPE on insert after one node goes down.

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1224.
---

Resolution: Cannot Reproduce

 Cassandra NPE on insert after one node goes down.
 -

 Key: CASSANDRA-1224
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1224
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.1
 Environment: Gentoo  Linux
Reporter: Jeff Lerman
Priority: Minor

 Hi all,
 I posted this in a different thread and was instructed to create a new bug.  
 As far as I can tell it is not too major of an issue, as it may have been 
 caused by us prematurely taking down a node.
 I just had this happen in Cassandra 0.6.1. We're only running two nodes as of 
 now and our second one was barely accepting any requests and only being 
 replicated to for the most part. The load went up to 9 consistently so we 
 investigated and noticed its Load on nodetool was 2x as large as our other 
 instance. I went and cleared out the data and commitlogs, set autobootstrap 
 to true and put it back in.
 This is where our case gets funky...we noticed the other instance's load 
 going up a lot and saw that the one I just readded was not doing much. After 
 awhile of contemplating, I took down the second one again. Minutes later I 
 found an open case about the anticompaction happening before full 
 bootstrapping occurs. I found the data/stream dir on the working instance and 
 saw that it was complete...but I had already taken down the second one! So I 
 deleted the stream dir to save space and figured I'd start the process again 
 tomorrow.
 A few hours later I am getting these Internal errors on writes:
 ERROR [pool-1-thread-287117] 2010-06-23 19:16:51,754 Cassandra.java (line 
 1492) Internal error processing insert
 java.lang.NullPointerException
 That was the entire trace.   We tried to kill -3 Cassandra...waited hours and 
 it never killed.  Did a kill -6 but got no usable dump.   Perhaps it is 
 possible for someone to recreate this situation?
 I also noticed that the virtual memory Cassandra was taking up tacked on the 
 extra 10+GB for the stream file.  It never released this either which is bad.
 Thanks,
 Jeff

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-973) new AccessLevel needed above FULL for schema modifications

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-973.
--

Resolution: Later

 new AccessLevel needed above FULL for schema modifications
 --

 Key: CASSANDRA-973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-973
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ted Zlatanov

 AccessLevel.FULL is for read+write+delete, but modifying the schema with the 
 new Thrift functions should require a higher level.  Maybe AccessLevel.SCHEMA?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-979) describe_keyspace() and describe_keyspaces() don't provide enough schema information

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-979.
--

Resolution: Duplicate

I believe this is addressed in 0.7.

 describe_keyspace() and describe_keyspaces() don't provide enough schema 
 information
 

 Key: CASSANDRA-979
 URL: https://issues.apache.org/jira/browse/CASSANDRA-979
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Ted Zlatanov

 describe_keyspace() and describe_keyspaces() should provide enough 
 introspection to reconstruct the schema.  All the CfDef and KsDef attributes 
 should be in the return maps.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1013) track row count during compaction as well as min/mean/max row size

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1013.
---

   Resolution: Duplicate
Fix Version/s: 0.7.0

done for 0.7

 track row count during compaction as well as min/mean/max row size
 --

 Key: CASSANDRA-1013
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1013
 Project: Cassandra
  Issue Type: Wish
Affects Versions: 0.6.2
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.0


 This would allow calculating total row count in a CF as sum(node row count) / 
 replicationfactor.
 Updating this would of course only make sense during major compaction.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-996) Assertion when bootstrapping multiple machines at the same time

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-996.
--

Resolution: Later

it's not designed to support adding multiple nodes w/ autobootstrap unless

1) you wait for pending range setup to complete on the previous before starting 
another or
2) you manually specify initialtoken

 Assertion when bootstrapping multiple machines at the same time
 ---

 Key: CASSANDRA-996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-996
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7 beta 1
Reporter: Erick Tryzelaar

 In testing CASSANDRA-994, I ran into this exception when I tried to bootstrap 
 multiple machines at the same time:
 java.lang.AssertionError
 at 
 org.apache.cassandra.locator.TokenMetadata.removeEndpoint(TokenMetadata.java:192)
 at 
 org.apache.cassandra.locator.TokenMetadata.cloneAfterAllLeft(TokenMetadata.java:296)
 at 
 org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:725)
 at 
 org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:703)
 at 
 org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:571)
 at 
 org.apache.cassandra.service.StorageService.onChange(StorageService.java:514)
 at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:881)
 at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:588)
 at org.apache.cassandra.gms.Gossiper.handleNewJoin(Gossiper.java:563)
 at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:630)
 at 
 org.apache.cassandra.gms.Gossiper$GossipDigestAck2VerbHandler.doVerb(Gossiper.java:1016)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:41)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 This is repeatable, and it appears not to happen if I start the machines one 
 at a time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1041) Skip large size (Configurable) SSTable in minor or/and major compaction

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1041.
---

Resolution: Fixed

superseded by CASSANDRA-1608

 Skip large size (Configurable) SSTable in minor or/and major compaction
 ---

 Key: CASSANDRA-1041
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1041
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Schubert Zhang
Priority: Minor
 Attachments: CASSANDRA-1041-0.6.1.patch


 When the SSTable files are large enough, such as 100GB, the compaction 
 (both minor and major) cost is big (disk IO, CPU, memory, etc.).
 In some applications, we accept not compacting all SSTables into the final very 
 large ones. 
 This feature provides two optional configurable attributes, 
 MinorCompactSkipInGB and MajorCompactSkipInGB, for each ColumnFamily. 
 The optional MinorCompactSkipInGB attribute specifies the maximum size of 
 SSTables which will be compacted in minor compaction; SSTables larger 
 than MinorCompactSkipInGB will be skipped. The optional MajorCompactSkipInGB 
 attribute does the same for major compaction.
 The default for these attributes is 0, which means do not skip, just as in the 
 current 0.6.1.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1346) Internal error processing get_slice

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1346.
---

Resolution: Cannot Reproduce

 Internal error processing get_slice
 ---

 Key: CASSANDRA-1346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1346
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.3, 0.6.4
 Environment: Linux
 cat /etc/issue: Debian GNU/Linux 5.0
 uname -a: Linux server.hp 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 
 x86_64 GNU/Linux
Reporter: Sasha1024
 Attachments: system-0.6.3.log, system-0.6.4.log


 I get the "Internal error processing get_slice" error.
 Here are the parameters of the get_slice call:
 {code}
 string(8) "Torrents"
 string(40) "e6c797bdcea982877f74cd41f257427bbe31b189"
 object(cassandra_ColumnParent)#1709 (2) {
   ["column_family"]=>
   string(7) "Scrapes"
   ["super_column"]=>
   NULL
 }
 object(cassandra_SlicePredicate)#1710 (2) {
   ["column_names"]=>
   NULL
   ["slice_range"]=>
   object(cassandra_SliceRange)#1711 (4) {
     ["start"]=>
     string(0) ""
     ["finish"]=>
     string(0) ""
     ["reversed"]=>
     bool(false)
     ["count"]=>
     int(10)
   }
 }
 int(1)
 {code}
 Here is an excerpt from storage-conf.xml:
 {code}
 <Keyspace Name="Torrents">
   <!--...-->
   <ColumnFamily Name="Scrapes" ColumnType="Super"
                 CompareWith="BytesType" CompareSubcolumnsWith="BytesType" />
   <!--...-->
   <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy</ReplicaPlacementStrategy>
   <ReplicationFactor>1</ReplicationFactor>
   <EndPointSnitch>org.apache.cassandra.locator.EndPointSnitch</EndPointSnitch>
 </Keyspace>
 {code}
 Here is a var_dump of the exception:
 {code}
 TApplicationException Object
 (
     [message:protected] => Internal error processing get_slice
     [string:Exception:private] => 
     [code:protected] => 6
     [file:protected] => /home/team/TORRENTS_SE/include/Tools/phpcassa/thrift/packages/cassandra/Cassandra.php
     [line:protected] => 206
     [trace:Exception:private] => Array
         (
             [0] => Array
                 (
                     [file] => /home/team/TORRENTS_SE/include/Tools/phpcassa/thrift/packages/cassandra/Cassandra.php
                     [line] => 169
                     [function] => recv_get_slice
                     [class] => CassandraClient
                     [type] => ->
                     [args] => Array
                         (
                         )
                 )
             [1] => Array
                 (
                     [file] => /home/team/TORRENTS_SE/include/Tools/CassandraTools.php
                     [line] => 99
                     [function] => get_slice
                     [class] => CassandraClient
                     [type] => ->
                     [args] => Array
                         (
                             [0] => Torrents
                             [1] => e6c797bdcea982877f74cd41f257427bbe31b189
                             [2] => cassandra_ColumnParent Object
                                 (
                                     [column_family] => Scrapes
                                     [super_column] => 
                                 )
                             [3] => cassandra_SlicePredicate Object
                                 (
                                     [column_names] => 
                                     [slice_range] => cassandra_SliceRange Object
                                         (
                                             [start] => 
                                             [finish] => 
                                             [reversed] => 
                                             [count] => 10
                                         )
                                 )
                             [4] => 1
                         )
                 )
             [2] => Array
                 (
                     [file] => /home/team/TORRENTS_SE/data/bin/cassandra-to-sphinx.php
                     [line] => 10
                     [function] => GetColumns
                     [class] => CassandraTools
                     [type] => ::
                     [args] => Array
                         (
                             [0] => Array
                                 (
                                     [ColumnFamily] => Scrapes
                                     [RowKey] => e6c797bdcea982877f74cd41f257427bbe31b189
                                 )
                             [1] => Array
                                 (
                                     [count] => 10
                                 )
                         )
                 )
         )
     [previous:Exception:private] => 
 )
 {code}
 Excerpt from 

[jira] Resolved: (CASSANDRA-1078) Add system tests to Hudson build

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1078.
---

Resolution: Won't Fix

Eric reports that getting Hudson to run our python-based system tests is not 
practical.

 Add system tests to Hudson build
 

 Key: CASSANDRA-1078
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1078
 Project: Cassandra
  Issue Type: Task
  Components: Tools
Reporter: Johan Oskarsson

 Currently the Hudson build only runs the junit tests. It should also run the 
 system tests to catch any issues not found by the junit tests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1459) Allow modification of HintedHandoff configuration to be changed on the fly per node.

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1459:
--

Fix Version/s: 0.7.1

 Allow modification of HintedHandoff configuration to be changed on the fly 
 per node.
 

 Key: CASSANDRA-1459
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1459
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: N/A
Reporter: Jeff Davey
 Fix For: 0.7.1


 If there is an extended downtime of a node, allow Hinted Handoff to be 
 disabled specifically for that node rather than having to decommission it. 
 <benblack> If a node will be down for an extended period, it would be useful 
 to be able to disable HH for it until it returns, without having to 
 reconfigure and restart nodes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1163) Put the CassandraBulkLoader into core

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1163.
---

Resolution: Won't Fix

see CASSANDRA-1805 and CASSANDRA-1278

 Put the CassandraBulkLoader into core
 -

 Key: CASSANDRA-1163
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1163
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jeremy Hanna
Priority: Minor

 The CassandraBulkLoader is something that would be useful as a utility in the 
 Cassandra core rather than just being a contrib.  Gary has been making sure 
 it continues to work with all of the changes going into core. However, it 
 would be nice to have it as part of core so that he at least wouldn't have to 
 keep vigilant about it building properly - running properly is another 
 concern.
 So it would likely go into a util package somewhere.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (CASSANDRA-1143) Nodetool gives cryptic errors when given a nonexistent keyspace arg

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-1143:
-

Assignee: Joaquin Casares

Is this still a problem?

 Nodetool gives cryptic errors when given a nonexistent keyspace arg
 ---

 Key: CASSANDRA-1143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1143
 Project: Cassandra
  Issue Type: Wish
  Components: Tools
 Environment: Sun Java 1.6u20, Cassandra 0.6.2, CentOS 5.5.
Reporter: Ian Soboroff
Assignee: Joaquin Casares
Priority: Trivial
   Original Estimate: 1h
  Remaining Estimate: 1h

 I typoed the keyspace arg to 'nodetool repair', and got the following 
 exception:
 /usr/local/src/cassandra/bin/nodetool --host node4 repair DocDb
 Exception in thread "main" java.lang.RuntimeException: No replica strategy 
 configured for DocDb
 at 
 org.apache.cassandra.service.StorageService.getReplicationStrategy(StorageService.java:246)
 at 
 org.apache.cassandra.service.StorageService.constructRangeToEndPointMap(StorageService.java:466)
 at 
 org.apache.cassandra.service.StorageService.getRangeToAddressMap(StorageService.java:452)
 at 
 org.apache.cassandra.service.AntiEntropyService.getNeighbors(AntiEntropyService.java:145)
 at 
 org.apache.cassandra.service.StorageService.forceTableRepair(StorageService.java:1075)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
 at sun.rmi.transport.Transport$1.run(Transport.java:159)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 It would be better to report that the keyspace doesn't exist, rather than 
 that the keyspace doesn't have a replication strategy.
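
A hedged sketch of the friendlier behavior suggested above: validate the keyspace name up front and fail with an explicit message, instead of surfacing the missing replication strategy. The class and method names here are illustrative only, not Cassandra's actual code.

{code}
import java.util.Set;

public class KeyspaceValidationSketch {
    private final Set<String> knownKeyspaces;

    KeyspaceValidationSketch(Set<String> knownKeyspaces) {
        this.knownKeyspaces = knownKeyspaces;
    }

    /** Fail fast with a clear message before resolving the replication strategy. */
    void forceTableRepair(String keyspace) {
        if (!knownKeyspaces.contains(keyspace)) {
            throw new IllegalArgumentException("Keyspace '" + keyspace + "' does not exist");
        }
        // ... proceed to look up the replication strategy and run the repair ...
    }

    public static void main(String[] args) {
        KeyspaceValidationSketch service = new KeyspaceValidationSketch(Set.of("Keyspace1", "DocDB"));
        try {
            service.forceTableRepair("DocDb"); // the typo from the report above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Keyspace 'DocDb' does not exist
        }
    }
}
{code}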

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1255) Explore interning keys and column names

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975538#action_12975538
 ] 

Jonathan Ellis commented on CASSANDRA-1255:
---

Any update on this?

 Explore interning keys and column names
 ---

 Key: CASSANDRA-1255
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1255
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Stu Hood

 With multiple Memtables, key caches and row caches holding DecoratedKey 
 references, it could potentially be a huge memory savings (and relief to GC) 
 to intern DecoratedKeys. Taking the idea farther, for the skinny row pattern, 
 and for certain types of wide row patterns, interning of column names could 
 be very beneficial as well (although we would need to wrap the byte[]s in 
 something for hashCode/equals).
 This ticket should explore the benefits and overhead of interning.
 Google collections/guava MapMaker is a very convenient way to create this 
 type of cache: example call: 
 http://stackoverflow.com/questions/2865026/use-permgen-space-or-roll-my-own-intern-method/2865083#2865083
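
A minimal interning sketch, using a plain ConcurrentHashMap rather than the MapMaker cache linked above (so this is an assumption about the shape of the idea, not the proposed implementation):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class InternCache<T> {
    private final ConcurrentMap<T, T> cache = new ConcurrentHashMap<>();

    /** Return the canonical instance equal to the given value, inserting it if absent. */
    public T intern(T value) {
        T existing = cache.putIfAbsent(value, value);
        return existing == null ? value : existing;
    }

    public static void main(String[] args) {
        InternCache<String> keys = new InternCache<>();
        String a = keys.intern(new String("row-key-1"));
        String b = keys.intern(new String("row-key-1"));
        // Both callers now share one canonical object, which is where the heap/GC savings come from.
        System.out.println(a == b); // true
    }
}
{code}

Unlike a MapMaker cache with weak values, this plain map never evicts, which is part of the overhead the ticket asks to measure; and as the description says, byte[] column names would first need a wrapper supplying hashCode/equals.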

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1270) Path not found under Windows 7

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12975539#action_12975539
 ] 

Jonathan Ellis commented on CASSANDRA-1270:
---

I believe this is fixed in recent 0.6 releases

 Path not found under Windows 7
 --

 Key: CASSANDRA-1270
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1270
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.6.3
 Environment: Windows 7 Professional, JRE 1.6
Reporter: Wojciech Gomoła
Priority: Minor

 I'm not sure whether this is a bug or my fault, but when I try to run 
 Cassandra using bin\cassandra -f, my system returns a "Path not found" message. 
 When I comment out ECHO OFF in cassandra.bat, I can see that the last line of 
 output contains 
 .8.jar;D:\Cassandra\bin\..\lib\slf4j-log4j12-1.5.8.jar;D:\Cassandra\bin\..\build\classes
  org.apache.cassandra.thrift.CassandraDaemon
 D:\Cassandra is my Cassandra root directory. The directory 
 D:\Casandra\build\classes\org\apache\cassandra\thrift contains the 
 CassandraDaemon.class, CassandraDaemon$1.class, and CassandraDaemon$2.class files.
 Apologies for my vocabulary and grammar.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1270) Path not found under Windows 7

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1270.
---

Resolution: Fixed

 Path not found under Windows 7
 --

 Key: CASSANDRA-1270
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1270
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.6.3
 Environment: Windows 7 Professional, JRE 1.6
Reporter: Wojciech Gomoła
Priority: Minor

 I'm not sure whether this is a bug or my fault, but when I try to run 
 Cassandra using bin\cassandra -f, my system returns a "Path not found" message. 
 When I comment out ECHO OFF in cassandra.bat, I can see that the last line of 
 output contains 
 .8.jar;D:\Cassandra\bin\..\lib\slf4j-log4j12-1.5.8.jar;D:\Cassandra\bin\..\build\classes
  org.apache.cassandra.thrift.CassandraDaemon
 D:\Cassandra is my Cassandra root directory. The directory 
 D:\Casandra\build\classes\org\apache\cassandra\thrift contains the 
 CassandraDaemon.class, CassandraDaemon$1.class, and CassandraDaemon$2.class files.
 Apologies for my vocabulary and grammar.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1253) Forcing CL on a per user basis

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1253.
---

Resolution: Later

 Forcing CL on a per user basis
 --

 Key: CASSANDRA-1253
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1253
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Edward Capriolo
Priority: Trivial

 The user that writes data is allowed to choose the CL they want. This presents 
 several challenges for administration. For example, if I am sure ALL of our 
 applications are writing/reading at CL.QUORUM or CL.ALL, I have some 
 flexibility, such as joining a node without auto-bootstrap. Also, a policy 
 might be made that all data must be written at QUORUM.
 The new feature is to associate users with a list of available CLs. Thus an 
 administrator can enable or disable levels.
 {noformat}
 admin: read level [ CL.ZERO,CL.ONE,CL.QUORUM, CL.ALL ]
 user1: read level [CL.QUORUM, CL.ALL]
 {noformat}
 If the user attempts to write/read with a disallowed level, an Exception can 
 be thrown or the request could be silently modified/upgraded to a different 
 CL.
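
To make the proposal concrete, a hypothetical policy check could look like the sketch below; the enum mirrors the levels named above, and nothing here is Cassandra's actual authorization code.

{code}
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class ConsistencyPolicySketch {
    enum CL { ZERO, ONE, QUORUM, ALL }

    // Per-user whitelist of allowed read levels, as in the example above.
    static final Map<String, Set<CL>> READ_LEVELS = Map.of(
            "admin", EnumSet.of(CL.ZERO, CL.ONE, CL.QUORUM, CL.ALL),
            "user1", EnumSet.of(CL.QUORUM, CL.ALL));

    /** Throw if the requested level is not allowed for this user. */
    static void checkRead(String user, CL requested) {
        Set<CL> allowed = READ_LEVELS.getOrDefault(user, EnumSet.noneOf(CL.class));
        if (!allowed.contains(requested)) {
            throw new SecurityException(user + " may not read at " + requested + "; allowed: " + allowed);
        }
    }

    public static void main(String[] args) {
        checkRead("user1", CL.QUORUM);  // permitted
        try {
            checkRead("user1", CL.ONE); // disallowed: throws
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}

The same check point could instead silently upgrade the request to the lowest allowed level, the alternative mentioned in the description.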

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1331) Any request after a TApplicationException hangs

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1331.
---

Resolution: Cannot Reproduce

 Any request after a TApplicationException hangs
 ---

 Key: CASSANDRA-1331
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1331
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Apache
Priority: Minor

 Observed that the design of request validation can return without consuming 
 the complete request. The remnant is then read by the next request and 
 produces a large read size.
 readMessageBegin size: 134218752
 Sample test case in python:
 from cassandra.ttypes import *
 from cassandra import Cassandra
 from thrift import Thrift
 from thrift.transport import TTransport
 from thrift.transport import TSocket
 from thrift.protocol.TBinaryProtocol import TBinaryProtocolAccelerated

 socket = TSocket.TSocket("127.0.0.1", 9160)
 transport = TTransport.TBufferedTransport(socket)
 protocol = TBinaryProtocolAccelerated(transport)
 client = Cassandra.Client(protocol)
 transport.open()
 client.transport = transport

 # don't specify a column_family to force a TApplicationException
 parent = ColumnParent()
 try:
     client.get_count("ignore_keyspace", "ignore_key", parent, 1)
     print "ERROR: we didn't see the problem"
 except TApplicationException as e:
     message = "Required field 'column_family' was not present!"
     if e.message.startswith(message):
         print "OK, we got the error we were looking for."
         print "The server input buffer was only partially read,"
         print "  up to the validation error, so our next request"
         print "  will start reading the stale data causing a hang."
         print "hanging..."
         client.get_count("ignore_keyspace", "ignore_key", parent, 1)
 client.transport.close()

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1513) BufferUnderflowExceptions

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1513.
---

Resolution: Fixed

believe this was one of our thrift 0.5 upgrade bugs.  should be fixed in latest 
0.7 rc

 BufferUnderflowExceptions
 -

 Key: CASSANDRA-1513
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1513
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 10.04.1, 1.6.0_18-b18
Reporter: Matt Conway

 Seeing a number of these in my log when running a trunk build from 9/11/2010
 No idea how to duplicate it, hopefully you can make sense of it from the 
 stack trace
 ERROR [MUTATION_STAGE:19] 2010-09-14 02:24:50,704 
 DebuggableThreadPoolExecutor.java (line 102) Error in ThreadPoolExecutor
 java.nio.BufferUnderflowException
 at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
 at java.nio.ByteBuffer.get(ByteBuffer.java:692)
 at 
 org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:62)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:50)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-926) implement alternative RPC interface using Avro

2010-12-28 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975546#action_12975546
 ] 

Eric Evans commented on CASSANDRA-926:
--

It's been suggested that we eliminate the avro rpc implementation entirely, see 
http://thread.gmane.org/gmane.comp.db.cassandra.client.devel/36 for the 
discussion.

 implement alternative RPC interface using Avro
 --

 Key: CASSANDRA-926
 URL: https://issues.apache.org/jira/browse/CASSANDRA-926
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Eric Evans
Priority: Minor

 Avro is a data serialization and RPC framework similar to Thrift. It provides:
 * Rich data structures.
 * A compact, fast, binary data format.
 * A container file, to store persistent data.
 * Remote procedure call (RPC).
 * Simple integration with dynamic languages. Code generation is not required 
 to read or write data files nor to use or implement RPC protocols. Code 
 generation as an optional optimization, only worth implementing for 
 statically typed languages. 
 Cassandra's Avro interface is being structured in a way that closely mirrors 
 the existing Thrift interface, both in terms of public facing API, and how it 
 is implemented. GSOC students interested in this task should begin by 
 familiarizing themselves with Cassandra's Thrift service 
 (org.apache.cassandra.thrift).
 Note: This is a very large and long-running task so treat this as a 
 meta-issue and add sub-tasks and/or blocking issues as appropriate.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (CASSANDRA-926) implement alternative RPC interface using Avro

2010-12-28 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans reassigned CASSANDRA-926:


Assignee: Eric Evans

 implement alternative RPC interface using Avro
 --

 Key: CASSANDRA-926
 URL: https://issues.apache.org/jira/browse/CASSANDRA-926
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor

 Avro is a data serialization and RPC framework similar to Thrift. It provides:
 * Rich data structures.
 * A compact, fast, binary data format.
 * A container file, to store persistent data.
 * Remote procedure call (RPC).
 * Simple integration with dynamic languages. Code generation is not required 
 to read or write data files nor to use or implement RPC protocols. Code 
 generation as an optional optimization, only worth implementing for 
 statically typed languages. 
 Cassandra's Avro interface is being structured in a way that closely mirrors 
 the existing Thrift interface, both in terms of public facing API, and how it 
 is implemented. GSOC students interested in this task should begin by 
 familiarizing themselves with Cassandra's Thrift service 
 (org.apache.cassandra.thrift).
 Note: This is a very large and long-running task so treat this as a 
 meta-issue and add sub-tasks and/or blocking issues as appropriate.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1632) Thread workflow and cpu affinity

2010-12-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975553#action_12975553
 ] 

T Jake Luciani commented on CASSANDRA-1632:
---

Regarding 2)  I think it's possible to set the affinity on a particular thread 
in linux using the taskset command.  

Basically you get the native thread's PPID: ps -Lf ${pid}  and call taskset 
with each of these to pin them to a given core. This could be done on startup.
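For illustration only, here is a rough sketch of what that startup step could look like from inside the JVM, shelling out to ps and taskset; the class, the round-robin core assignment, and the use of per-thread LWP ids are assumptions for the example, not part of any patch on this ticket:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.lang.management.ManagementFactory;

public class AffinityPinner
{
    // Hypothetical startup helper: pin each native thread (LWP) of this JVM
    // to a core, round-robin. Assumes Linux with ps and taskset on the PATH.
    public static void pinThreads(int cores) throws Exception
    {
        // on the Sun JVM the runtime name is "pid@hostname"
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        Process ps = new ProcessBuilder("ps", "-Lo", "lwp=", "-p", pid).start();
        BufferedReader in = new BufferedReader(new InputStreamReader(ps.getInputStream()));
        String lwp;
        int core = 0;
        while ((lwp = in.readLine()) != null)
        {
            lwp = lwp.trim();
            if (lwp.isEmpty())
                continue;
            // taskset -p accepts a thread id (LWP), so each thread can be pinned individually
            new ProcessBuilder("taskset", "-pc", Integer.toString(core), lwp).start().waitFor();
            core = (core + 1) % cores;
        }
    }
}
{code}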

 

 Thread workflow and cpu affinity
 

 Key: CASSANDRA-1632
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1632
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Goffinet
 Fix For: 0.7.1


 Here are some thoughts I wanted to write down, we need to run some serious 
 benchmarks to see the benefits:
 1) All thread pools for our stages use a shared queue per stage. For some 
 stages we could move to a model where each thread has its own queue. This 
 would reduce lock contention on the shared queue. This workload only suits 
 the stages that have no variance, else you run into thread starvation. Some 
 stages that this might work: ROW-MUTATION.
 2) Set cpu affinity for each thread in each stage. If we can pin threads to 
 specific cores, and control the workflow of a message from Thrift down to 
 each stage, we should see improvements on reducing L1 cache misses. We would 
 need to build a JNI extension (to set cpu affinity), as I could not find 
 anywhere in JDK where it was exposed. 
 3) Batching the delivery of requests across stage boundaries. Peter Schuller 
 hasn't looked deep enough yet into the JDK, but he thinks there may be 
 significant improvements to be had there. Especially in high-throughput 
 situations. If on each consumption you were to consume everything in the 
 queue, rather than implying a synchronization point in between each request.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (CASSANDRA-1632) Thread workflow and cpu affinity

2010-12-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975553#action_12975553
 ] 

T Jake Luciani edited comment on CASSANDRA-1632 at 12/28/10 11:20 AM:
--

Regarding 2)  I think it's possible to set the affinity on a particular thread 
in linux using the taskset command.  

Basically you get the native thread's LWPID: ps -Lf ${pid}  and call taskset 
with each of these to pin them to a given core. This could be done on startup.

 

  was (Author: tjake):
Regarding 2)  I think it's possible to set the affinity on a particular 
thread in linux using the taskset command.  

Basically you get the native thread's PPID: ps -Lf ${pid}  and call taskset 
with each of these to pin them to a given core. This could be done on startup.

 
  
 Thread workflow and cpu affinity
 

 Key: CASSANDRA-1632
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1632
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Goffinet
 Fix For: 0.7.1


 Here are some thoughts I wanted to write down, we need to run some serious 
 benchmarks to see the benefits:
 1) All thread pools for our stages use a shared queue per stage. For some 
 stages we could move to a model where each thread has its own queue. This 
 would reduce lock contention on the shared queue. This workload only suits 
 the stages that have no variance, else you run into thread starvation. Some 
 stages that this might work: ROW-MUTATION.
 2) Set cpu affinity for each thread in each stage. If we can pin threads to 
 specific cores, and control the workflow of a message from Thrift down to 
 each stage, we should see improvements on reducing L1 cache misses. We would 
 need to build a JNI extension (to set cpu affinity), as I could not find 
 anywhere in JDK where it was exposed. 
 3) Batching the delivery of requests across stage boundaries. Peter Schuller 
 hasn't looked deep enough yet into the JDK, but he thinks there may be 
 significant improvements to be had there. Especially in high-throughput 
 situations. If on each consumption you were to consume everything in the 
 queue, rather than implying a synchronization point in between each request.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1882) rate limit all background I/O

2010-12-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975557#action_12975557
 ] 

T Jake Luciani commented on CASSANDRA-1882:
---

Peter, I need to dig into this but *i think* it could also be done via the 
http://linux.die.net/man/2/ioprio_set call in linux for the compaction thread. 
Obviously not as portable, but could be a quick win.

 rate limit all background I/O
 -

 Key: CASSANDRA-1882
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1882
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Minor
 Fix For: 0.7.1


 There is a clear need to support rate limiting of all background I/O (e.g., 
 compaction, repair). In some cases background I/O is naturally rate limited 
 as a result of being CPU bottlenecked, but in all cases where the CPU is not 
 the bottleneck, background streaming I/O is almost guaranteed (barring a very 
 very smart RAID controller or I/O subsystem that happens to cater extremely 
 well to the use case) to be detrimental to the latency and throughput of 
 regular live traffic (reads).
 Ways in which live traffic is negatively affected by background I/O include:
 * Indirectly by page cache eviction (see e.g. CASSANDRA-1470).
 * Reads are directly detrimental when not otherwise limited for the usual 
 reasons; large continuing read requests that keep coming are battling with 
 latency sensitive live traffic (mostly seek bound). Mixing seek-bound latency 
 critical with bulk streaming is a classic no-no for I/O scheduling.
 * Writes are directly detrimental in a similar fashion.
 * But in particular, writes are more difficult still: Caching effects tend to 
 augment the effects because lacking any kind of fsync() or direct I/O, the 
 operating system and/or RAID controller tends to defer writes when possible. 
 This often leads to a very sudden throttling of the application when caches 
 are filled, at which point there is potentially a huge backlog of data to 
 write.
 ** This may evict a lot of data from page cache since dirty buffers cannot be 
 evicted prior to being flushed out (though CASSANDRA-1470 and related will 
 hopefully help here).
 ** In particular, one major reason why battery-backed RAID controllers are 
 great is that they have the capability to eat storms of writes very quickly 
 and schedule them pretty efficiently with respect to a concurrent continuous 
 stream of reads. But this ability is defeated if we just throw data at it 
 until entirely full. Instead a rate-limited approach means that data can be 
 thrown at said RAID controller at a reasonable pace and it can be allowed to 
 do its job of limiting the impact of those writes on reads.
 I propose a mechanism whereby all such backgrounds reads are rate limited in 
 terms of MB/sec throughput. There would be:
 * A configuration option to state the target rate (probably a global, until 
 there is support for per-cf sstable placement)
 * A configuration option to state the sampling granularity. The granularity 
 would have to be small enough for rate limiting to be effective (i.e., the 
 amount of I/O generated in between each sample must be reasonably small) 
 while large enough to not be expensive (neither in terms of gettimeofday() 
 type over-head, nor in terms of causing smaller writes so that would-be 
 streaming operations become seek bound). There would likely be a recommended 
 value on the order of say 5 MB, with a recommendation to multiply that with 
 the number of disks in the underlying device (5 MB assumes classic mechanical 
 disks).
 Because of coarse granularity (= infrequent synchronization), there should 
 not be a significant overhead associated with maintaining shared global rate 
 limiter for the Cassandra instance.
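As a very rough illustration of the proposed limiter (not code from an attached patch), here is a sketch in which every background read/write charges its bytes against a shared MB/sec budget and sleeps when it gets ahead of schedule; the class and method names are invented, and callers would invoke acquire() once per chunk, which is where the sampling granularity above comes in:

{code}
public class BackgroundIOThrottle
{
    // Hypothetical shared limiter: all background (compaction/repair) I/O
    // calls acquire(bytes) before each chunk of work.
    private final long bytesPerSecond; // configured target rate, <= 0 means unlimited
    private long bytesSeen = 0;
    private final long start = System.currentTimeMillis();

    public BackgroundIOThrottle(int targetMBPerSec)
    {
        this.bytesPerSecond = targetMBPerSec * 1024L * 1024L;
    }

    public synchronized void acquire(long bytes) throws InterruptedException
    {
        if (bytesPerSecond <= 0)
            return;
        bytesSeen += bytes;
        long elapsedMs = System.currentTimeMillis() - start;
        // how long the work done so far *should* have taken at the target rate
        long expectedMs = bytesSeen * 1000 / bytesPerSecond;
        if (expectedMs > elapsedMs)
            Thread.sleep(expectedMs - elapsedMs);
    }
}
{code}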

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1901) getRestrictedRanges bug where node owns minimum token

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975558#action_12975558
 ] 

Jonathan Ellis commented on CASSANDRA-1901:
---

I get

bq. Hunk #2 FAILED at 67.

against 0.6

 getRestrictedRanges bug where node owns minimum token
 -

 Key: CASSANDRA-1901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1901
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.9, 0.7.0 rc 2
Reporter: Jonathan Ellis
Assignee: Stu Hood
 Fix For: 0.6.9, 0.7.1

 Attachments: 
 0001-Switch-minimum-token-for-RP-to-1-for-midpoint-purposes.txt


 From the ML, there are two RF=1 nodes, 0 for the local node (17.224.36.17) 
 and 85070591730234615865843651857942052864 for the remote node 
 (17.224.109.80).  Debug log shows
 {code}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 CassandraServer.java (line 
 479) range_slice
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 412) 
 RangeSliceCommand{keyspace='Harvest', column_family='TestCentroids', 
 super_column=null, predicate=SlicePredicate(slice_range:SliceRange(start:80 
 01 00 01 00 00 00 10 67 65 74 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 
 0C 0C 00 01 0B 00 03 00 00 00 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 
 00 02 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00 00 00 10 67 65 74 
 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 0C 0C 00 01 0B 00 03 00 00 00 
 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 00 02 0C 00 02 0B 00 01 00 00 
 00 00 0B 00 02 00 00 00 00, reversed:false, count:1)), range=[0,0], 
 max_keys=11}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 597) 
 restricted ranges for query [0,0] are [[0,0]]
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,959 StorageProxy.java (line 423) 
 === endpoint: belize1.apple.com/17.224.36.17 for range.right 0
 {code}
 Thus, node 85070591730234615865843651857942052864 is left out.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (CASSANDRA-1901) getRestrictedRanges bug where node owns minimum token

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975558#action_12975558
 ] 

Jonathan Ellis edited comment on CASSANDRA-1901 at 12/28/10 11:35 AM:
--

I get

{noformat}
patching file src/java/org/apache/cassandra/dht/RandomPartitioner.java
...
Hunk #2 FAILED at 67.
{noformat}

against 0.6

  was (Author: jbellis):
I get

bq. Hunk #2 FAILED at 67.

against 0.6
  
 getRestrictedRanges bug where node owns minimum token
 -

 Key: CASSANDRA-1901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1901
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.9, 0.7.0 rc 2
Reporter: Jonathan Ellis
Assignee: Stu Hood
 Fix For: 0.6.9, 0.7.1

 Attachments: 
 0001-Switch-minimum-token-for-RP-to-1-for-midpoint-purposes.txt


 From the ML, there are two RF=1 nodes, 0 for the local node (17.224.36.17) 
 and 85070591730234615865843651857942052864 for the remote node 
 (17.224.109.80).  Debug log shows
 {code}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 CassandraServer.java (line 
 479) range_slice
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 412) 
 RangeSliceCommand{keyspace='Harvest', column_family='TestCentroids', 
 super_column=null, predicate=SlicePredicate(slice_range:SliceRange(start:80 
 01 00 01 00 00 00 10 67 65 74 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 
 0C 0C 00 01 0B 00 03 00 00 00 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 
 00 02 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00 00 00 10 67 65 74 
 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 0C 0C 00 01 0B 00 03 00 00 00 
 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 00 02 0C 00 02 0B 00 01 00 00 
 00 00 0B 00 02 00 00 00 00, reversed:false, count:1)), range=[0,0], 
 max_keys=11}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 597) 
 restricted ranges for query [0,0] are [[0,0]]
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,959 StorageProxy.java (line 423) 
 === endpoint: belize1.apple.com/17.224.36.17 for range.right 0
 {code}
 Thus, node 85070591730234615865843651857942052864 is left out.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1337) parallelize fetching rows for low-cardinality indexes

2010-12-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-1337:
--

Remaining Estimate: 24h
 Original Estimate: 24h

 parallelize fetching rows for low-cardinality indexes
 -

 Key: CASSANDRA-1337
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1337
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 0.7.1

   Original Estimate: 24h
  Remaining Estimate: 24h

 currently, we read the indexed rows from the first node (in partitioner 
 order); if that does not have enough matching rows, we read the rows from the 
 next, and so forth.
 we should use the statistics from CASSANDRA-1155 to query multiple nodes in 
 parallel, such that we have a high chance of getting enough rows w/o having 
 to do another round of queries (but, if our estimate is incorrect, we do need 
 to loop and do more rounds until we have enough data or we have fetched from 
 each node).
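A hedged sketch of that loop, just to make the shape concrete; NodeFetcher stands in for the existing per-endpoint indexed scan and estimatedParallelism for whatever the CASSANDRA-1155 statistics suggest, neither of which is a real API here:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelIndexScanSketch
{
    // Placeholder for the existing indexed-range query against one endpoint.
    public interface NodeFetcher
    {
        List<String> fetchIndexedRows(String endpoint, int limit) throws Exception;
    }

    private final ExecutorService executor = Executors.newCachedThreadPool();

    public List<String> fetch(List<String> endpoints, int wanted, int estimatedParallelism, final NodeFetcher fetcher)
            throws InterruptedException, ExecutionException
    {
        List<String> rows = new ArrayList<String>();
        int next = 0;
        while (rows.size() < wanted && next < endpoints.size())
        {
            // query a batch of endpoints in parallel, sized by the statistics-based estimate
            List<Future<List<String>>> futures = new ArrayList<Future<List<String>>>();
            for (int i = 0; i < estimatedParallelism && next < endpoints.size(); i++, next++)
            {
                final String endpoint = endpoints.get(next);
                final int limit = wanted - rows.size();
                futures.add(executor.submit(new Callable<List<String>>()
                {
                    public List<String> call() throws Exception
                    {
                        return fetcher.fetchIndexedRows(endpoint, limit);
                    }
                }));
            }
            for (Future<List<String>> f : futures)
                rows.addAll(f.get());
            // if the estimate was off, loop and query the next batch of endpoints
        }
        return rows;
    }
}
{code}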

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1337) parallelize fetching rows for low-cardinality indexes

2010-12-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-1337:
--

Remaining Estimate: 8h  (was: 24h)
 Original Estimate: 8h  (was: 24h)

 parallelize fetching rows for low-cardinality indexes
 -

 Key: CASSANDRA-1337
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1337
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 0.7.1

   Original Estimate: 8h
  Remaining Estimate: 8h

 currently, we read the indexed rows from the first node (in partitioner 
 order); if that does not have enough matching rows, we read the rows from the 
 next, and so forth.
 we should use the statistics from CASSANDRA-1155 to query multiple nodes in 
 parallel, such that we have a high chance of getting enough rows w/o having 
 to do another round of queries (but, if our estimate is incorrect, we do need 
 to loop and do more rounds until we have enough data or we have fetched from 
 each node).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053392 - in /cassandra/trunk/test: long/org/apache/cassandra/db/ unit/org/apache/cassandra/io/sstable/ unit/org/apache/cassandra/streaming/

2010-12-28 Thread gdusbabek
Author: gdusbabek
Date: Tue Dec 28 17:15:26 2010
New Revision: 1053392

URL: http://svn.apache.org/viewvc?rev=1053392&view=rev
Log:
refactor SSTableUtils to add chainable configuration. patch by stuhood, 
reviewed by gdusbabek. CASSANDRA-1822

Modified:

cassandra/trunk/test/long/org/apache/cassandra/db/LongCompactionSpeedTest.java
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableTest.java
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java

cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableWriterAESCommutativeTest.java

cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableWriterTest.java

cassandra/trunk/test/unit/org/apache/cassandra/streaming/StreamingTransferTest.java

Modified: 
cassandra/trunk/test/long/org/apache/cassandra/db/LongCompactionSpeedTest.java
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/test/long/org/apache/cassandra/db/LongCompactionSpeedTest.java?rev=1053392&r1=1053391&r2=1053392&view=diff
==
--- 
cassandra/trunk/test/long/org/apache/cassandra/db/LongCompactionSpeedTest.java 
(original)
+++ 
cassandra/trunk/test/long/org/apache/cassandra/db/LongCompactionSpeedTest.java 
Tue Dec 28 17:15:26 2010
@@ -86,7 +86,7 @@ public class LongCompactionSpeedTest ext
 }
 rows.put(key, SSTableUtils.createCF(Long.MIN_VALUE, 
Integer.MIN_VALUE, cols));
 }
-SSTableReader sstable = SSTableUtils.writeSSTable(rows);
+SSTableReader sstable = SSTableUtils.prepare().write(rows);
 sstables.add(sstable);
 store.addSSTable(sstable);
 }

Modified: 
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableTest.java
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableTest.java?rev=1053392&r1=1053391&r2=1053392&view=diff
==
--- cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableTest.java 
(original)
+++ cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableTest.java 
Tue Dec 28 17:15:26 2010
@@ -40,7 +40,7 @@ public class SSTableTest extends Cleanup
 
 Map<ByteBuffer, ByteBuffer> map = new HashMap<ByteBuffer, ByteBuffer>();
 map.put(key, bytes);
-SSTableReader ssTable = SSTableUtils.writeRawSSTable("Keyspace1", "Standard1", map);
+SSTableReader ssTable = SSTableUtils.prepare().cf("Standard1").writeRaw(map);
 
 // verify
 verifySingle(ssTable, bytes, key);
@@ -68,7 +68,7 @@ public class SSTableTest extends Cleanup
 }
 
 // write
-SSTableReader ssTable = SSTableUtils.writeRawSSTable("Keyspace1", "Standard2", map);
+SSTableReader ssTable = SSTableUtils.prepare().cf("Standard2").writeRaw(map);
 
 // verify
 verifyMany(ssTable, map);

Modified: 
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java?rev=1053392&r1=1053391&r2=1053392&view=diff
==
--- cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java 
(original)
+++ cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java 
Tue Dec 28 17:15:26 2010
@@ -71,47 +71,98 @@ public class SSTableUtils
 return datafile;
 }
 
-public static SSTableReader writeSSTable(Set<String> keys) throws IOException
+/**
+ * @return A Context with chainable methods to configure and write a 
SSTable.
+ */
+public static Context prepare()
 {
-Map<String, ColumnFamily> map = new HashMap<String, ColumnFamily>();
-for (String key : keys)
-{
-ColumnFamily cf = ColumnFamily.create(TABLENAME, CFNAME);
-cf.addColumn(new Column(ByteBuffer.wrap(key.getBytes()), 
ByteBuffer.wrap(key.getBytes()), 0));
-map.put(key, cf);
-}
-return writeSSTable(map);
+return new Context();
 }
 
-public static SSTableReader writeSSTable(Map<String, ColumnFamily> entries) throws IOException
+public static class Context
 {
-Map<ByteBuffer, ByteBuffer> map = new HashMap<ByteBuffer, ByteBuffer>();
-for (Map.Entry<String, ColumnFamily> entry : entries.entrySet())
+private String ksname = TABLENAME;
+private String cfname = CFNAME;
+private Descriptor dest = null;
+private boolean cleanup = true;
+private int generation = 0;
+
+Context() {}
+
+public Context ks(String ksname)
 {
-DataOutputBuffer buffer = new DataOutputBuffer();
-ColumnFamily.serializer().serializeWithIndexes(entry.getValue(), 
buffer);
-map.put(ByteBuffer.wrap(entry.getKey().getBytes()), 

svn commit: r1053393 - in /cassandra/trunk/test/unit/org/apache/cassandra/io/sstable: LegacySSTableTest.java SSTableUtils.java

2010-12-28 Thread gdusbabek
Author: gdusbabek
Date: Tue Dec 28 17:15:35 2010
New Revision: 1053393

URL: http://svn.apache.org/viewvc?rev=1053393&view=rev
Log:
refactor LegacySSTableTest to inspect row contents (breaks tests). patch by 
stuhood, reviewed by gdusbabek. CASSANDRA-1822

Modified:

cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/LegacySSTableTest.java
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java

Modified: 
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/LegacySSTableTest.java
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/LegacySSTableTest.java?rev=1053393&r1=1053392&r2=1053393&view=diff
==
--- 
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/LegacySSTableTest.java
 (original)
+++ 
cassandra/trunk/test/unit/org/apache/cassandra/io/sstable/LegacySSTableTest.java
 Tue Dec 28 17:15:35 2010
@@ -22,15 +22,11 @@ package org.apache.cassandra.io.sstable;
 import java.io.File;
 import java.io.IOException;
 import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 
 import org.apache.cassandra.CleanupHelper;
-import org.apache.cassandra.io.util.BufferedRandomAccessFile;
+import org.apache.cassandra.db.DecoratedKey;
+import org.apache.cassandra.db.columniterator.SSTableNamesIterator;
 import org.apache.cassandra.utils.FBUtilities;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -45,7 +41,7 @@ public class LegacySSTableTest extends C
 public static final String KSNAME = "Keyspace1";
 public static final String CFNAME = "Standard1";
 
-public static Map<ByteBuffer, ByteBuffer> TEST_DATA;
+public static Set<String> TEST_DATA;
 public static File LEGACY_SSTABLE_ROOT;
 
 @BeforeClass
@@ -56,11 +52,9 @@ public class LegacySSTableTest extends C
 LEGACY_SSTABLE_ROOT = new File(scp).getAbsoluteFile();
 assert LEGACY_SSTABLE_ROOT.isDirectory();
 
-TEST_DATA = new HashMap<ByteBuffer, ByteBuffer>();
+TEST_DATA = new HashSet<String>();
 for (int i = 100; i < 1000; ++i)
-{
-TEST_DATA.put(ByteBuffer.wrap(Integer.toString(i).getBytes()), ByteBuffer.wrap(("Avinash Lakshman is a good man: " + i).getBytes()));
-}
+TEST_DATA.add(Integer.toString(i));
 }
 
 /**
@@ -83,44 +77,39 @@ public class LegacySSTableTest extends C
 Descriptor dest = getDescriptor(Descriptor.CURRENT_VERSION);
 assert dest.directory.mkdirs() : "Could not create " + dest.directory + ". Might it already exist?";
 
-SSTableReader ssTable = SSTableUtils.writeRawSSTable(new 
File(dest.filenameFor(SSTable.COMPONENT_DATA)),
- KSNAME,
- CFNAME,
- TEST_DATA);
-assert ssTable.desc.generation == 0 :
+SSTableReader ssTable = 
SSTableUtils.prepare().ks(KSNAME).cf(CFNAME).dest(dest).write(TEST_DATA);
+assert ssTable.descriptor.generation == 0 :
 "In order to create a generation 0 sstable, please run this test alone.";
 System.out.println("Wrote " + dest);
 }
 */
 
 @Test
-public void testVersions() throws IOException
+public void testVersions() throws Throwable
 {
 for (File version : LEGACY_SSTABLE_ROOT.listFiles())
 if (Descriptor.versionValidate(version.getName()))
 testVersion(version.getName());
 }
 
-public void testVersion(String version)
+public void testVersion(String version) throws Throwable
 {
 try
 {
 SSTableReader reader = SSTableReader.open(getDescriptor(version));
-
-List<ByteBuffer> keys = new ArrayList<ByteBuffer>(TEST_DATA.keySet());
-Collections.shuffle(keys);
-BufferedRandomAccessFile file = new BufferedRandomAccessFile(reader.getFilename(), "r");
-for (ByteBuffer key : keys)
+for (String keystring : TEST_DATA)
 {
-// confirm that the bloom filter does not reject any keys
-
file.seek(reader.getPosition(reader.partitioner.decorateKey(key), 
SSTableReader.Operator.EQ));
-assert key.equals( FBUtilities.readShortByteArray(file));
+ByteBuffer key = ByteBuffer.wrap(keystring.getBytes());
+// confirm that the bloom filter does not reject any keys/names
+DecoratedKey dk = reader.partitioner.decorateKey(key);
+SSTableNamesIterator iter = new SSTableNamesIterator(reader, 
dk, FBUtilities.singleton(key));
+assert iter.next().name().equals(key);
 }
 }
 catch (Throwable e)
 {

[jira] Commented: (CASSANDRA-1822) Row level coverage in LegacySSTableTest

2010-12-28 Thread Gary Dusbabek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975565#action_12975565
 ] 

Gary Dusbabek commented on CASSANDRA-1822:
--

Committed to trunk.
Stu, I'm willing to commit this to the 0.7 branch if you work up a new patch 
set.

 Row level coverage in LegacySSTableTest
 ---

 Key: CASSANDRA-1822
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1822
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Stu Hood
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1822.tgz, legacy-sstables.tgz


 LegacySSTableTest should check compatibility of content within rows.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-1910) validation of time uuid is incorrect

2010-12-28 Thread Dave (JIRA)
validation of time uuid is incorrect


 Key: CASSANDRA-1910
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1910
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.0 rc 2
Reporter: Dave


It appears _TimeUUIDType_ (as of 12/9) is checking the wrong bits when 
validating a time UUID as version 1.

Per the comment and rfc4122, _version is bits 4-7 of byte 6_, however 
validate() is actually checking the least significant bits:

 _if ((slice.get() & 0x0f) != 1)_

Sample java/hector code:

{code}
// displays version 1 but validation fails
java.util.UUID uuid1 = java.util.UUID.fromString("00000000-0000-1000-0000-000000000000");
System.out.println(uuid1 + " " + uuid1.version());
TimeUUIDType.instance.validate(UUIDSerializer.get().toByteBuffer(uuid1));

// displays version 2 but validation succeeds
java.util.UUID uuid2 = java.util.UUID.fromString("00000000-0000-2100-0000-000000000000");
System.out.println(uuid2 + " " + uuid2.version());
TimeUUIDType.instance.validate(UUIDSerializer.get().toByteBuffer(uuid2));
{code}

The issue can be seen with any UUID where the timestamp doesn't start with 1:

b54adc00-67f9-10d9-9669-0800200c9a66, (timestamp year 1776) version 1 fails
b54adc00-67f9-12d9-9669-0800200c9a66, (timestamp year 2233) version 1 fails
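For reference, a sketch of a check against the high nibble instead, since bits 4-7 of byte 6 carry the version; the class below only illustrates the intended comparison and is not the actual patch:

{code}
import java.nio.ByteBuffer;

public class TimeUUIDVersionCheckSketch
{
    // Version 1 time UUIDs have 0001 in the *high* nibble of byte 6,
    // so the mask should be 0xf0 rather than the low-nibble 0x0f check above.
    public static boolean isVersion1(ByteBuffer uuidBytes)
    {
        byte b6 = uuidBytes.get(uuidBytes.position() + 6);
        return (b6 & 0xf0) == 0x10;
    }
}
{code}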


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1901) getRestrictedRanges bug where node owns minimum token

2010-12-28 Thread Stu Hood (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stu Hood updated CASSANDRA-1901:


Attachment: 
0.6-0001-Switch-minimum-token-for-RP-to-1-for-midpoint-purposes.txt

Rebased for 0.6

 getRestrictedRanges bug where node owns minimum token
 -

 Key: CASSANDRA-1901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1901
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6.9, 0.7.0 rc 2
Reporter: Jonathan Ellis
Assignee: Stu Hood
 Fix For: 0.6.9, 0.7.1

 Attachments: 
 0.6-0001-Switch-minimum-token-for-RP-to-1-for-midpoint-purposes.txt, 
 0001-Switch-minimum-token-for-RP-to-1-for-midpoint-purposes.txt


 From the ML, there are two RF=1 nodes, 0 for the local node (17.224.36.17) 
 and 85070591730234615865843651857942052864 for the remote node 
 (17.224.109.80).  Debug log shows
 {code}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 CassandraServer.java (line 
 479) range_slice
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 412) 
 RangeSliceCommand{keyspace='Harvest', column_family='TestCentroids', 
 super_column=null, predicate=SlicePredicate(slice_range:SliceRange(start:80 
 01 00 01 00 00 00 10 67 65 74 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 
 0C 0C 00 01 0B 00 03 00 00 00 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 
 00 02 0C 00 02 0B 00 01 00 00 00 00, finish:80 01 00 01 00 00 00 10 67 65 74 
 5F 72 61 6E 67 65 5F 73 6C 69 63 65 73 00 00 00 0C 0C 00 01 0B 00 03 00 00 00 
 0D 54 65 73 74 43 65 6E 74 72 6F 69 64 73 00 0C 00 02 0C 00 02 0B 00 01 00 00 
 00 00 0B 00 02 00 00 00 00, reversed:false, count:1)), range=[0,0], 
 max_keys=11}
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 597) 
 restricted ranges for query [0,0] are [[0,0]]
 DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,959 StorageProxy.java (line 423) 
 === endpoint: belize1.apple.com/17.224.36.17 for range.right 0
 {code}
 Thus, node 85070591730234615865843651857942052864 is left out.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-1911) write path should call MessagingService.removeRegisteredCallback

2010-12-28 Thread Jonathan Ellis (JIRA)
write path should call MessagingService.removeRegisteredCallback


 Key: CASSANDRA-1911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1911
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1


it would reduce memory overhead to pre-emptively clear the callback when done, 
the way the read path does.

(other IAsyncCallbacks could do this too, but only read/write have enough 
volume to matter.)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1714) zero-copy reads

2010-12-28 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-1714:
---

Remaining Estimate: 96h
 Original Estimate: 96h

 zero-copy reads
 ---

 Key: CASSANDRA-1714
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1714
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Pavel Yaskevich
 Fix For: 0.7.1

 Attachments: zerocopy.txt

   Original Estimate: 96h
  Remaining Estimate: 96h

 Since we are already using mmap'd ByteBuffers in MappedFileDataInput we 
 should be able to do zero-copy reads (via buffer.slice()), which would give 
 us better performance than CASSANDRA-1651 without having to worry about 
 buffer management.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053417 - /cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 19:59:25 2010
New Revision: 1053417

URL: http://svn.apache.org/viewvc?rev=1053417&view=rev
Log:
undo gratuitous refactoring of RRR code from #1072
patch by jbellis

Modified:

cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java

Modified: 
cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java
URL: 
http://svn.apache.org/viewvc/cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java?rev=1053417&r1=1053416&r2=1053417&view=diff
==
--- 
cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java 
(original)
+++ 
cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java 
Tue Dec 28 19:59:25 2010
@@ -56,13 +56,6 @@ public class ReadResponseResolver implem
 this.key = StorageService.getPartitioner().decorateKey(key);
 }
 
-private void checkDigest(DecoratedKey key, ByteBuffer digest, ByteBuffer 
resultDigest) throws DigestMismatchException
-{
-if (resultDigest.equals(digest))
-return;
-throw new DigestMismatchException(key, digest, resultDigest);
-}
-
 /*
   * This method for resolving read data should look at the timestamps of 
each
   * of the columns that are read and should pick up columns with the latest
@@ -92,9 +85,16 @@ public class ReadResponseResolver implem
 Message message = entry.getKey();
 if (result.isDigestQuery())
 {
-if (digest != null)
-checkDigest(key, digest, result.digest());
-digest = result.digest();
+if (digest == null)
+{
+digest = result.digest();
+}
+else
+{
+ByteBuffer digest2 = result.digest();
+if (!digest.equals(digest2))
+throw new DigestMismatchException(key, digest, 
digest2);
+}
 }
 else
 {
@@ -122,7 +122,9 @@ public class ReadResponseResolver implem
 
 for (ColumnFamily cf : versions)
 {
-checkDigest(key, digest, ColumnFamily.digest(cf));
+ByteBuffer digest2 = ColumnFamily.digest(cf);
+if (!digest2.equals(digest))
+throw new DigestMismatchException(key, digest, digest2);
 }
 if (logger_.isDebugEnabled())
 logger_.debug("digests verified");




[jira] Updated: (CASSANDRA-1370) TokenMetaData.getPendingRangesMM() is unnecessarily synchronized

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1370:
--

Affects Version/s: (was: 0.6.4)
   0.6
Fix Version/s: 0.7.1
   0.6.9
 Assignee: Brandon Williams

Also: getWriteEndpoints calls getPendingRanges twice when it is not empty; 
cheaper to save the first result.

 TokenMetaData.getPendingRangesMM() is unnecessarily synchronized
 

 Key: CASSANDRA-1370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1370
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.6
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.6.9, 0.7.1


 TokenMetaData.getPendingRangesMM() is currently synchronized to avoid a race 
 condition where multiple threads might create a multimap for the given table. 
  However, the pendingRanges instance variable that's the subject of the race 
 condition is already a ConcurrentHashMap, and the race condition can be 
 avoided by using putIfAbsent, leaving the case where the table's map is 
 already initialized lock-free:
 private Multimap<Range, InetAddress> getPendingRangesMM(String table)
 {
     Multimap<Range, InetAddress> map = pendingRanges.get(table);
     if (map == null)
     {
         map = HashMultimap.create();
         Multimap<Range, InetAddress> fasterHorse = pendingRanges.putIfAbsent(table, map);
         if (fasterHorse != null) {
             // another thread beat us to creating the map, oh well.
             map = fasterHorse;
         }
     }
     return map;
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1912) Cassandra should flush a keyspace after it was dropped

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1912:
--

Fix Version/s: 0.7.1
 Assignee: Gary Dusbabek

 Cassandra should flush a keyspace after it was dropped
 --

 Key: CASSANDRA-1912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1912
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0 rc 2
 Environment: any
Reporter: Daniel Kraft
Assignee: Gary Dusbabek
 Fix For: 0.7.1


 After dropping a keyspace one must not forget to flush it via
   nodetool -h <host> flush <dropped keyspace>
 If you forget to do this and restart Cassandra, the following stack trace 
 appears and Cassandra doesn't start up again:
  INFO 20:22:56,420 Heap size: 4117889024/4137549824
  INFO 20:22:56,424 JNA not found. Native methods will be disabled.
  INFO 20:22:56,432 Loading settings from 
 file:/home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/conf/cassandra.yaml
  INFO 20:22:56,547 DiskAccessMode 'auto' determined to be mmap, 
 indexAccessMode is mmap
  INFO 20:22:56,612 Creating new commitlog segment 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/commitlog/CommitLog-1293564176612.log
  INFO 20:22:56,666 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/system-IndexInfo-KeyCache
  INFO 20:22:56,673 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/IndexInfo-e-1
  INFO 20:22:56,698 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/system-Schema-KeyCache
  INFO 20:22:56,700 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/Schema-e-41
  INFO 20:22:56,703 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/Schema-e-43
  INFO 20:22:56,705 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/Schema-e-42
  INFO 20:22:56,727 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/system-Migrations-KeyCache
  INFO 20:22:56,728 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/Migrations-e-41
  INFO 20:22:56,730 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/Migrations-e-42
  INFO 20:22:56,734 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/system-LocationInfo-KeyCache
  INFO 20:22:56,735 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/LocationInfo-e-6
  INFO 20:22:56,739 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/LocationInfo-e-7
  INFO 20:22:56,741 Opening 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/data/system/LocationInfo-e-5
  INFO 20:22:56,746 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/system-HintsColumnFamily-KeyCache
  INFO 20:22:56,776 Loading schema version 460e882a-1291-11e0-92ff-e700f669bcfc
  WARN 20:22:56,943 Schema definitions were defined both locally and in 
 cassandra.yaml. Definitions in cassandra.yaml were ignored.
  INFO 20:22:56,954 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Indexed1-KeyCache
  INFO 20:22:56,959 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Super1-KeyCache
  INFO 20:22:56,960 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Standard2-KeyCache
  INFO 20:22:56,961 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Super2-KeyCache
  INFO 20:22:56,962 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Standard1-KeyCache
  INFO 20:22:56,963 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-Super3-KeyCache
  INFO 20:22:56,964 reading saved cache 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/saved_caches/Keyspace1-StandardByUUID1-KeyCache
  INFO 20:22:56,971 Replaying 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/commitlog/CommitLog-1293524239077.log,
  
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/commitlog/CommitLog-1293563866498.log
  INFO 20:22:56,974 Finished reading 
 /home/dk/develop/cassandra/apache-cassandra-0.7.0-rc3/STORAGE/commitlog/CommitLog-1293524239077.log
 ERROR 20:22:56,975 Exception encountered during startup.
 java.lang.NullPointerException
 at 
 

[jira] Updated: (CASSANDRA-1902) Migrate cached pages during compaction

2010-12-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-1902:
--

Remaining Estimate: 32h  (was: 96h)
 Original Estimate: 32h  (was: 96h)

 Migrate cached pages during compaction 
 ---

 Key: CASSANDRA-1902
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1902
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.1
Reporter: T Jake Luciani
Assignee: T Jake Luciani
 Fix For: 0.7.1

   Original Estimate: 32h
  Remaining Estimate: 32h

 Post CASSANDRA-1470 there is an opportunity to migrate cached pages from a 
 pre-compacted CF during the compaction process.  
 First, add a method to MmappedSegmentFile: long[] pagesInPageCache() that 
 uses the posix mincore() function to detect the offsets of pages for this 
 file currently in page cache.
 Then add getActiveKeys() which uses underlying pagesInPageCache() to get the 
 keys actually in the page cache.
 use getActiveKeys() to detect which SSTables being compacted are in the os 
 cache and make sure the subsequent pages in the new compacted SSTable are 
 kept in the page cache for these keys. This will minimize the impact of 
 compacting a hot SSTable.
 A simpler yet similar approach is described here: 
 http://insights.oetiker.ch/linux/fadvise/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1369) FBUtilities.hash can result in thread contention on call to MessageDigest.getInstance()

2010-12-28 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-1369:


Attachment: 1369.txt

Patch to use the threadlocal approach.

 FBUtilities.hash can result in thread contention on call to 
 MessageDigest.getInstance()
 ---

 Key: CASSANDRA-1369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1369
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1369.txt


 FBUtilities.hash() calls MessageDigest.getInstance() on every invocation, 
 which in turn calls the synchronized method Provider.getService().  
 FBUtilities.md5hash() is frequently invoked from RandomPartitioner, and minor 
 thread contention in this codepath can be observed when running 
 contrib/py_stress against an out-of-box Cassandra installation.
 One possible solution is to preallocate md5 MessageDigest instances and store 
 them as threadlocals.
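A minimal sketch of that threadlocal approach, assuming an MD5 digest and using initialValue() so the per-thread instance is created lazily and never left unset; the class name is illustrative only and this is not the attached patch:

{code}
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ThreadLocalDigestSketch
{
    // One MessageDigest per thread, created on first use via initialValue().
    private static final ThreadLocal<MessageDigest> localMD5 = new ThreadLocal<MessageDigest>()
    {
        @Override
        protected MessageDigest initialValue()
        {
            try
            {
                return MessageDigest.getInstance("MD5");
            }
            catch (NoSuchAlgorithmException e)
            {
                throw new RuntimeException(e);
            }
        }
    };

    public static byte[] hash(byte[] data)
    {
        MessageDigest md = localMD5.get();
        md.reset(); // digests are stateful, so reset before reuse
        return md.digest(data);
    }
}
{code}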

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1369) FBUtilities.hash can result in thread contention on call to MessageDigest.getInstance()

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1369:
--

Attachment: 1369-v2.txt

Isn't this going to NPE?  The threadlocal is never set.  Unit tests use COPP so 
probably never call hash.

v2 attached (still untested :)

 FBUtilities.hash can result in thread contention on call to 
 MessageDigest.getInstance()
 ---

 Key: CASSANDRA-1369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1369
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.7.1

 Attachments: 1369-v2.txt, 1369.txt


 FBUtilities.hash() calls MessageDigest.getInstance() on every invocation, 
 which in turn calls the synchronized method Provider.getService().  
 FBUtilities.md5hash() is frequently invoked from RandomPartitioner, and minor 
 thread contention in this codepath can be observed when running 
 contrib/py_stress against an out-of-box Cassandra installation.
 One possible solution is to preallocate md5 MessageDigest instances and store 
 them as threadlocals.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053443 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 21:23:50 2010
New Revision: 1053443

URL: http://svn.apache.org/viewvc?rev=1053443&view=rev
Log:
simplify and update comments for RRR.resolve
patch by jbellis

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java?rev=1053443&r1=1053442&r2=1053443&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/ReadResponseResolver.java
 Tue Dec 28 21:23:50 2010
@@ -54,13 +54,17 @@ public class ReadResponseResolver implem
 }
 
 /*
-  * This method for resolving read data should look at the timestamps of 
each
-  * of the columns that are read and should pick up columns with the latest
-  * timestamp. For those columns where the timestamp is not the latest a
-  * repair request should be scheduled.
-  *
-  */
-   public Row resolve() throws DigestMismatchException, IOException
+ * This method handles two different scenarios:
+ *
+ * 1) we're handling the initial read, of data from the closest replica + 
digests
+ *from the rest.  In this case we check the digests against each other,
+ *throw an exception if there is a mismatch, otherwise return the data 
row.
+ *
+ * 2) there was a mismatch on the initial read, so we redid the digest 
requests
+ *as full data reads.  In this case we need to compute the most recent 
version
+ *of each column, and send diffs to out-of-date replicas.
+ */
+public Row resolve() throws DigestMismatchException, IOException
 {
 if (logger_.isDebugEnabled())
 logger_.debug("resolving " + results.size() + " responses");
@@ -70,50 +74,27 @@ public class ReadResponseResolver implem
List<InetAddress> endpoints = new ArrayList<InetAddress>();
ByteBuffer digest = null;
 
-/*
-* Populate the list of rows from each of the messages
-* Check to see if there is a digest query. If a digest 
- * query exists then we need to compare the digest with 
- * the digest of the data that is received.
-*/
+// validate digests against each other; throw immediately on mismatch.
+// also, collects data results into versions/endpoints lists.
 for (Map.Entry<Message, ReadResponse> entry : results.entrySet())
 {
 ReadResponse result = entry.getValue();
 Message message = entry.getKey();
-if (result.isDigestQuery())
-{
-if (digest == null)
-{
-digest = result.digest();
-}
-else
-{
-ByteBuffer digest2 = result.digest();
-if (!digest.equals(digest2))
-throw new DigestMismatchException(key, digest, 
digest2);
-}
-}
-else
+ByteBuffer resultDigest = result.isDigestQuery() ? result.digest() 
: ColumnFamily.digest(result.row().cf);
+if (digest == null)
+digest = resultDigest;
+else if (!digest.equals(resultDigest))
+throw new DigestMismatchException(key, digest, resultDigest);
+
+if (!result.isDigestQuery())
 {
 versions.add(result.row().cf);
 endpoints.add(message.getFrom());
 }
 }
 
-   // If there was a digest query compare it with all the data 
digests
-   // If there is a mismatch then throw an exception so that read 
repair can happen.
-if (digest != null)
-{
-
-for (ColumnFamily cf : versions)
-{
-ByteBuffer digest2 = ColumnFamily.digest(cf);
-if (!digest.equals(digest2))
-throw new DigestMismatchException(key, digest, digest2);
-}
-if (logger_.isDebugEnabled())
-logger_.debug("digests verified");
-}
+if (logger_.isDebugEnabled())
+logger_.debug("digests verified");
 
 ColumnFamily resolved;
 if (versions.size() > 1)




svn commit: r1053445 - in /cassandra/trunk: ./ interface/thrift/gen-java/org/apache/cassandra/thrift/ src/java/org/apache/cassandra/service/

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 21:27:47 2010
New Revision: 1053445

URL: http://svn.apache.org/viewvc?rev=1053445&view=rev
Log:
merge from 0.7

Modified:
cassandra/trunk/   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java
   (props changed)

cassandra/trunk/src/java/org/apache/cassandra/service/ReadResponseResolver.java

Propchange: cassandra/trunk/
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:27:47 2010
@@ -1,5 +1,5 @@
 /cassandra/branches/cassandra-0.6:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7:1026517-1053409
+/cassandra/branches/cassandra-0.7:1026517-1053443
 /incubator/cassandra/branches/cassandra-0.3:774578-796573
 /incubator/cassandra/branches/cassandra-0.4:810145-834239,834349-834350
 /incubator/cassandra/branches/cassandra-0.5:72-915439

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:27:47 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026517-1053409
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026517-1053443
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/Cassandra.java:774578-796573
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/Cassandra.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/Cassandra.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:27:47 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026517-1053409
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026517-1053443
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/column_t.java:774578-792198
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/Column.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/Column.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:27:47 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026517-1053409
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026517-1053443
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:774578-796573
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:27:47 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:922689-1052356,1052358-1053244

[jira] Updated: (CASSANDRA-1370) TokenMetaData.getPendingRangesMM() is unnecessarily synchronized

2010-12-28 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-1370:


Attachment: 1370.txt

Patch to remove synchronization and avoid the redundant call.

 TokenMetaData.getPendingRangesMM() is unnecessarily synchronized
 

 Key: CASSANDRA-1370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1370
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.6
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.6.9, 0.7.1

 Attachments: 1370.txt


 TokenMetaData.getPendingRangesMM() is currently synchronized to avoid a race 
 condition where multiple threads might create a multimap for the given table. 
  However, the pendingRanges instance variable that's the subject of the race 
 condition is already a ConcurrentHashMap, and the race condition can be 
 avoided by using putIfAbsent, leaving the case where the table's map is 
 already initialized lock-free:
 private Multimap<Range, InetAddress> getPendingRangesMM(String table)
 {
     Multimap<Range, InetAddress> map = pendingRanges.get(table);
     if (map == null)
     {
         map = HashMultimap.create();
         Multimap<Range, InetAddress> fasterHorse = pendingRanges.putIfAbsent(table, map);
         if (fasterHorse != null)
         {
             // another thread beat us to creating the map, oh well.
             map = fasterHorse;
         }
     }
     return map;
 }
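
For illustration, the same lock-free lazy-initialization idiom pulled out into a tiny
standalone class (a sketch only; the names LazyMap and Factory are hypothetical and not
part of the patch). Both racing threads may build a value, but putIfAbsent guarantees
they end up sharing whichever instance won the race:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class LazyMap<K, V>
    {
        public interface Factory<V> { V create(); }

        private final ConcurrentMap<K, V> map = new ConcurrentHashMap<K, V>();

        public V getOrCreate(K key, Factory<V> factory)
        {
            V value = map.get(key);
            if (value == null)
            {
                value = factory.create();              // may be built by several threads...
                V prior = map.putIfAbsent(key, value);
                if (prior != null)
                    value = prior;                     // ...but only the first insert wins
            }
            return value;
        }
    }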

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1370) TokenMetaData.getPendingRangesMM() is unnecessarily synchronized

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975638#action_12975638
 ] 

Jonathan Ellis commented on CASSANDRA-1370:
---

nit: would prefer calling newmap oldMap, priorMap, otherMap, or previousMap.

otherwise, +1

 TokenMetaData.getPendingRangesMM() is unnecessarily synchronized
 

 Key: CASSANDRA-1370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1370
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.6
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.6.9, 0.7.1

 Attachments: 1370.txt


 TokenMetaData.getPendingRangesMM() is currently synchronized to avoid a race 
 condition where multiple threads might create a multimap for the given table. 
  However, the pendingRanges instance variable that's the subject of the race 
 condition is already a ConcurrentHashMap, and the race condition can be 
 avoided by using putIfAbsent, leaving the case where the table's map is 
 already initialized lock-free:
 private Multimap<Range, InetAddress> getPendingRangesMM(String table)
 {
     Multimap<Range, InetAddress> map = pendingRanges.get(table);
     if (map == null)
     {
         map = HashMultimap.create();
         Multimap<Range, InetAddress> fasterHorse = pendingRanges.putIfAbsent(table, map);
         if (fasterHorse != null)
         {
             // another thread beat us to creating the map, oh well.
             map = fasterHorse;
         }
     }
     return map;
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053450 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java

2010-12-28 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Dec 28 21:43:45 2010
New Revision: 1053450

URL: http://svn.apache.org/viewvc?rev=1053450&view=rev
Log:
Avoid synchronization in getPendingRanges and unecessarily calling it
twice.
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-1370

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java?rev=1053450&r1=1053449&r2=1053450&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/locator/TokenMetadata.java
 Tue Dec 28 21:43:45 2010
@@ -323,13 +323,15 @@ public class TokenMetadata
         }
     }
 
-    private synchronized Multimap<Range, InetAddress> getPendingRangesMM(String table)
+    private Multimap<Range, InetAddress> getPendingRangesMM(String table)
     {
         Multimap<Range, InetAddress> map = pendingRanges.get(table);
         if (map == null)
         {
-            map = HashMultimap.create();
-            pendingRanges.put(table, map);
+            map = HashMultimap.create();
+            Multimap<Range, InetAddress> priorMap = pendingRanges.putIfAbsent(table, map);
+            if (priorMap != null)
+                map = priorMap;
         }
         return map;
     }
@@ -556,12 +558,13 @@ public class TokenMetadata
      */
     public Collection<InetAddress> getWriteEndpoints(Token token, String table, Collection<InetAddress> naturalEndpoints)
     {
-        if (getPendingRanges(table).isEmpty())
+        Map<Range, Collection<InetAddress>> ranges = getPendingRanges(table);
+        if (ranges.isEmpty())
             return naturalEndpoints;
 
         List<InetAddress> endpoints = new ArrayList<InetAddress>(naturalEndpoints);
 
-        for (Map.Entry<Range, Collection<InetAddress>> entry : getPendingRanges(table).entrySet())
+        for (Map.Entry<Range, Collection<InetAddress>> entry : ranges.entrySet())
         {
             if (entry.getKey().contains(token))
             {




[jira] Commented: (CASSANDRA-1908) Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA

2010-12-28 Thread Peter Schuller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975641#action_12975641
 ] 

Peter Schuller commented on CASSANDRA-1908:
---

I was thinking along those lines (a separate project) before, when there was some 
JNA vs. JNI debate w.r.t. direct I/O. A separate project could provide 
some fairly simple and specific things that tend to be useful in a pragmatic 
way, without trying to be overly formal or a complete posix wrapper. It could 
be useful for others besides Cassandra, and it would remove any build-hassle 
trade-offs in the JNA vs. JNI decision from Cassandra.

Also, for future work that might imply very frequent mincore() calls, the 
calling overhead, if the numbers claimed are correct, could possibly be 
significant for Cassandra (although I'm pretty paranoid about the cost of 
mincore() to begin with, which might dwarf the JNI vs. JNA issue).

 Implement the CLibrary using JNI module to avoid the LGPL dependency on JNA
 ---

 Key: CASSANDRA-1908
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1908
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Hiram Chirino
 Fix For: 0.7.1

 Attachments: cassandra-jni.zip


 Cassandra can't ship JNA out of the box since it's LGPL licensed, so many of 
 the performance optimizing features in the CLibrary class are not available 
 in a simple install.  It should be trivial to implement a real JNI library 
 for the CLibrary class.
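
As a rough illustration of what the ticket asks for (a sketch under assumptions: the
class name NativeCLib, the library name "cassandra-jni", and the choice of wrapped calls
are all hypothetical, not an existing Cassandra API), the Java side of a JNI replacement
only needs native declarations plus a library load:

    public final class NativeCLib
    {
        static
        {
            // expects libcassandra-jni.so (or the platform equivalent) on java.library.path
            System.loadLibrary("cassandra-jni");
        }

        // candidates for the posix calls CLibrary currently reaches through JNA
        public static native int mlockall(int flags);
        public static native int munlockall();

        private NativeCLib() {}
    }

The matching C implementations would be generated from javah headers and built as a
small shared library shipped alongside the release.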

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053453 - /cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java

2010-12-28 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Dec 28 21:49:01 2010
New Revision: 1053453

URL: http://svn.apache.org/viewvc?rev=1053453&view=rev
Log:
Avoid synchronization in getPendingRanges.
Patch by brandonwilliams, reviewed by jbellis for
CASSANDRA-1370

Modified:

cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java

Modified: 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java?rev=1053453&r1=1053452&r2=1053453&view=diff
==
--- 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
 (original)
+++ 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
 Tue Dec 28 21:49:01 2010
@@ -319,13 +319,15 @@ public class TokenMetadata
         }
     }
 
-    private synchronized Multimap<Range, InetAddress> getPendingRangesMM(String table)
+    private Multimap<Range, InetAddress> getPendingRangesMM(String table)
    {
         Multimap<Range, InetAddress> map = pendingRanges.get(table);
         if (map == null)
         {
-            map = HashMultimap.create();
-            pendingRanges.put(table, map);
+            map = HashMultimap.create();
+            Multimap<Range, InetAddress> newmap = pendingRanges.putIfAbsent(table, map);
+            if (newmap != null)
+                map = newmap;
         }
         return map;
     }




svn commit: r1053455 - /cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java

2010-12-28 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Dec 28 21:52:35 2010
New Revision: 1053455

URL: http://svn.apache.org/viewvc?rev=1053455&view=rev
Log:
newmap - priorMap

Modified:

cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java

Modified: 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java?rev=1053455&r1=1053454&r2=1053455&view=diff
==
--- 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
 (original)
+++ 
cassandra/branches/cassandra-0.6/src/java/org/apache/cassandra/locator/TokenMetadata.java
 Tue Dec 28 21:52:35 2010
@@ -325,9 +325,9 @@ public class TokenMetadata
         if (map == null)
         {
             map = HashMultimap.create();
-            Multimap<Range, InetAddress> newmap = pendingRanges.putIfAbsent(table, map);
-            if (newmap != null)
-                map = newmap;
+            Multimap<Range, InetAddress> priorMap = pendingRanges.putIfAbsent(table, map);
+            if (priorMap != null)
+                map = priorMap;
         }
         return map;
     }




svn commit: r1053457 - in /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra: dht/RandomPartitioner.java utils/FBUtilities.java

2010-12-28 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Dec 28 21:55:37 2010
New Revision: 1053457

URL: http://svn.apache.org/viewvc?rev=1053457&view=rev
Log:
Avoid thread contention in FBUtilities.hash
Patch by brandonwilliams and jbellis, reviewed by brandonwilliams for
CASSANDRA-1369

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java?rev=1053457&r1=1053456&r2=1053457&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/dht/RandomPartitioner.java
 Tue Dec 28 21:55:37 2010
@@ -80,7 +80,7 @@ public class RandomPartitioner implement
 
     public BigIntegerToken getRandomToken()
     {
-        BigInteger token = FBUtilities.md5hash(GuidGenerator.guidAsBytes());
+        BigInteger token = FBUtilities.hashToBigInteger(GuidGenerator.guidAsBytes());
         if ( token.signum() == -1 )
             token = token.multiply(BigInteger.valueOf(-1L));
         return new BigIntegerToken(token);
@@ -126,7 +126,7 @@ public class RandomPartitioner implement
     {
         if (key.remaining() == 0)
             return MINIMUM;
-        return new BigIntegerToken(FBUtilities.md5hash(key));
+        return new BigIntegerToken(FBUtilities.hashToBigInteger(key));
     }
 
     public Map<Token, Float> describeOwnership(List<Token> sortedTokens)

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java?rev=1053457&r1=1053456&r2=1053457&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/FBUtilities.java
 Tue Dec 28 21:55:37 2010
@@ -28,6 +28,7 @@ import java.net.UnknownHostException;
 import java.nio.ByteBuffer;
 import java.nio.charset.CharacterCodingException;
 import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
 import java.util.*;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
@@ -64,6 +65,22 @@ public class FBUtilities
 
     private static volatile InetAddress localInetAddress_;
 
+    private static final ThreadLocal<MessageDigest> localMessageDigest = new ThreadLocal<MessageDigest>()
+    {
+        @Override
+        protected MessageDigest initialValue()
+        {
+            try
+            {
+                return MessageDigest.getInstance("MD5");
+            }
+            catch (NoSuchAlgorithmException e)
+            {
+                throw new AssertionError(e);
+            }
+        }
+    };
+
     public static final int MAX_UNSIGNED_SHORT = 0xFFFF;
 
     /**
@@ -218,19 +235,20 @@ public class FBUtilities
         return out;
     }
 
-    public static BigInteger md5hash(ByteBuffer data)
+    public static BigInteger hashToBigInteger(ByteBuffer data)
     {
-        byte[] result = hash("MD5", data);
+        byte[] result = hash(data);
         BigInteger hash = new BigInteger(result);
         return hash.abs();
     }
 
-    public static byte[] hash(String type, ByteBuffer... data)
+    public static byte[] hash(ByteBuffer... data)
     {
         byte[] result;
         try
         {
-            MessageDigest messageDigest = MessageDigest.getInstance(type);
+            MessageDigest messageDigest = localMessageDigest.get();
+            messageDigest.reset();
             for (ByteBuffer block : data)
                 messageDigest.update(block.array(), block.position() + block.arrayOffset(), block.remaining());
             result = messageDigest.digest();
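
The pattern above is worth calling out: a per-thread MessageDigest avoids both the cost
of MessageDigest.getInstance() on every call and contention on a shared digest. A
minimal standalone sketch of the same idiom (the class name ThreadLocalMd5 is
hypothetical; this is not Cassandra's code):

    import java.math.BigInteger;
    import java.nio.ByteBuffer;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class ThreadLocalMd5
    {
        private static final ThreadLocal<MessageDigest> DIGEST = new ThreadLocal<MessageDigest>()
        {
            @Override
            protected MessageDigest initialValue()
            {
                try
                {
                    return MessageDigest.getInstance("MD5");   // every JRE ships MD5
                }
                catch (NoSuchAlgorithmException e)
                {
                    throw new AssertionError(e);
                }
            }
        };

        public static BigInteger hashToBigInteger(ByteBuffer data)
        {
            MessageDigest md = DIGEST.get();
            md.reset();                        // clear state left by the previous call
            md.update(data.duplicate());       // duplicate() keeps the caller's position intact
            return new BigInteger(md.digest()).abs();
        }
    }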




svn commit: r1053458 - in /cassandra/branches/cassandra-0.7: ./ interface/thrift/gen-java/org/apache/cassandra/thrift/

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 21:56:38 2010
New Revision: 1053458

URL: http://svn.apache.org/viewvc?rev=1053458&view=rev
Log:
merge from 0.6

Modified:
cassandra/branches/cassandra-0.7/   (props changed)

cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
   (props changed)

cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
   (props changed)

cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
   (props changed)

cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
   (props changed)

cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java
   (props changed)

Propchange: cassandra/branches/cassandra-0.7/
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:56:38 2010
@@ -1,4 +1,4 @@
-/cassandra/branches/cassandra-0.6:922689-1053244
+/cassandra/branches/cassandra-0.6:922689-1053244,1053453,1053455
 /cassandra/branches/cassandra-0.7:1035666,1050269
 /cassandra/trunk:1026516-1026734,1028929
 /incubator/cassandra/branches/cassandra-0.3:774578-796573

Propchange: 
cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:56:38 2010
@@ -1,4 +1,4 @@
-/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:922689-1053244
+/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:922689-1053244,1053453,1053455
 
/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1035666,1050269
 
/cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026516-1026734,1028929
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/Cassandra.java:774578-796573

Propchange: 
cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:56:38 2010
@@ -1,4 +1,4 @@
-/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:922689-1053244
+/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:922689-1053244,1053453,1053455
 
/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1035666,1050269
 
/cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026516-1026734,1028929
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/column_t.java:774578-792198

Propchange: 
cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:56:38 2010
@@ -1,4 +1,4 @@
-/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:922689-1053244
+/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:922689-1053244,1053453,1053455
 
/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1035666,1050269
 
/cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026516-1026734,1028929
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:774578-796573

Propchange: 
cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:56:38 2010
@@ -1,4 +1,4 @@
-/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:922689-1053244
+/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:922689-1053244,1053453,1053455
 
/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:1035666,1050269
 
/cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:1026516-1026734,1028929
 

svn commit: r1053459 - in /cassandra/trunk: ./ interface/thrift/gen-java/org/apache/cassandra/thrift/

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 21:57:17 2010
New Revision: 1053459

URL: http://svn.apache.org/viewvc?rev=1053459&view=rev
Log:
merge from 0.7

Modified:
cassandra/trunk/   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
   (props changed)

cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/SuperColumn.java
   (props changed)

Propchange: cassandra/trunk/
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:57:17 2010
@@ -1,5 +1,5 @@
 /cassandra/branches/cassandra-0.6:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7:1026517-1053443
+/cassandra/branches/cassandra-0.7:1026517-1053443,1053457-1053458
 /incubator/cassandra/branches/cassandra-0.3:774578-796573
 /incubator/cassandra/branches/cassandra-0.4:810145-834239,834349-834350
 /incubator/cassandra/branches/cassandra-0.5:72-915439

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:57:17 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026517-1053443
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Cassandra.java:1026517-1053443,1053457-1053458
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/Cassandra.java:774578-796573
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/Cassandra.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/Cassandra.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:57:17 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026517-1053443
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/Column.java:1026517-1053443,1053457-1053458
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/column_t.java:774578-792198
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/Column.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/Column.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:57:17 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:922689-1052356,1052358-1053244
-/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026517-1053443
+/cassandra/branches/cassandra-0.7/interface/thrift/gen-java/org/apache/cassandra/thrift/InvalidRequestException.java:1026517-1053443,1053457-1053458
 
/incubator/cassandra/branches/cassandra-0.3/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:774578-796573
 
/incubator/cassandra/branches/cassandra-0.4/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:810145-834239,834349-834350
 
/incubator/cassandra/branches/cassandra-0.5/interface/gen-java/org/apache/cassandra/service/InvalidRequestException.java:72-903502

Propchange: 
cassandra/trunk/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java
--
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Dec 28 21:57:17 2010
@@ -1,5 +1,5 @@
 
/cassandra/branches/cassandra-0.6/interface/thrift/gen-java/org/apache/cassandra/thrift/NotFoundException.java:922689-1052356,1052358-1053244

[jira] Commented: (CASSANDRA-1882) rate limit all background I/O

2010-12-28 Thread Peter Schuller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975647#action_12975647
 ] 

Peter Schuller commented on CASSANDRA-1882:
---

(First, haven't done further work yet because I'm away traveling and not really 
doing development.)

Jake: Thanks. However I'm pretty skeptical as io niceness only gives a very 
very coarse way of specifying what you want. So even if it worked beautifully 
in some particular case, it won't in others, and there is no good way to 
control it AFAIK.

For example, the very first test I did (writing at a fixed speed at fixed chunk 
size concurrently with seek-bound small reads) failed miserably by completely 
starving the writes (and this was *without* ionice)  until I switched away from 
cfq to noop or deadline because cfq refused to actually submit I/O requests to 
the device to do its own scheduling based on better information (more on that 
in a future comment). The support for io nice is specific to cfq btw.

I don't want to talk too many specifics yet because I want to do some more 
testing and try a bit harder to make cfq do what I want before I start making 
claims, but I think that in general, rate limiting I/O in such a way that you 
get sufficient throughput while not having a too adverse effect on foreground 
reads is going to take some runtime tuning depending on both workload and 
hardware (e.g., lone disk vs. 6 disk RAID10 are entirely different matters). I 
think that simply telling the kernel to de-prioritize the compaction workload 
might work well in some very specific situations (exactly the right kernel 
version, io scheduler choice/parameters, workloads and underlying storage 
device), but not in general. 

More to come, hopefully with some Python code + sysbench command lines for easy 
testing by others on differing hardware setups. (I have not yet tested with a 
real rate-limited Cassandra, but did test with sysbench for reads and a 
Python writer doing chunk-size I/O with fsync(). Tests were done on raid5/raid10 and 
with xfs and ext4 (not all permutations). While file system choice matters 
somewhat, all results instantly became useless once I realized the I/O scheduling 
was orders of magnitude more important.)


 rate limit all background I/O
 -

 Key: CASSANDRA-1882
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1882
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Peter Schuller
Assignee: Peter Schuller
Priority: Minor
 Fix For: 0.7.1


 There is a clear need to support rate limiting of all background I/O (e.g., 
 compaction, repair). In some cases background I/O is naturally rate limited 
 as a result of being CPU bottlenecked, but in all cases where the CPU is not 
 the bottleneck, background streaming I/O is almost guaranteed (barring a very 
 very smart RAID controller or I/O subsystem that happens to cater extremely 
 well to the use case) to be detrimental to the latency and throughput of 
 regular live traffic (reads).
 Ways in which live traffic is negatively affected by backgrounds I/O includes:
 * Indirectly by page cache eviction (see e.g. CASSANDRA-1470).
 * Reads are directly detrimental when not otherwise limited for the usual 
 reasons; large continuing read requests that keep coming are battling with 
 latency sensitive live traffic (mostly seek bound). Mixing seek-bound latency 
 critical with bulk streaming is a classic no-no for I/O scheduling.
 * Writes are directly detrimental in a similar fashion.
 * But in particular, writes are more difficult still: Caching effects tend to 
 augment the effects because lacking any kind of fsync() or direct I/O, the 
 operating system and/or RAID controller tends to defer writes when possible. 
 This often leads to a very sudden throttling of the application when caches 
 are filled, at which point there is potentially a huge backlog of data to 
 write.
 ** This may evict a lot of data from page cache since dirty buffers cannot be 
 evicted prior to being flushed out (though CASSANDRA-1470 and related will 
 hopefully help here).
 ** In particular, one major reason why battery-backed RAID controllers are 
 great is that they have the capability to eat storms of writes very quickly 
 and schedule them pretty efficiently with respect to a concurrent continuous 
 stream of reads. But this ability is defeated if we just throw data at it 
 until entirely full. Instead a rate-limited approach means that data can be 
 thrown at said RAID controller at a reasonable pace and it can be allowed to 
 do its job of limiting the impact of those writes on reads.
 I propose a mechanism whereby all such background reads are rate limited in 
 terms of MB/sec throughput. There would be:
 * A configuration option to state 
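
To make the MB/sec idea concrete, here is a minimal throttle sketch of the kind of
mechanism proposed (purely illustrative; SimpleThrottle and acquire() are hypothetical
names, not a Cassandra or ticket API):

    // Callers ask for permission before each chunk of background I/O; the throttle
    // sleeps just long enough to keep the long-run rate at or below the target.
    public class SimpleThrottle
    {
        private final double bytesPerSecond;
        private double allowance;       // bytes currently available to spend
        private long lastCheckNanos;

        public SimpleThrottle(double bytesPerSecond)
        {
            this.bytesPerSecond = bytesPerSecond;
            this.allowance = bytesPerSecond;
            this.lastCheckNanos = System.nanoTime();
        }

        public synchronized void acquire(long bytes) throws InterruptedException
        {
            long now = System.nanoTime();
            allowance = Math.min(bytesPerSecond,
                                 allowance + (now - lastCheckNanos) / 1e9 * bytesPerSecond);
            lastCheckNanos = now;
            if (allowance < bytes)
            {
                Thread.sleep((long) ((bytes - allowance) / bytesPerSecond * 1000));
                allowance = 0;
            }
            else
            {
                allowance -= bytes;
            }
        }
    }

A compaction or repair loop would then call acquire(chunkSize) before every read or
write of chunkSize bytes, with the rate exposed as a configuration or JMX setting.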

[jira] Commented: (CASSANDRA-1370) TokenMetaData.getPendingRangesMM() is unnecessarily synchronized

2010-12-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975650#action_12975650
 ] 

Hudson commented on CASSANDRA-1370:
---

Integrated in Cassandra-0.6 #39 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.6/39/])
Avoid synchronization in getPendingRanges.
Patch by brandonwilliams, reviewed by jbellis for
CASSANDRA-1370


 TokenMetaData.getPendingRangesMM() is unnecessarily synchronized
 

 Key: CASSANDRA-1370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1370
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.6
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.6.9, 0.7.1

 Attachments: 1370.txt


 TokenMetaData.getPendingRangesMM() is currently synchronized to avoid a race 
 condition where multiple threads might create a multimap for the given table. 
  However, the pendingRanges instance variable that's the subject of the race 
 condition is already a ConcurrentHashMap, and the race condition can be 
 avoided by using putIfAbsent, leaving the case where the table's map is 
 already initialized lock-free:
 private Multimap<Range, InetAddress> getPendingRangesMM(String table)
 {
     Multimap<Range, InetAddress> map = pendingRanges.get(table);
     if (map == null)
     {
         map = HashMultimap.create();
         Multimap<Range, InetAddress> fasterHorse = pendingRanges.putIfAbsent(table, map);
         if (fasterHorse != null)
         {
             // another thread beat us to creating the map, oh well.
             map = fasterHorse;
         }
     }
     return map;
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053465 - /cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java

2010-12-28 Thread jbellis
Author: jbellis
Date: Tue Dec 28 22:08:21 2010
New Revision: 1053465

URL: http://svn.apache.org/viewvc?rev=1053465&view=rev
Log:
RSRR doesn't actually throw DigestMismatchException

Modified:

cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java?rev=1053465&r1=1053464&r2=1053465&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java
 (original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/service/RangeSliceResponseResolver.java
 Tue Dec 28 22:08:21 2010
@@ -54,7 +54,7 @@ public class RangeSliceResponseResolver 
         this.table = table;
     }
 
-    public List<Row> resolve() throws DigestMismatchException, IOException
+    public List<Row> resolve() throws IOException
     {
         CollatingIterator collator = new CollatingIterator(new Comparator<Pair<Row, InetAddress>>()
         {




[jira] Commented: (CASSANDRA-1370) TokenMetaData.getPendingRangesMM() is unnecessarily synchronized

2010-12-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975655#action_12975655
 ] 

Hudson commented on CASSANDRA-1370:
---

Integrated in Cassandra-0.7 #126 (See 
[https://hudson.apache.org/hudson/job/Cassandra-0.7/126/])
Avoid synchronization in getPendingRanges and unecessarily calling it
twice.
Patch by brandonwilliams, reviewed by jbellis for CASSANDRA-1370


 TokenMetaData.getPendingRangesMM() is unnecessarily synchronized
 

 Key: CASSANDRA-1370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1370
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.6
Reporter: Jason Fager
Assignee: Brandon Williams
Priority: Minor
 Fix For: 0.6.9, 0.7.1

 Attachments: 1370.txt


 TokenMetaData.getPendingRangesMM() is currently synchronized to avoid a race 
 condition where multiple threads might create a multimap for the given table. 
  However, the pendingRanges instance variable that's the subject of the race 
 condition is already a ConcurrentHashMap, and the race condition can be 
 avoided by using putIfAbsent, leaving the case where the table's map is 
 already initialized lock-free:
 private Multimap<Range, InetAddress> getPendingRangesMM(String table)
 {
     Multimap<Range, InetAddress> map = pendingRanges.get(table);
     if (map == null)
     {
         map = HashMultimap.create();
         Multimap<Range, InetAddress> fasterHorse = pendingRanges.putIfAbsent(table, map);
         if (fasterHorse != null)
         {
             // another thread beat us to creating the map, oh well.
             map = fasterHorse;
         }
     }
     return map;
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (CASSANDRA-1913) move CQL from avro to thrift

2010-12-28 Thread Eric Evans (JIRA)
move CQL from avro to thrift


 Key: CASSANDRA-1913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1913
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 0.8
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8


Ultimately I'd like to create a custom transport for CQL, but in the meantime 
it makes sense to use one of the existing RPC frameworks while concentrating on 
the language and implementation.  Of the two (Avro/Thrift), Thrift seems to 
make more sense due to momentum.

See also: http://thread.gmane.org/gmane.comp.db.cassandra.client.devel/36

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-1255) Explore interning keys and column names

2010-12-28 Thread Stu Hood (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975673#action_12975673
 ] 

Stu Hood commented on CASSANDRA-1255:
-

I looked into interning, but did not see any obvious performance improvement. 
The branch is small and understandable, though: 
https://github.com/stuhood/cassandra-old/commits/1255 , if somebody wants to 
run with it.

 Explore interning keys and column names
 ---

 Key: CASSANDRA-1255
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1255
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Stu Hood

 With multiple Memtables, key caches and row caches holding DecoratedKey 
 references, it could potentially be a huge memory savings (and relief to GC) 
 to intern DecoratedKeys. Taking the idea farther, for the skinny row pattern, 
 and for certain types of wide row patterns, interning of column names could 
 be very beneficial as well (although we would need to wrap the byte[]s in 
 something for hashCode/equals).
 This ticket should explore the benefits and overhead of interning.
 Google collections/guava MapMaker is a very convenient way to create this 
 type of cache: example call: 
 http://stackoverflow.com/questions/2865026/use-permgen-space-or-roll-my-own-intern-method/2865083#2865083
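
As a sketch of the idea being explored (illustrative only; Interner is a hypothetical
name and uses a plain ConcurrentMap, whereas the ticket suggests a MapMaker cache, which
could additionally use weak values so canonical entries remain collectable):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class Interner<T>
    {
        private final ConcurrentMap<T, T> canonical = new ConcurrentHashMap<T, T>();

        // Returns one canonical instance per distinct value, so equal keys held by
        // different memtables and caches can share a single object.
        public T intern(T value)
        {
            T prior = canonical.putIfAbsent(value, value);
            return prior == null ? value : prior;
        }
    }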

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1913) move CQL from avro to thrift

2010-12-28 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans updated CASSANDRA-1913:
--

Attachment: v1-0005-port-python-driver-avro-thrift.txt
v1-0004-port-java-driver-avro-thrift.txt
v1-0003-port-CQL-server-code-avro-thrift.txt
v1-0002-thrift-compiler-generated-code.txt
v1-0001-CASSANDRA-1913-move-RPC-code-generation-avro-thrift.txt

 move CQL from avro to thrift
 

 Key: CASSANDRA-1913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1913
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 0.8
Reporter: Eric Evans
Assignee: Eric Evans
Priority: Minor
 Fix For: 0.8

 Attachments: 
 v1-0001-CASSANDRA-1913-move-RPC-code-generation-avro-thrift.txt, 
 v1-0002-thrift-compiler-generated-code.txt, 
 v1-0003-port-CQL-server-code-avro-thrift.txt, 
 v1-0004-port-java-driver-avro-thrift.txt, 
 v1-0005-port-python-driver-avro-thrift.txt

   Original Estimate: 0h
  Remaining Estimate: 0h

 Ultimately I'd like to create a custom transport for CQL, but in the meantime 
 it makes sense to use one of the existing RPC frameworks while concentrating 
 on the language and implementation.  Of the two (Avro/Thrift), Thrift seems 
 to make more sense due to momentum.
 See also: http://thread.gmane.org/gmane.comp.db.cassandra.client.devel/36

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1374) Make snitches configurable at runtime

2010-12-28 Thread Jon Hermes (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Hermes updated CASSANDRA-1374:
--

Attachment: 1374-v4.txt

DES registers itself during construction.
Explicit call to ((DES)eps).unregisterMBean() added.

Braces on newlines removed.
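
For context, the register-in-constructor / explicit-unregister pattern being described
looks roughly like the following generic JMX sketch (the class name, interface, and
ObjectName are placeholders, not Cassandra's DynamicEndpointSnitch):

    // SelfRegisteringBeanMBean.java
    public interface SelfRegisteringBeanMBean
    {
        void unregisterMBean() throws Exception;
    }

    // SelfRegisteringBean.java
    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class SelfRegisteringBean implements SelfRegisteringBeanMBean
    {
        private final ObjectName name;

        public SelfRegisteringBean() throws Exception
        {
            name = new ObjectName("org.example:type=SelfRegisteringBean");
            // registers itself during construction, as the comment says DES does
            ManagementFactory.getPlatformMBeanServer().registerMBean(this, name);
        }

        // called explicitly when the bean is swapped out at runtime
        public void unregisterMBean() throws Exception
        {
            ManagementFactory.getPlatformMBeanServer().unregisterMBean(name);
        }
    }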

 Make snitches configurable at runtime
 -

 Key: CASSANDRA-1374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1374
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0 rc 1
Reporter: Jeremy Hanna
Assignee: Jon Hermes
 Fix For: 0.7.1

 Attachments: 1374-2.txt, 1374-rebase.txt, 1374-v4.txt, 1374.txt


 There needs to be the capability to configure snitches at runtime, even 
 though there is now a dynamic endpoint snitch - CASSANDRA-981.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (CASSANDRA-1374) Make snitches configurable at runtime

2010-12-28 Thread Jon Hermes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975675#action_12975675
 ] 

Jon Hermes edited comment on CASSANDRA-1374 at 12/28/10 6:16 PM:
-

DES registers itself during construction.
Explicit call to ((DES)eps).unregisterMBean() added.

Braces not on newlines removed, then added immediately afterward (on a newline, 
no less).

  was (Author: jhermes):
DES registers itself during construction.
Explicit call to ((DES)eps).unregisterMBean() added.

Braces on newlines removed.
  
 Make snitches configurable at runtime
 -

 Key: CASSANDRA-1374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1374
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.7.0 rc 1
Reporter: Jeremy Hanna
Assignee: Jon Hermes
 Fix For: 0.7.1

 Attachments: 1374-2.txt, 1374-rebase.txt, 1374-v4.txt, 1374.txt


 There needs to be the capability to configure snitches at runtime, even 
 though there is now a dynamic endpoint snitch - CASSANDRA-981.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (CASSANDRA-1255) Explore interning keys and column names

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-1255.
---

Resolution: Not A Problem

Thanks for investigating.

 Explore interning keys and column names
 ---

 Key: CASSANDRA-1255
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1255
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Stu Hood

 With multiple Memtables, key caches and row caches holding DecoratedKey 
 references, it could potentially be a huge memory savings (and relief to GC) 
 to intern DecoratedKeys. Taking the idea farther, for the skinny row pattern, 
 and for certain types of wide row patterns, interning of column names could 
 be very beneficial as well (although we would need to wrap the byte[]s in 
 something for hashCode/equals).
 This ticket should explore the benefits and overhead of interning.
 Google collections/guava MapMaker is a very convenient way to create this 
 type of cache: example call: 
 http://stackoverflow.com/questions/2865026/use-permgen-space-or-roll-my-own-intern-method/2865083#2865083

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1822) Row level coverage in LegacySSTableTest

2010-12-28 Thread Stu Hood (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stu Hood updated CASSANDRA-1822:


Attachment: 0.7-1822.tgz

Here is a rebased copy for 0.7.

 Row level coverage in LegacySSTableTest
 ---

 Key: CASSANDRA-1822
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1822
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Stu Hood
Priority: Minor
 Fix For: 0.7.1

 Attachments: 0.7-1822.tgz, 1822.tgz, legacy-sstables.tgz


 LegacySSTableTest should check compatibility of content within rows.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



svn commit: r1053481 - in /cassandra/branches/cassandra-0.7: CHANGES.txt src/java/org/apache/cassandra/db/Table.java

2010-12-28 Thread brandonwilliams
Author: brandonwilliams
Date: Tue Dec 28 23:55:32 2010
New Revision: 1053481

URL: 
http://svn.apache.org/viewvc?rev=1053481&view=rev
Log:
increase indexLocks for faster commitlog replay

Modified:
cassandra/branches/cassandra-0.7/CHANGES.txt
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java

Modified: cassandra/branches/cassandra-0.7/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/CHANGES.txt?rev=1053481&r1=1053480&r2=1053481&view=diff
==
--- cassandra/branches/cassandra-0.7/CHANGES.txt (original)
+++ cassandra/branches/cassandra-0.7/CHANGES.txt Tue Dec 28 23:55:32 2010
@@ -16,6 +16,7 @@ dev
(CASSANDRA-1871)
  * allow [LOCAL|EACH]_QUORUM to be used with non-NetworkTopology 
replication Strategies
+ * increased amount of index locks for faster commitlog replay
 
 
 0.7.0-rc3

Modified: 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java
URL: 
http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java?rev=1053481&r1=1053480&r2=1053481&view=diff
==
--- 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java 
(original)
+++ 
cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/db/Table.java 
Tue Dec 28 23:55:32 2010
@@ -262,7 +262,7 @@ public class Table
             throw new RuntimeException(e);
         }
 
-        indexLocks = new Object[DatabaseDescriptor.getConcurrentWriters() * 8];
+        indexLocks = new Object[DatabaseDescriptor.getConcurrentWriters() * 128];
         for (int i = 0; i < indexLocks.length; i++)
             indexLocks[i] = new Object();
         // create data directories.
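
The reason widening the array helps is classic lock striping: index updates pick a lock
by key hash, so with far more stripes than writer threads two concurrent updates rarely
collide on the same stripe. A generic sketch of the idiom (illustrative names, not
Cassandra's Table code):

    public class LockStripes
    {
        private final Object[] stripes;

        public LockStripes(int count)
        {
            stripes = new Object[count];
            for (int i = 0; i < stripes.length; i++)
                stripes[i] = new Object();
        }

        // map a key to its stripe; |hash % length| is always a valid index
        public Object lockFor(Object key)
        {
            return stripes[Math.abs(key.hashCode() % stripes.length)];
        }
    }

    // usage: synchronized (stripes.lockFor(rowKey)) { ... update the index ... }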




[jira] Commented: (CASSANDRA-1143) Nodetool gives cryptic errors when given a nonexistent keyspace arg

2010-12-28 Thread Ian Soboroff (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975698#action_12975698
 ] 

Ian Soboroff commented on CASSANDRA-1143:
-

Don't know. Gave up on Cassandra for Hbase. 







 Nodetool gives cryptic errors when given a nonexistent keyspace arg
 ---

 Key: CASSANDRA-1143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1143
 Project: Cassandra
  Issue Type: Wish
  Components: Tools
 Environment: Sun Java 1.6u20, Cassandra 0.6.2, CentOS 5.5.
Reporter: Ian Soboroff
Assignee: Joaquin Casares
Priority: Trivial
   Original Estimate: 1h
  Remaining Estimate: 1h

 I typoed the keyspace arg to 'nodetool repair', and got the following 
 exception:
 /usr/local/src/cassandra/bin/nodetool --host node4 repair DocDb
 Exception in thread "main" java.lang.RuntimeException: No replica strategy 
 configured for DocDb
 at 
 org.apache.cassandra.service.StorageService.getReplicationStrategy(StorageService.java:246)
 at 
 org.apache.cassandra.service.StorageService.constructRangeToEndPointMap(StorageService.java:466)
 at 
 org.apache.cassandra.service.StorageService.getRangeToAddressMap(StorageService.java:452)
 at 
 org.apache.cassandra.service.AntiEntropyService.getNeighbors(AntiEntropyService.java:145)
 at 
 org.apache.cassandra.service.StorageService.forceTableRepair(StorageService.java:1075)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
 at sun.rmi.transport.Transport$1.run(Transport.java:159)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 It would be better to report that the keyspace doesn't exist, rather than the 
 keyspace doesn't have a replication strategy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1895) Loadbalance during gossip issues leaves cluster in bad state

2010-12-28 Thread Stu Hood (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stu Hood updated CASSANDRA-1895:


Priority: Major  (was: Critical)
 Summary: Loadbalance during gossip issues leaves cluster in bad state  
(was: Loadbalance in trunk leaves cluster in bad state)

After a rebase and changes to the EC2 images I was using, I'm no longer able to 
reproduce this against either 0.7 or trunk.

Nick noticed some gossip related problems in the attached logs, so I'm going to 
chalk this up to _either_ a bad rebase, or problems related to bootstrapping 
during gossip problems. Nick: could you chime in with details, and whether you 
think those gossip issues might be worth pursuing?

 Loadbalance during gossip issues leaves cluster in bad state
 

 Key: CASSANDRA-1895
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1895
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Stu Hood
Assignee: Stu Hood
 Fix For: 0.8

 Attachments: logs.tgz, ring-views.txt


 Running loadbalance against a node in a 4 node cluster leaves gossip in a 
 wonky state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-982) read repair on quorum consistencylevel

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-982:
-

Attachment: 0003-rename-QuorumResponseHandler-ReadCallback.txt
0002-implement-read-repair-as-a-second-resolve-after-the-in.txt
0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt

 read repair on quorum consistencylevel
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 982-resolve-digests-v2.txt

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So what we'd like to do is two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (CASSANDRA-982) read repair on quorum consistencylevel

2010-12-28 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975724#action_12975724
 ] 

Jonathan Ellis commented on CASSANDRA-982:
--

03
rename QuorumResponseHandler -> ReadCallback

02
implement read repair as a second resolve after the initial one for the 
data, using RepairCallback

01
r/m SP.weakRead, rename strongRead to fetchRows.  read repair is broken (no 
ConsistencyCheckers are called)


 read repair on quorum consistencylevel
 --

 Key: CASSANDRA-982
 URL: https://issues.apache.org/jira/browse/CASSANDRA-982
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 0.7.1

 Attachments: 
 0001-better-digest-checking-for-ReadResponseResolver.patch, 
 0001-r-m-SP.weakRead-rename-strongRead-to-fetchRows.-read-r.txt, 
 0002-implement-read-repair-as-a-second-resolve-after-the-in.txt, 
 0002-quorum-only-read.txt, 
 0003-rename-QuorumResponseHandler-ReadCallback.txt, 982-resolve-digests-v2.txt

   Original Estimate: 6h
  Remaining Estimate: 6h

 CASSANDRA-930 made read repair fuzzy optional, but this only helps with 
 ConsistencyLevel.ONE:
 - Quorum reads always send requests to all nodes
 - only the first Quorum's worth of responses get compared
 So what we'd like to do is two changes:
 - only send read requests to the closest R live nodes
 - if read repair is enabled, also compare results from the other nodes in the 
 background

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (CASSANDRA-1438) Stream*Manager doesn't clean up broken Streams

2010-12-28 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1438:
--

Fix Version/s: (was: 0.6.9)
   (was: 0.7.1)

 Stream*Manager doesn't clean up broken Streams
 --

 Key: CASSANDRA-1438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1438
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.6
Reporter: Nick Bailey
 Attachments: 1438.txt


 StreamInManager and StreamOutManager only remove stream contexts/managers 
 when a stream completes successfully.  Any broken streams will cause objects 
 to hang around and never get garbage collected.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.