[jira] [Commented] (CASSANDRA-5474) failure when passing null parameter to prepared statement

2013-04-16 Thread Pierre Chalamet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632636#comment-13632636
 ] 

Pierre Chalamet commented on CASSANDRA-5474:


Really not sure.

Here is the frame I'm sending; it fails.

{code}
// header
1, 0, 125, 10

// len
0, 0, 0, 34

// body

// requestId
0, 16, 
89, 179, 214, 186, 237, 103, 213, 192, 163, 206, 210, 158, 187, 66, 119, 197

// nb columns
0, 2, 

// a = 1
0, 0, 0, 4
0, 0, 0, 1

// b = null
255, 255, 255, 255

// CL
0, 4
{code}
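
For reference, a minimal sketch (plain Java, not taken from any driver) of how an EXECUTE body like the one above can be assembled. The only point of interest is the null handling: a null bind value is written as the [int] length -1 with no bytes following, which is exactly how b is encoded in the dump.

{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ExecuteBody
{
    // queryId: the prepared statement id; values: bound values, null meaning a null parameter
    public static byte[] build(byte[] queryId, byte[][] values, int consistency) throws IOException
    {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeShort(queryId.length);      // [short bytes] prepared statement id
        out.write(queryId);
        out.writeShort(values.length);       // number of bound values
        for (byte[] value : values)
        {
            if (value == null)
                out.writeInt(-1);            // null parameter: length -1, no bytes follow
            else
            {
                out.writeInt(value.length);  // [bytes]: length followed by the value
                out.write(value);
            }
        }
        out.writeShort(consistency);         // consistency level, e.g. 0x0004 = QUORUM
        return buffer.toByteArray();
    }
}
{code}

Called with the 16-byte query id, the value {0, 0, 0, 1} and null, this produces the 34-byte body shown above.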

 failure when passing null parameter to prepared statement
 -

 Key: CASSANDRA-5474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5474
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: windows 8 x64, 1.7.0_11-b21 x64
Reporter: Pierre Chalamet

 I have a failure when passing a null parameter to the prepared statement 
 below when going through the CQL 3 binary protocol:
 {code}
 Exec: CREATE KEYSPACE Tests WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 Exec: CREATE TABLE Tests.AllTypes (a int, b int, primary key (a))
 Prepare: insert into Tests.AllTypes (a, b) values (?, ?)
 {code}
 Passing a=1 and b=null causes the following error:
 {code}
 DEBUG 23:07:23,315 Responding: RESULT PREPARED 
 59b3d6baed67d5c0a3ced29ebb4277c5 [a(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type][b(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type]
 DEBUG 23:07:23,292 Compaction buckets are []
 DEBUG 23:07:23,336 Received: EXECUTE 59b3d6baed67d5c0a3ced29ebb4277c5 with 2 
 values at consistency QUORUM
 ERROR 23:07:23,338 Unexpected exception during request
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.marshal.Int32Type.validate(Int32Type.java:95)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:257)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:282)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.mutationForKey(UpdateStatement.java:250)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.getMutations(UpdateStatement.java:133)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:92)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:254)
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:122)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 DEBUG 23:07:23,337 No tasks available
 DEBUG 23:07:23,341 request complete
 DEBUG 23:07:23,343 Responding: ERROR SERVER_ERROR: 
 java.lang.NullPointerException
 {code}
 When serializing the value for b, a byte array of length -1 is transmitted 
 (according to the spec):
 {code}
 [bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
 no byte should follow and the value represented is `null`.
 {code}
 CASSANDRA-5081 added support for null params. Am I doing something wrong 
 there? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5476) Exceptions in 1.1 nodes with 1.2 nodes in ring

2013-04-16 Thread John Watson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Watson updated CASSANDRA-5476:
---

Affects Version/s: 1.1.9
   1.2.3

 Exceptions in 1.1 nodes with 1.2 nodes in ring
 --

 Key: CASSANDRA-5476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5476
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.9, 1.2.3
Reporter: John Watson

 As 1.1.9 nodes were being upgraded to 1.2.3 nodes, the 1.1.9 nodes started 
 having this exception:
 Exception in thread Thread[RequestResponseStage:19496,5,main]
 java.io.IOError: java.io.EOFException
 at 
 org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:71)
 at 
 org.apache.cassandra.service.ReadCallback.response(ReadCallback.java:155)
 at 
 org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.EOFException
 at java.io.DataInputStream.readFully(DataInputStream.java:180)
 at 
 org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:100)
 at 
 org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:81)
 at 
 org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:64)
 ... 6 more
 As more 1.2.3 nodes were upgraded, the 1.2.3 nodes began logging for 1.1.9 
 node IPs:
 Unable to store hint for host with missing ID, /10.37.62.71 (old node?)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5476) Exceptions in 1.1 nodes with 1.2 nodes in ring

2013-04-16 Thread John Watson (JIRA)
John Watson created CASSANDRA-5476:
--

 Summary: Exceptions in 1.1 nodes with 1.2 nodes in ring
 Key: CASSANDRA-5476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5476
 Project: Cassandra
  Issue Type: Bug
Reporter: John Watson


As 1.1.9 nodes were being upgraded to 1.2.3 nodes, the 1.1.9 nodes started 
having this exception:

Exception in thread Thread[RequestResponseStage:19496,5,main]
java.io.IOError: java.io.EOFException
at 
org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:71)
at 
org.apache.cassandra.service.ReadCallback.response(ReadCallback.java:155)
at 
org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at 
org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:100)
at 
org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:81)
at 
org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:64)
... 6 more

As more 1.2.3 nodes were upgraded, the 1.2.3 nodes began logging for 1.1.9 node 
IPs:

Unable to store hint for host with missing ID, /10.37.62.71 (old node?)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5477) Nodetool throws IOException when snapshot name already exists

2013-04-16 Thread Geert Schuring (JIRA)
Geert Schuring created CASSANDRA-5477:
-

 Summary: Nodetool throws IOException when snapshot name already 
exists
 Key: CASSANDRA-5477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5477
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2.2
 Environment: MacOS, Datastax Cassandra 1.2.2
Reporter: Geert Schuring


When requesting a snapshot, nodetool throws an exception with a full 
stack trace when a snapshot with the requested name already exists.

Instead it should just print a single line stating the fact that a snapshot 
with that name already exists.
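
As a rough illustration only (this is not the actual nodetool source, and the snapshot call is just a stand-in), the requested behaviour amounts to catching the failure and printing a single line:

{code}
import java.util.concurrent.Callable;

public class SnapshotCommand
{
    // snapshotRequest stands in for whatever call nodetool makes to take the snapshot
    static void run(Callable<Void> snapshotRequest, String tag)
    {
        try
        {
            snapshotRequest.call();
        }
        catch (Exception e)
        {
            // one explanatory line instead of a full stack trace
            System.err.println("Could not take snapshot '" + tag + "': " + e.getMessage());
            System.exit(1);
        }
    }
}
{code}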

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5478) Nodetool clearsnapshot incorrectly reports to have requested a snapshot

2013-04-16 Thread Geert Schuring (JIRA)
Geert Schuring created CASSANDRA-5478:
-

 Summary: Nodetool clearsnapshot incorrectly reports to have 
requested a snapshot
 Key: CASSANDRA-5478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5478
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2.2
 Environment: MacOS, Datastax Cassandra 1.2.2
Reporter: Geert Schuring


When requesting a snapshot, nodetool throws an exception with a full 
stack trace when a snapshot with the requested name already exists.

Instead it should just print a single line stating the fact that a snapshot 
with that name already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5478) Nodetool clearsnapshot incorrectly reports to have requested a snapshot

2013-04-16 Thread Geert Schuring (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geert Schuring updated CASSANDRA-5478:
--

Description: 
When running nodetool clearsnapshot, all existing snapshots are removed, but 
the following message is printed:

./nodetool clearsnapshot
Requested snapshot for: all keyspaces 

Instead it should just print a single line stating that all snapshots have been 
removed.

  was:
When requesting a snapshot, nodetool throws an exception with a full 
stack trace when a snapshot with the requested name already exists.

Instead it should just print a single line stating the fact that a snapshot 
with that name already exists.


 Nodetool clearsnapshot incorrectly reports to have requested a snapshot
 ---

 Key: CASSANDRA-5478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5478
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2.2
 Environment: MacOS, Datastax Cassandra 1.2.2
Reporter: Geert Schuring
  Labels: exception-reporting, lhf, nodetool
   Original Estimate: 1h
  Remaining Estimate: 1h

 When running nodetool clearsnapshot, all existing snapshots are removed, but 
 the following message is printed:
 ./nodetool clearsnapshot
 Requested snapshot for: all keyspaces 
 Instead it should just print a single line stating that all snapshots have 
 been removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5478) Nodetool clearsnapshot incorrectly reports to have requested a snapshot

2013-04-16 Thread Geert Schuring (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geert Schuring updated CASSANDRA-5478:
--

Issue Type: Bug  (was: Improvement)

 Nodetool clearsnapshot incorrectly reports to have requested a snapshot
 ---

 Key: CASSANDRA-5478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5478
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.2
 Environment: MacOS, Datastax Cassandra 1.2.2
Reporter: Geert Schuring
  Labels: exception-reporting, lhf, nodetool
   Original Estimate: 1h
  Remaining Estimate: 1h

 When running nodetool clearsnapshot, all existing snapshots are removed, but 
 the following message is printed:
 ./nodetool clearsnapshot
 Requested snapshot for: all keyspaces 
 Instead it should just print a single line stating that all snapshots have 
 been removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5479) hyper log log (hyperloglog) probabilistic counters

2013-04-16 Thread Nikolay (JIRA)
Nikolay created CASSANDRA-5479:
--

 Summary: hyper log log (hyperloglog) probabilistic counters
 Key: CASSANDRA-5479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5479
 Project: Cassandra
  Issue Type: Wish
Reporter: Nikolay
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5479) hyper log log (hyperloglog) probabilistic counters

2013-04-16 Thread Nikolay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay updated CASSANDRA-5479:
---

Description: 
It would be nice if we could include probabilistic counters in Cassandra.
People do not use them because they do not know their power.


 hyper log log (hyperloglog) probabilistic counters
 --

 Key: CASSANDRA-5479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5479
 Project: Cassandra
  Issue Type: Wish
Reporter: Nikolay
Priority: Minor

 It would be nice if we could include probabilistic counters in Cassandra.
 People do not use them because they do not know their power.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5479) hyper log log (hyperloglog) probabilistic counters

2013-04-16 Thread Nikolay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay updated CASSANDRA-5479:
---

Description: 
It would be nice if we could include probabilistic counters in Cassandra.
People do not use them because they do not know their power.

HyperLogLog counters can also be merged, so in case of node disconnections
it is easy to end up with different versions on each node. When a node comes back
online, it just merges the data from the other nodes.

The same can be used when adding data: similarly to Counter_CF,
HyperLogLog only needs to read a single replica and add there.

Adding can even be done without reading (just drop the new item somewhere),
but then reading will be much slower.

  was:
It would be nice if we could include probabilistic counters in Cassandra.
People do not use them because they do not know their power.



 hyper log log (hyperloglog) probabilistic counters
 --

 Key: CASSANDRA-5479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5479
 Project: Cassandra
  Issue Type: Wish
Reporter: Nikolay
Priority: Minor

 It would be nice if we could include probabilistic counters in Cassandra.
 People do not use them because they do not know their power.
 HyperLogLog counters can also be merged, so in case of node disconnections
 it is easy to end up with different versions on each node. When a node comes
 back online, it just merges the data from the other nodes.
 The same can be used when adding data: similarly to Counter_CF,
 HyperLogLog only needs to read a single replica and add there.
 Adding can even be done without reading (just drop the new item somewhere),
 but then reading will be much slower.
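
A toy illustration of why the merge property matters (this is not a Cassandra API, just the general idea behind the request): each HyperLogLog register stores the maximum rank observed for its bucket, so merging two counters is a per-register max, which is what would make reconciling replicas after a disconnection cheap.

{code}
public class HllMerge
{
    // Merge two HyperLogLog register arrays of equal length by taking the
    // per-register maximum; the result estimates the cardinality of the union.
    static byte[] merge(byte[] a, byte[] b)
    {
        if (a.length != b.length)
            throw new IllegalArgumentException("register arrays must have the same size");
        byte[] merged = new byte[a.length];
        for (int i = 0; i < a.length; i++)
            merged[i] = a[i] > b[i] ? a[i] : b[i];
        return merged;
    }
}
{code}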

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5480) Case insensitive cqlsh

2013-04-16 Thread Kévin LOVATO (JIRA)
Kévin LOVATO created CASSANDRA-5480:
---

 Summary: Case insensitive cqlsh
 Key: CASSANDRA-5480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5480
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.2
Reporter: Kévin LOVATO


cqlsh doesn't seem to be case sensitive for strategy_options, so the following 
query:
{code} ALTER KEYSPACE MyKeyspace WITH replication = {'class': 
'NetworkTopologyStrategy', 'Paris-CEN' : 1 };
{code}
Modified my keyspace with strategy_options 'paris-cen' which differs from what 
is configured in my {{cassandra-topology.properties}} and made subsequent 
queries to this keyspace fail with an UnavailableException.
I could fix my issue by updating the keyspace configuration in code, but it 
would be nice to be able to do it using cqlsh.
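
For what it's worth, a minimal sketch of the "fix it by code" workaround mentioned above, assuming the DataStax Java driver (any client that does not lowercase the statement should behave the same); the point is only that the 'Paris-CEN' datacenter name reaches the server with its case preserved:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class FixKeyspaceOptions
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // The datacenter name is sent exactly as written, matching cassandra-topology.properties
        session.execute("ALTER KEYSPACE MyKeyspace WITH replication = "
                      + "{'class': 'NetworkTopologyStrategy', 'Paris-CEN' : 1};");
        cluster.shutdown();
    }
}
{code}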


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5481) CQLSH exception handling could leave a session in a bad state

2013-04-16 Thread Jordan Pittier (JIRA)
Jordan Pittier created CASSANDRA-5481:
-

 Summary: CQLSH exception handling could leave a session in a bad 
state
 Key: CASSANDRA-5481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4
 Environment: cqlsh 2.3.0 | Cassandra 1.2.4 | CQL spec 3.0.0 | Thrift 
protocol 19.35.0
Reporter: Jordan Pittier
Priority: Minor


Playing with CTRL+C in a cqlsh session can leave the (Thrift|Native) connection 
in a bad state.

To reproduce:
1) Run a long-running COPY FROM command (COPY test (k, v) FROM '/tmp/test.csv')
2) Interrupt the importer with CTRL+C

Repeat steps 1 and 2 until you start seeing weird things in the cql shell (see 
attached screenshot).

The reason is, I believe, that the connection (and the cursor) is not correctly 
closed and reopened on interruption.

I am working to propose a fix.

Jordan

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5481) CQLSH exception handling could leave a session in a bad state

2013-04-16 Thread Jordan Pittier (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Pittier updated CASSANDRA-5481:
--

Attachment: CQLSession.png

Broken CQL shell session

 CQLSH exception handling could leave a session in a bad state
 -

 Key: CASSANDRA-5481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4
 Environment: cqlsh 2.3.0 | Cassandra 1.2.4 | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0
Reporter: Jordan Pittier
Priority: Minor
 Attachments: CQLSession.png


 Playing with CTRL+C in a cqlsh session can leave the (Thrift|Native) 
 connection in a bad state.
 To reproduce:
 1) Run a long-running COPY FROM command (COPY test (k, v) FROM 
 '/tmp/test.csv')
 2) Interrupt the importer with CTRL+C
 Repeat steps 1 and 2 until you start seeing weird things in the cql shell (see 
 attached screenshot).
 The reason is, I believe, that the connection (and the cursor) is not correctly 
 closed and reopened on interruption.
 I am working to propose a fix.
 Jordan

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5474) failure when passing null parameter to prepared statement

2013-04-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632867#comment-13632867
 ] 

Aleksey Yeschenko commented on CASSANDRA-5474:
--

Interesting. The frame indeed looks correct, and an analogous one seems to work 
all right for me (both 1.2.4 and the latest cassandra-1.2 branch).

{noformat}
1,0,0,10,
  0,0,0,34,
  0,16,228,43,12,34,120,160,113,48,251,103,244,149,202,119,0,103,
  0,2,
  0,0,0,4,0,0,0,1,
  255,255,255,255,
  0,4
{noformat}

Frame id and query id are different, but other than that the two frames are 
identical.


 failure when passing null parameter to prepared statement
 -

 Key: CASSANDRA-5474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5474
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: windows 8 x64, 1.7.0_11-b21 x64
Reporter: Pierre Chalamet

 I have a failure when passing a null parameter to the prepared statement 
 below when going through the CQL 3 binary protocol:
 {code}
 Exec: CREATE KEYSPACE Tests WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 Exec: CREATE TABLE Tests.AllTypes (a int, b int, primary key (a))
 Prepare: insert into Tests.AllTypes (a, b) values (?, ?)
 {code}
 Passing a=1 and b=null causes the following error:
 {code}
 DEBUG 23:07:23,315 Responding: RESULT PREPARED 
 59b3d6baed67d5c0a3ced29ebb4277c5 [a(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type][b(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type]
 DEBUG 23:07:23,292 Compaction buckets are []
 DEBUG 23:07:23,336 Received: EXECUTE 59b3d6baed67d5c0a3ced29ebb4277c5 with 2 
 values at consistency QUORUM
 ERROR 23:07:23,338 Unexpected exception during request
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.marshal.Int32Type.validate(Int32Type.java:95)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:257)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:282)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.mutationForKey(UpdateStatement.java:250)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.getMutations(UpdateStatement.java:133)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:92)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:254)
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:122)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 DEBUG 23:07:23,337 No tasks available
 DEBUG 23:07:23,341 request complete
 DEBUG 23:07:23,343 Responding: ERROR SERVER_ERROR: 
 java.lang.NullPointerException
 {code}
 When serializing the value for b, a byte array of length -1 is transmitted 
 (according to the spec):
 {code}
 [bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
 no byte should follow and the value represented is `null`.
 {code}
 CASSANDRA-5081 added support for null params. Am I doing something wrong 
 there? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5474) failure when passing null parameter to prepared statement

2013-04-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632878#comment-13632878
 ] 

Aleksey Yeschenko commented on CASSANDRA-5474:
--

Judging by your stack trace, you don't really have 1.2.4, or so it seems. I 
suggest you check out the cassandra-1.2.4 tag or pull the latest cassandra-1.2 
branch and build it yourself, then see if you still have the issue (you 
shouldn't).

 failure when passing null parameter to prepared statement
 -

 Key: CASSANDRA-5474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5474
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: windows 8 x64, 1.7.0_11-b21 x64
Reporter: Pierre Chalamet

 I have a failure when passing a null parameter to the prepared statement 
 below when going through the CQL 3 binary protocol:
 {code}
 Exec: CREATE KEYSPACE Tests WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 Exec: CREATE TABLE Tests.AllTypes (a int, b int, primary key (a))
 Prepare: insert into Tests.AllTypes (a, b) values (?, ?)
 {code}
 Passing a=1 and b=null causes the following error:
 {code}
 DEBUG 23:07:23,315 Responding: RESULT PREPARED 
 59b3d6baed67d5c0a3ced29ebb4277c5 [a(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type][b(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type]
 DEBUG 23:07:23,292 Compaction buckets are []
 DEBUG 23:07:23,336 Received: EXECUTE 59b3d6baed67d5c0a3ced29ebb4277c5 with 2 
 values at consistency QUORUM
 ERROR 23:07:23,338 Unexpected exception during request
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.marshal.Int32Type.validate(Int32Type.java:95)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:257)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:282)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.mutationForKey(UpdateStatement.java:250)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.getMutations(UpdateStatement.java:133)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:92)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:254)
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:122)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 DEBUG 23:07:23,337 No tasks available
 DEBUG 23:07:23,341 request complete
 DEBUG 23:07:23,343 Responding: ERROR SERVER_ERROR: 
 java.lang.NullPointerException
 {code}
 When serializing the value for b, a byte array of length -1 is transmitted 
 (according to the spec):
 {code}
 [bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
 no byte should follow and the value represented is `null`.
 {code}
 CASSANDRA-5081 added support for null params. Am I doing something wrong 
 there? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5482) Incorrect query created when fetching Metadata information

2013-04-16 Thread Frederico Ramos (JIRA)
Frederico Ramos created CASSANDRA-5482:
--

 Summary: Incorrect query created when fetching Metadata information
 Key: CASSANDRA-5482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5482
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
 Environment: Suse Linux 11.4 x86_64
Java JDK 1.6.0_41 64-bits
Reporter: Frederico Ramos


When calling the {{getColumns()}} method from the {{DatabaseMetaData}} class, 
leaving the first parameter ({{catalog}}) NULL and specifying only the schema, 
table and column names, the internal query created by the metadata class is 
incorrect. It generates something like:

{{SELECT keyspace_name, columnfamily_name, column_name, component_index, 
index_name, index_options, index_type, validator FROM system.schema_columns  
WHERE AND columnfamily_name = ''column_name = ''  ALLOW FILTERING;}}
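
As a hypothetical sketch of the kind of guard the query builder appears to be missing (the names here are illustrative, not the actual driver code): each predicate, and the joining WHERE/AND, should only be appended when the corresponding filter was actually supplied, so a null catalog cannot leave a dangling {{WHERE AND}}.

{code}
import java.util.ArrayList;
import java.util.List;

public class ColumnsQueryBuilder
{
    public static String build(String keyspace, String table, String column)
    {
        List<String> predicates = new ArrayList<String>();
        if (keyspace != null)
            predicates.add("keyspace_name = '" + keyspace + "'");
        if (table != null)
            predicates.add("columnfamily_name = '" + table + "'");
        if (column != null)
            predicates.add("column_name = '" + column + "'");

        StringBuilder query = new StringBuilder(
            "SELECT keyspace_name, columnfamily_name, column_name, component_index, "
            + "index_name, index_options, index_type, validator FROM system.schema_columns");
        // Append WHERE only for the first supplied filter, AND for the rest
        for (int i = 0; i < predicates.size(); i++)
            query.append(i == 0 ? " WHERE " : " AND ").append(predicates.get(i));
        return query.append(" ALLOW FILTERING;").toString();
    }
}
{code}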

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5477) Nodetool throws IOException when snapshot name already exists

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5477.
---

Resolution: Duplicate

 Nodetool throws IOException when snapshot name already exists
 -

 Key: CASSANDRA-5477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5477
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Affects Versions: 1.2.2
 Environment: MacOS, Datastax Cassandra 1.2.2
Reporter: Geert Schuring
  Labels: exception-reporting, lhf, nodetool
   Original Estimate: 1h
  Remaining Estimate: 1h

 When requesting a snapshot, nodetool throws an exception with a full 
 stack trace when a snapshot with the requested name already exists.
 Instead it should just print a single line stating the fact that a snapshot 
 with that name already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5479) hyper log log (hyperloglog) probabilistic counters

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5479.
---

Resolution: Won't Fix

Already noted as a feature request for CASSANDRA-4775.

 hyper log log (hyperloglog) probabilistic counters
 --

 Key: CASSANDRA-5479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5479
 Project: Cassandra
  Issue Type: Wish
Reporter: Nikolay
Priority: Minor

 It would be nice if we could include probabilistic counters in Cassandra.
 People do not use them because they do not know their power.
 HyperLogLog counters can also be merged, so in case of node disconnections
 it is easy to end up with different versions on each node. When a node comes
 back online, it just merges the data from the other nodes.
 The same can be used when adding data: similarly to Counter_CF,
 HyperLogLog only needs to read a single replica and add there.
 Adding can even be done without reading (just drop the new item somewhere),
 but then reading will be much slower.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5480) Case insensitive cqlsh

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5480.
---

Resolution: Duplicate

Fixed in CASSANDRA-5292.  As a rule of thumb, it is good practice to run the 
latest release before filing bug reports.

 Case insensitive cqlsh
 --

 Key: CASSANDRA-5480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5480
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.2
Reporter: Kévin LOVATO

 cqlsh doesn't seem to be case sensitive for strategy_options, so the 
 following query:
 {code} ALTER KEYSPACE MyKeyspace WITH replication = {'class': 
 'NetworkTopologyStrategy', 'Paris-CEN' : 1 };
 {code}
 Modified my keyspace with strategy_options 'paris-cen' which differs from 
 what is configured in my {{cassandra-topology.properties}} and made 
 subsequent queries to this keyspace fail with an UnavailableException.
 I could fix my issue by updating the keyspace configuration in code, but it 
 would be nice to be able to do it using cqlsh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5482) Incorrect query created when fetching Metadata information

2013-04-16 Thread Frederico Ramos (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frederico Ramos updated CASSANDRA-5482:
---

Attachment: fix.patch

The fix was implemented using the trunk branch. I did not validate if the 
error occurred in previous versions.

 Incorrect query created when fetching Metadata information
 --

 Key: CASSANDRA-5482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5482
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
 Environment: Suse Linux 11.4 x86_64
 Java JDK 1.6.0_41 64-bits
Reporter: Frederico Ramos
 Attachments: fix.patch


 When calling the {{getColumns()}} method from the {{DatabaseMetaData}} class, 
 leaving the first parameter ({{catalog}}) NULL and specifying only the 
 schema, table and column names, the internal query created by the metadata 
 class is incorrect. It generates something like:
 {{SELECT keyspace_name, columnfamily_name, column_name, component_index, 
 index_name, index_options, index_type, validator FROM system.schema_columns  
 WHERE AND columnfamily_name = ''column_name = ''  ALLOW FILTERING;}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5482) Incorrect query created when fetching Metadata information

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5482.
---

Resolution: Invalid

There is no DatabaseMetadata class in the Cassandra tree.  Please file this 
with the project that you got this code from.

 Incorrect query created when fetching Metadata information
 --

 Key: CASSANDRA-5482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5482
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
 Environment: Suse Linux 11.4 x86_64
 Java JDK 1.6.0_41 64-bits
Reporter: Frederico Ramos
 Attachments: fix.patch


 When calling the {{getColumns()}} method from the {{DatabaseMetaData}} class, 
 leaving the first parameter ({{catalog}}) NULL and specifying only the 
 schema, table and column names, the internal query created by the metadata 
 class is incorrect. It generates something like:
 {{SELECT keyspace_name, columnfamily_name, column_name, component_index, 
 index_name, index_options, index_type, validator FROM system.schema_columns  
 WHERE AND columnfamily_name = ''column_name = ''  ALLOW FILTERING;}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (CASSANDRA-5482) Incorrect query created when fetching Metadata information

2013-04-16 Thread Frederico Ramos (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frederico Ramos updated CASSANDRA-5482:
---

Comment: was deleted

(was: The fix was implemented using the trunk branch. I did not validate if 
the error occurred in previous versions.)

 Incorrect query created when fetching Metadata information
 --

 Key: CASSANDRA-5482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5482
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
 Environment: Suse Linux 11.4 x86_64
 Java JDK 1.6.0_41 64-bits
Reporter: Frederico Ramos
 Attachments: fix.patch


 When calling the {{getColumns()}} method from the {{DatabaseMetaData}} class, 
 leaving the first parameter ({{catalog}}) NULL and specifying only the 
 schema, table and column names, the internal query created by the metadata 
 class is incorrect. It generates something like:
 {{SELECT keyspace_name, columnfamily_name, column_name, component_index, 
 index_name, index_options, index_type, validator FROM system.schema_columns  
 WHERE AND columnfamily_name = ''column_name = ''  ALLOW FILTERING;}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[Cassandra Wiki] Update of ContributorsGroup by JonathanEllis

2013-04-16 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ContributorsGroup page has been changed by JonathanEllis:
http://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=9&rev2=10

   * LukasGutschmidt
   * LukasWingerberg
   * MakiWatanabe
+  * MarcusEriksson
   * MarkWatson
   * MatthewDennis
   * NickBailey


[Cassandra Wiki] Update of Committers by MarcusEriksson

2013-04-16 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by MarcusEriksson:
http://wiki.apache.org/cassandra/Committers?action=diff&rev1=33&rev2=34

  ||Yuki Morishita||May 2012||Datastax
  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
  ||Jason Brown||Feb 2013||Netflix|| ||
+ ||Marcus Eriksson||April 2013||Spotify|| ||
  


[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632933#comment-13632933
 ] 

Jonathan Ellis commented on CASSANDRA-5424:
---

Some questions:
- Were we relying on the Set behavior to de-duplicate entries in {{replicas}} 
before copying it into an ArrayList at the end, or was that just a case of 
being over-cautious?
- Why don't we need to check {{ranges.size > 0}} any more in 
{{forceRepairAsync}}?
- Do we need to fix other uses of {{tokenMetadata.getPrimaryRangesFor}} such as 
{{SS.sampleKeyRange}}?
- Can we use {{getCachedEndpoints}} instead of {{calculateNaturalEndpoints}}?

Also:
- It's probably worth adding some comments to {{getPrimaryRangesForEndpoint}} 
-- superficially, it looks like it is incorrect since it is still using the 
non-Strategy-aware {{metadata.getPredecessor}}, but after working some examples 
I am satisfied that it does the right thing, as it does here.
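
To make the strategy question above concrete, here is a toy model (deliberately not Cassandra's StorageService/TokenMetadata API) of what a schema-aware primary range calculation looks like: a node is "primary" for a range, with respect to a given keyspace, only if it is the first replica that keyspace's replication strategy returns for the range.

{code}
import java.util.ArrayList;
import java.util.List;

public class PrimaryRanges
{
    // Stand-in for a keyspace's replication strategy; the first element returned
    // is treated as the primary replica for the range ending at the given token.
    interface ReplicationStrategy
    {
        List<String> replicasFor(long rangeEndToken);
    }

    // Returns the (left, right] token ranges for which 'endpoint' is the primary
    // replica under the given strategy. ringTokens must be sorted.
    static List<long[]> primaryRangesFor(String endpoint, List<Long> ringTokens,
                                         ReplicationStrategy strategy)
    {
        List<long[]> primary = new ArrayList<long[]>();
        for (int i = 0; i < ringTokens.size(); i++)
        {
            long left = ringTokens.get((i + ringTokens.size() - 1) % ringTokens.size());
            long right = ringTokens.get(i);
            List<String> replicas = strategy.replicasFor(right);
            if (!replicas.isEmpty() && replicas.get(0).equals(endpoint))
                primary.add(new long[]{ left, right });
        }
        return primary;
    }
}
{code}

With a keyspace that is only replicated in one DC, a node outside that DC is never the first replica, so its ranges fall to nodes that actually hold the data, which is the behaviour the ticket is after.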


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) 

[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632946#comment-13632946
 ] 

Yuki Morishita commented on CASSANDRA-5424:
---

bq. Were we relying on the Set behavior to de-duplicate entries in replicas 
before copying it into an ArrayList at the end, or was that just a case of 
being over-cautious?

hmm, I think we need to check if we have duplicates.

bq. Why don't we need to check ranges.size > 0 any more in forceRepairAsync?

I added an 'isEmpty' check at the beginning instead. Without that, the repair 
command hangs on the client side.

bq. Do we need to fix other uses of tokenMetadata.getPrimaryRangesFor such as 
SS.sampleKeyRange?

I was not sure if we need to fix those. It looks like sampleKeyRange is only used 
by nodetool.

bq. Can we use getCachedEndpoints instead of calculateNaturalEndpoints?

Probably we can use getNaturalEndpoints, which uses cached endpoints.

I'll brush up my patch with comments and unit tests.
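
On the de-duplication point, a trivial illustration of what dropping duplicates before building the final list looks like (whether it is actually needed here is exactly what is being asked above): copying through a LinkedHashSet removes duplicate replicas while preserving their order.

{code}
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupReplicas
{
    // Remove duplicate replica addresses while preserving their original order.
    static List<String> dedup(List<String> replicas)
    {
        return new ArrayList<String>(new LinkedHashSet<String>(replicas));
    }
}
{code}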

 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 

[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632960#comment-13632960
 ] 

Jonathan Ellis commented on CASSANDRA-5424:
---

bq. It looks like sampleKeyRange is only used by nodetool

It's a minor problem (looks like it's mostly there to support OPP: 
CASSANDRA-2917) but we should probably fix it.

Also, it looks like Bootstrap is using it to determine where to bisect ranges.  
We should fix that one way or another (where "another" might be to get rid of 
token selection on bootstrap and force people to either use vnodes or specify 
token manually).  A separate ticket as a followup is fine here IMO.

 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
 merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
 root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
 root@ip-10-72-111-225:/home/ubuntu# 
 ---
 # nodetool -h 

[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632970#comment-13632970
 ] 

Ryan McGuire commented on CASSANDRA-4860:
-

I increased my key_cache_size_in_mb and did in fact get better results:

{code}
Averages from the middle 80% of values:
interval_op_rate  : 27848
interval_key_rate : 27848
latency median: 0.7
latency 95th percentile   : 1.4
latency 99.9th percentile : 22.4
Total operation time  : 00:01:20
{code}

This is roughly equal to the original messure() method in read performance; I'm 
happy with it!

 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB), so I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632971#comment-13632971
 ] 

Jonathan Ellis commented on CASSANDRA-5424:
---

bq. get rid of token selection on bootstrap and force people to either use 
vnodes or specify token manually

To clarify: this would be best done in 2.0.

 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it, range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between .38 
 node and .236 node until I run a repair, no -pr, on .38
 It seems like primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
 merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
 root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
 root@ip-10-72-111-225:/home/ubuntu# 
 ---
 # nodetool -h 10.46.113.236  repair -pr Keyspace1 Standard1
 [2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:48:02,032] Repair session dcb91540-9c75-11e2--a839ee2ccbef 
 for range 
 (42535295865117307932921825928971026432,127605887595351923798765477786913079296]
  

[jira] [Commented] (CASSANDRA-5051) Allow automatic cleanup after gc_grace

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632976#comment-13632976
 ] 

Jonathan Ellis commented on CASSANDRA-5051:
---

I meant that localhost=Normal, other=Bootstrap does not exercise the pending 
ranges code, since the local node's ranges do not change during bootstrap.

 Allow automatic cleanup after gc_grace
 --

 Key: CASSANDRA-5051
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5051
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
  Labels: vnodes
 Fix For: 2.0

 Attachments: 0001-5051-v4.patch, 0001-5051-with-test-fixes.patch, 
 0001-CASSANDRA-5051.patch, 0002-5051-remove-upgradesstable.patch, 
 0002-5051-remove-upgradesstable-v4.patch, 0004-5051-additional-test-v4.patch, 
 5051-v2.txt


 When using vnodes, after adding a new node you have to run cleanup on all the 
 machines, because you don't know which are affected, and chances are it was 
 most, if not all, of them.  As an alternative to this intensive process, we 
 could allow cleanup during compaction if the data is older than gc_grace (or 
 perhaps some other time period, since people tend to use gc_grace hacks to get 
 rid of tombstones).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632991#comment-13632991
 ] 

Jonathan Ellis commented on CASSANDRA-4860:
---

bq. 2M KV with Measure.measure() will take 96,000,000 or 96M (2 *24 * 200 
bytes) will fit in key cache.
bq. 2M KV with measureDeep() will take 96M + 48M (48 * 200 + 24 * 200) 
where 48 is the index min size and 24 is the key size.

Clarifying for my own benefit: Vijay is saying that before the original fix, 
the key cache underestimated the real entry size by 1/3, so a configured size 
of 2M would actually allow caching 3M worth of entries.  So to compare 
performance apples-to-apples, we need to allow the fixed code to use an 
equivalent size.
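
(A tiny arithmetic sketch of that reading, using the 96M/48M figures quoted above rather than anything re-measured; the per-entry byte counts are Vijay's, not mine.)

{code}
public class KeyCacheSizing
{
    public static void main(String[] args)
    {
        long mb = 1_000_000L;
        long oldEstimate  = 96 * mb;            // Measure.measure() estimate for the 2M entries
        long deepEstimate = 96 * mb + 48 * mb;  // measureDeep(): 144M for the same entries

        // The old estimate covers only 2/3 of the deep-measured footprint,
        // i.e. it under-counts by 1/3 ...
        double covered = (double) oldEstimate / deepEstimate;        // ~0.67

        // ... so a cache admitting entries under the old accounting effectively
        // held ~1.5x as much real memory as configured.
        double overAdmission = (double) deepEstimate / oldEstimate;  // 1.5

        System.out.printf("covered = %.2f, over-admission = %.1fx%n", covered, overAdmission);
    }
}
{code}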

 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB).  I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632991#comment-13632991
 ] 

Jonathan Ellis edited comment on CASSANDRA-4860 at 4/16/13 4:38 PM:


{quote}
2M KV with Measure.measure() will take 96,000,000 or 96M (2 *24 * 200 
bytes) will fit in key cache.
2M KV with measureDeep() will take 96M + 48M (48 * 200 + 24 * 200) 
where 48 is the index min size and 24 is the key size.
{quote}

Clarifying for my own benefit: Vijay is saying that before the original fix, 
the key cache underestimated the real entry size by 1/3, so a configured size 
of 2M would actually allow caching 3M worth of entries.  So to compare 
performance apples-to-apples, we need to allow the fixed code to use an 
equivalent size.

  was (Author: jbellis):
bq. 2M KV with Measure.measure() will take 96,000,000 or 96M (2 *24 * 
200 bytes) will fit in key cache.
b1. 2M KV with measureDeep() will take 96M + 48M (48 * 200 + 24 * 200) 
where 48 is the index min size and 24 is the key size.

Clarifying for my own benefit: Vijay is saying that before the original fix, 
the key cache underestimated the real entry size by 1/3, so a configured size 
of 2M would actually allow caching 3M worth of entries.  So to compare 
performance apples-to-apples, we need to allow the fixed code to use an 
equivalent size.
  
 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB).  I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632999#comment-13632999
 ] 

Jonathan Ellis commented on CASSANDRA-4860:
---

Suggest standardizing on {{memorySize}} instead of {{size}} in 
IMeasureableMemory, to avoid confusion with size = number of contained 
elements.  Otherwise +1.
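
(For illustration, a sketch of what that rename might look like; this assumes a minimal form of the interface, and the real declaration may differ.)

{code}
// Hypothetical, simplified version of the interface, for illustration only.
public interface IMeasurableMemory
{
    /**
     * Renamed from size(): the number of bytes this object occupies on the heap,
     * not the number of elements it contains.
     */
    long memorySize();
}
{code}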

 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB).  I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632970#comment-13632970
 ] 

Ryan McGuire edited comment on CASSANDRA-4860 at 4/16/13 4:50 PM:
--

I increased my key_cache_size_in_mb and did in fact get better results:

{code}
Averages from the middle 80% of values:
interval_op_rate  : 27848
interval_key_rate : 27848
latency median: 0.7
latency 95th percentile   : 1.4
latency 99.9th percentile : 22.4
Total operation time  : 00:01:20
{code}

The old measure way with 300M key cache:
{code}
Running stress : -n 200 -o read -i 1
output is hidden while collecting stats...
Averages from the middle 80% of values:
interval_op_rate  : 27877
interval_key_rate : 27877
latency median: 0.7
latency 95th percentile   : 1.5
latency 99.9th percentile : 23.4
Total operation time  : 00:01:20
{code}

This is roughly equal to the original measure() method in read performance; I'm 
happy with it!

  was (Author: enigmacurry):
I increased my key_cache_size_in_mb and did in fact get better results:

{code}
Averages from the middle 80% of values:
interval_op_rate  : 27848
interval_key_rate : 27848
latency median: 0.7
latency 95th percentile   : 1.4
latency 99.9th percentile : 22.4
Total operation time  : 00:01:20
{code}

This is roughly equal to the original measure() method in read performance; I'm 
happy with it!
  
 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB).  I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632970#comment-13632970
 ] 

Ryan McGuire edited comment on CASSANDRA-4860 at 4/16/13 4:50 PM:
--

I increased my key_cache_size_in_mb and did in fact get better results:

{code}
Averages from the middle 80% of values:
interval_op_rate  : 27848
interval_key_rate : 27848
latency median: 0.7
latency 95th percentile   : 1.4
latency 99.9th percentile : 22.4
Total operation time  : 00:01:20
{code}

The old measure() way with 300M key cache:
{code}
Running stress : -n 200 -o read -i 1
output is hidden while collecting stats...
Averages from the middle 80% of values:
interval_op_rate  : 27877
interval_key_rate : 27877
latency median: 0.7
latency 95th percentile   : 1.5
latency 99.9th percentile : 23.4
Total operation time  : 00:01:20
{code}

This is roughly equal to the original measure() method in read performance; I'm 
happy with it!

  was (Author: enigmacurry):
I increased my key_cache_size_in_mb and did in fact get better results:

{code}
Averages from the middle 80% of values:
interval_op_rate  : 27848
interval_key_rate : 27848
latency median: 0.7
latency 95th percentile   : 1.4
latency 99.9th percentile : 22.4
Total operation time  : 00:01:20
{code}

The old measure way with 300M key cache:
{code}
Running stress : -n 200 -o read -i 1
output is hidden while collecting stats...
Averages from the middle 80% of values:
interval_op_rate  : 27877
interval_key_rate : 27877
latency median: 0.7
latency 95th percentile   : 1.5
latency 99.9th percentile : 23.4
Total operation time  : 00:01:20
{code}

This is roughly equal to the original measure() method in read performance; I'm 
happy with it!
  
 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours the RowCacheSize was suspiciously low (i.e. 
 70-something MB).  I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries.  Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355.  No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: rename SSTR.markCompacted -> markObsolete to disambiguate with the compacting status in DataTracker

2013-04-16 Thread jbellis
Updated Branches:
  refs/heads/trunk 35ef47ec0 -> 57e51b4bd


rename SSTR.markCompacted -> markObsolete to disambiguate with the compacting 
status in DataTracker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/57e51b4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/57e51b4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/57e51b4b

Branch: refs/heads/trunk
Commit: 57e51b4bd99173fde8656b902900c5a251853806
Parents: 35ef47e
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Apr 16 12:11:40 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Apr 16 12:11:40 2013 -0500

--
 src/java/org/apache/cassandra/db/DataTracker.java  |4 ++--
 .../cassandra/db/compaction/CompactionTask.java|2 +-
 .../apache/cassandra/io/sstable/SSTableReader.java |5 +++--
 .../apache/cassandra/tools/StandaloneScrubber.java |2 +-
 4 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/57e51b4b/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 27c0480..311099d 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -212,7 +212,7 @@ public class DataTracker
 // A good enough approach is to mark the sstables involved 
compacted, which if compaction succeeded
 // is harmlessly redundant, and if it failed ensures that at least 
the sstable will get deleted on restart.
 for (SSTableReader sstable : unmark)
-sstable.markCompacted();
+sstable.markObsolete();
 }
 
 View currentView, newView;
@@ -364,7 +364,7 @@ public class DataTracker
 long size = sstable.bytesOnDisk();
 StorageMetrics.load.dec(size);
 cfstore.metric.liveDiskSpaceUsed.dec(size);
-boolean firstToCompact = sstable.markCompacted();
+boolean firstToCompact = sstable.markObsolete();
 assert firstToCompact : sstable + " was already marked compacted";
 sstable.releaseReference();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/57e51b4b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 8c4a102..e7e0ec6 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -213,7 +213,7 @@ public class CompactionTask extends AbstractCompactionTask
 // also remove already completed SSTables
 for (SSTableReader sstable : sstables)
 {
-sstable.markCompacted();
+sstable.markObsolete();
 sstable.releaseReference();
 }
 throw Throwables.propagate(t);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/57e51b4b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index 1d07fea..153729d 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -964,14 +964,15 @@ public class SSTableReader extends SSTable
 }
 
 /**
- * Mark the sstable as compacted.
+ * Mark the sstable as obsolete, i.e., compacted into newer sstables.
+ *
  * When calling this function, the caller must ensure that the 
SSTableReader is not referenced anywhere
  * except for threads holding a reference.
  *
  * @return true if the this is the first time the file was marked 
compacted.  With rare exceptions
  * (see DataTracker.unmarkCompacted) calling this multiple times would be 
buggy.
  */
-public boolean markCompacted()
+public boolean markObsolete()
 {
 if (logger.isDebugEnabled())
 logger.debug("Marking " + getFilename() + " compacted");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/57e51b4b/src/java/org/apache/cassandra/tools/StandaloneScrubber.java
--
diff --git a/src/java/org/apache/cassandra/tools/StandaloneScrubber.java 
b/src/java/org/apache/cassandra/tools/StandaloneScrubber.java
index 

buildbot success in ASF Buildbot on cassandra-trunk

2013-04-16 Thread buildbot
The Buildbot has detected a restored build on builder cassandra-trunk while 
building cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-trunk/builds/2581

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch trunk] 57e51b4bd99173fde8656b902900c5a251853806
Blamelist: Jonathan Ellis jbel...@apache.org

Build succeeded!

sincerely,
 -The Buildbot





[jira] [Commented] (CASSANDRA-5480) Case insensitive cqlsh

2013-04-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633068#comment-13633068
 ] 

Kévin LOVATO commented on CASSANDRA-5480:
-

My bad, I searched the bug DB but didn't find this one. Thank you.

 Case insensitive cqlsh
 --

 Key: CASSANDRA-5480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5480
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.2
Reporter: Kévin LOVATO

 cqlsh doesn't seem to be case sensitive for strategy_options, so the 
 following query:
 {code} ALTER KEYSPACE MyKeyspace WITH replication = {'class': 
 'NetworkTopologyStrategy', 'Paris-CEN' : 1 };
 {code}
 Modified my keyspace with strategy_options 'paris-cen' which differs from 
 what is configured in my {{cassandra-topology.properties}} and made 
 subsequent queries to this keyspace fail with an UnavailableException.
 I could fix my issue by updating the Keyspace configuration by code but it 
 would be nice to be able to do it using cqlsh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: finish s/markCompacted/markObsolete/

2013-04-16 Thread jbellis
Updated Branches:
  refs/heads/trunk 57e51b4bd -> 46595896c


finish s/markCompacted/markObsolete/


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46595896
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46595896
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46595896

Branch: refs/heads/trunk
Commit: 46595896c0bc74f7305cab890778cc411dd38375
Parents: 57e51b4
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Apr 16 12:45:08 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Apr 16 12:45:08 2013 -0500

--
 .../org/apache/cassandra/db/ColumnFamilyStore.java |6 +++---
 src/java/org/apache/cassandra/db/DataTracker.java  |6 +++---
 .../cassandra/db/compaction/CompactionManager.java |2 +-
 .../cassandra/db/compaction/CompactionTask.java|2 +-
 .../apache/cassandra/io/sstable/SSTableReader.java |4 ++--
 .../cassandra/db/compaction/CompactionsTest.java   |2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/46595896/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 8dbe52a..3d930bd 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1050,10 +1050,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
CompactionManager.instance.performSSTableRewrite(ColumnFamilyStore.this, 
excludeCurrentVersion);
 }
 
-public void markCompacted(Collection<SSTableReader> sstables, 
OperationType compactionType)
+public void markObsolete(Collection<SSTableReader> sstables, OperationType 
compactionType)
 {
 assert !sstables.isEmpty();
-data.markCompacted(sstables, compactionType);
+data.markObsolete(sstables, compactionType);
 }
 
 public void replaceCompactedSSTables(Collection<SSTableReader> sstables, 
 Iterable<SSTableReader> replacements, OperationType compactionType)
@@ -2190,7 +2190,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 if (truncatedSSTables.isEmpty())
 return ReplayPosition.NONE;
 
-markCompacted(truncatedSSTables, OperationType.UNKNOWN);
+markObsolete(truncatedSSTables, OperationType.UNKNOWN);
 return ReplayPosition.getReplayPosition(truncatedSSTables);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/46595896/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 311099d..6d14d24 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -183,7 +183,7 @@ public class DataTracker
  * @return true if we are able to mark the given @param sstables as 
compacted, before anyone else
  *
  * Note that we could acquire references on the marked sstables and 
release them in
- * unmarkCompacting, but since we will never call markCompacted on a 
sstable marked
+ * unmarkCompacting, but since we will never call markObsolete on a 
sstable marked
  * as compacting (unless there is a serious bug), we can skip this.
  */
 public boolean markCompacting(Iterable<SSTableReader> sstables)
@@ -200,7 +200,7 @@ public class DataTracker
 }
 
 /**
- * Removes files from compacting status: this is different from 
'markCompacted'
+ * Removes files from compacting status: this is different from 
'markObsolete'
  * because it should be run regardless of whether a compaction succeeded.
  */
 public void unmarkCompacting(Iterable<SSTableReader> unmark)
@@ -224,7 +224,7 @@ public class DataTracker
 while (!view.compareAndSet(currentView, newView));
 }
 
-public void markCompacted(Collection<SSTableReader> sstables, 
OperationType compactionType)
+public void markObsolete(Collection<SSTableReader> sstables, OperationType 
compactionType)
 {
 replace(sstables, Collections.<SSTableReader>emptyList());
 notifySSTablesChanged(sstables, 
 Collections.<SSTableReader>emptyList(), compactionType);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/46595896/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 

git commit: comment

2013-04-16 Thread jbellis
Updated Branches:
  refs/heads/trunk 46595896c -> 42810a5be


comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/42810a5b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/42810a5b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/42810a5b

Branch: refs/heads/trunk
Commit: 42810a5be972995d459d70dbe071aa44b9f92b3e
Parents: 4659589
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Apr 16 12:54:17 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Apr 16 12:54:17 2013 -0500

--
 src/java/org/apache/cassandra/db/DataTracker.java |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/42810a5b/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 6d14d24..d4974d8 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -207,9 +207,9 @@ public class DataTracker
 {
 if (!cfstore.isValid())
 {
-// We don't know if the original compaction suceeded or failed, 
which makes it difficult to know
-// if the sstable reference has already been released.
-// A good enough approach is to mark the sstables involved 
compacted, which if compaction succeeded
+// The CF has been dropped.  We don't know if the original 
compaction suceeded or failed,
+// which makes it difficult to know if the sstable reference has 
already been released.
+// A good enough approach is to mark the sstables involved 
obsolete, which if compaction succeeded
 // is harmlessly redundant, and if it failed ensures that at least 
the sstable will get deleted on restart.
 for (SSTableReader sstable : unmark)
 sstable.markObsolete();



[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2013-04-16 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633224#comment-13633224
 ] 

Edward Capriolo commented on CASSANDRA-4734:


Anecdotally, I feel few actually understand CL (heck, maybe I do not even 
understand it). What I find is that users come to realize that Cassandra is 
not atomic in the atomic/isolated sense. Once that mental hurdle is crossed, 
users fall into thinking along the lines of: CL.ONE is the most likely to succeed, 
and the fastest. Naturally, since ONE is the fastest and the least likely to cause 
problems, it becomes the application default. 

IMHO there is a logic to setting CL at the column family level; however, that 
implies that all operations on a column family would require the same CL, which 
is not true: even inside a row key, some columns might require a different CL 
based on application requirements. Personally, I do not see how anything but a 
per-operation CL could be correct. If only one of three natural endpoints is up, I would 
not want my client to deny me the ability to write because the column family, 
keyspace, ...whatever, dictates the CL it should be written at.
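
(As an aside, per-operation CL is exactly what the protocol-level approach allows from the client side. A rough sketch using the DataStax Java driver follows; the driver class and method names here are from memory and the keyspace/table are made up, so treat it as illustrative only.)

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class PerOperationCL
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");   // hypothetical keyspace

        // Same table, two different CLs, chosen per operation by the client.
        SimpleStatement write = new SimpleStatement(
                "INSERT INTO events (id, payload) VALUES (1, 'x')");  // hypothetical table
        write.setConsistencyLevel(ConsistencyLevel.ONE);
        session.execute(write);

        SimpleStatement read = new SimpleStatement("SELECT payload FROM events WHERE id = 1");
        read.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(read);

        cluster.close();
    }
}
{code}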

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement an automatic retry policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5424:
--

Attachment: 5424-v3-1.2.txt

v3 attached.

- NTS now uses LinkedHashSet in calculateNaturalEndpoint to preserve insertion 
order while eliminating duplicates (see the small illustration after this list).

- I think it is unsafe to use cached endpoints through getNaturalEndpoints 
since tokenMetadata cannot be consistent inside getPrimaryRangesForEndpoint, so 
I stuck with the impl from v2.

- Fixed sampleKeyRange. I think the problem is that the name of the method 
tokenMetadata.getPrimaryRangeFor is confusing. Probably we should rename it 
to just getRangeFor.

- Added test for getPrimaryRangesForEndpoint to StorageServiceServerTest.
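
(A tiny, generic illustration of the LinkedHashSet property being relied on above; this is not the NTS code itself.)

{code}
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class EndpointOrder
{
    public static void main(String[] args)
    {
        // Duplicates are dropped, but first-seen insertion order is preserved,
        // unlike a plain HashSet, whose iteration order is unspecified.
        Set<String> endpoints = new LinkedHashSet<>(
                Arrays.asList("10.2.29.38", "10.46.113.236", "10.2.29.38"));

        System.out.println(endpoints); // [10.2.29.38, 10.46.113.236]
    }
}
{code}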


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it: range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 
 node and the .236 node until I run a repair, without -pr, on .38.
 It seems like the primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
 synced
  INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
 (line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
 successfully
 root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
 (line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
 merkle tree to /10.2.29.38 for (Keyspace1,Standard1)
 root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 

[jira] [Comment Edited] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-16 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633274#comment-13633274
 ] 

Yuki Morishita edited comment on CASSANDRA-5424 at 4/16/13 8:01 PM:


v3 attached.

- NTS now uses LinkedHashSet in calculateNaturalEndpoint to preserve insertion 
order while eliminating duplicates.

- I think it is unsafe to use cached endpoints through getNaturalEndpoints 
since tokenMetadata cannot be consistent inside getPrimaryRangesForEndpoint, so 
I stuck with the impl from v2.

- Fixed sampleKeyRange. I think the problem is that the name of the method 
tokenMetadata.getPrimaryRangeFor is confusing. Probably we should rename it 
to just getRangeFor.

- Added test for getPrimaryRangesForEndpoint to StorageServiceServerTest.


  was (Author: yukim):
v3 attached.

- NTS now uses LinkedHashSet in calculateNaturalEndpoint to preserve insertion 
order while eliminating duplicates.

- I think it is unsafe to use cached endpoints through getNaturalEndpoints 
since tokenMetadata cannot be consistent inside getPrimaryRangesForEndpoint, so 
I stick with impl from v2.

- fix sampleKeyRange. I think the problem is the nome of the method 
tokenMetadata.getPrimaryRangeFor is confusing. Probably we should rename that 
to just getRangeFor.

- Added test for getPrimaryRangesForEndpoint to StorageServiceServerTest.

  
 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 --

 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.7
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Critical
 Fix For: 1.2.5

 Attachments: 5424-1.1.txt, 5424-v2-1.2.txt, 5424-v3-1.2.txt


 nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
 isn't in all DC's
 Commands follow, but the TL;DR of it: range 
 (127605887595351923798765477786913079296,0] doesn't get repaired between the .38 
 node and the .236 node until I run a repair, without -pr, on .38.
 It seems like the primary range calculation doesn't take schema into account, but 
 deciding who to ask for merkle trees from does.
 {noformat}
 Address DC  RackStatus State   LoadOwns   
  Token   
   
  127605887595351923798765477786913079296 
 10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00% 
  0   
 10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00% 
  42535295865117307932921825928971026432  
 10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00% 
  127605887595351923798765477786913079296 
 create keyspace Keyspace1
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {Analytics : 2}
   and durable_writes = true;
 ---
 # nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
 [2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
 keyspace Keyspace1
 [2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
 for range (0,42535295865117307932921825928971026432] finished
 [2013-04-03 15:47:00,881] Repair command #1 finished
 root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
 /var/log/cassandra/system.log
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
 (line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
 sync a1/10.2.29.38, /10.46.113.236 on range 
 (0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
  INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
 (line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
 trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from /10.46.113.236
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
 (line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle 
 tree for Standard1 from a1/10.2.29.38
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
 (line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
 /10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
  INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 

[jira] [Commented] (CASSANDRA-5469) Race condition between index building and scrubDirectories() at startup

2013-04-16 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633293#comment-13633293
 ] 

Yuki Morishita commented on CASSANDRA-5469:
---

Probably related to CASSANDRA-5350?
MeteredFlusher could open ColumnFamilyStore before scrubDirectories.

 Race condition between index building and scrubDirectories() at startup
 ---

 Key: CASSANDRA-5469
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5469
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.12, 1.1.10, 1.2.4
Reporter: amorton

 From user group 
 http://www.mail-archive.com/user@cassandra.apache.org/msg29207.html
 In CassandraDaemon.setup() the call to SystemTable.checkHealth() results in 
 the CFS's being created. As part of their creation they kick off an async 
 secondary index build if the index is not marked as built 
 (SecondaryIndexManager.addIndexedColumn()). Later in CD.setup() the call is 
 made to scrub the data dirs, and this can race with the tmp files created by 
 the index rebuild. The result is an error that prevents the node from starting.
 Should we delay rebuilding secondary indexes until after startup has 
 completed, or rebuild them synchronously?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633298#comment-13633298
 ] 

Jonathan Ellis commented on CASSANDRA-4734:
---

Nothing here denies clients the ability to set CL on a per-operation basis.

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement an automatic retry policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5483) Repair tracing

2013-04-16 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-5483:
-

 Summary: Repair tracing
 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor


I think it would be nice to log repair stats and results, like query tracing 
stores traces to the system keyspace. With it, you don't have to look up each log 
file to see what the status was and how the repair you invoked performed. 
Instead, you can query the repair log with the session ID to see the state and 
stats of all nodes involved in that repair session.
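
(For comparison, query tracing data can already be pulled out of system_traces by session id; a repair log could presumably be read the same way. The sketch below uses the DataStax Java driver against the existing system_traces.events table; a dedicated repair table, if added, would be new and is not shown here.)

{code}
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TraceLookup
{
    public static void main(String[] args)
    {
        UUID sessionId = UUID.fromString(args[0]);  // the trace/repair session id you were given

        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        for (Row row : session.execute(
                "SELECT source, activity, source_elapsed FROM system_traces.events"
                + " WHERE session_id = " + sessionId))
        {
            System.out.println(row.getInet("source") + "  " + row.getString("activity"));
        }
        cluster.close();
    }
}
{code}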


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2013-04-16 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633309#comment-13633309
 ] 

Edward Capriolo commented on CASSANDRA-4734:


@Jonathan. Yes. I agree with you, and understand that. I was furthering your 
point.

{quote}
Fundamentally I don't think I buy that per-CF is the right way to think about CL
{quote}


 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement an automatic retry policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5483) Repair tracing

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633321#comment-13633321
 ] 

Jonathan Ellis commented on CASSANDRA-5483:
---

Good idea!

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
  Labels: repair

 I think it would be nice to log repair stats and results, like query tracing 
 stores traces to the system keyspace. With it, you don't have to look up each log 
 file to see what the status was and how the repair you invoked performed. 
 Instead, you can query the repair log with the session ID to see the state and 
 stats of all nodes involved in that repair session.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4734) Move CQL3 consistency to protocol

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633325#comment-13633325
 ] 

Jonathan Ellis commented on CASSANDRA-4734:
---

Ah, perfect.  Carry on. :)

 Move CQL3 consistency to protocol
 -

 Key: CASSANDRA-4734
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4734
 Project: Cassandra
  Issue Type: Task
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.0 beta 2

 Attachments: 0001-Move-consistency-level-to-the-protocol-level-2.txt, 
 0001-Move-consistency-level-to-the-protocol-level-3.txt, 
 0001-Move-consistency-level-to-the-protocol-level.txt, 
 0002-Remove-remains-of-4448-3.txt, 0002-Remove-remains-of-4448.txt, 
 0002-Thrift-generated-file-diffs-2.txt, 
 0003-Thrift-generated-file-diffs-3.txt, 0003-Thrift-generated-file-diffs.txt


 Currently, in CQL3, you set the consistency level of an operation in
 the language, eg 'SELECT * FROM foo USING CONSISTENCY QUORUM'.  It now
 looks like this was a mistake, and that consistency should be set at
 the protocol level, i.e. as a separate parameter along with the query.
 The reasoning is that the CL applies to the guarantee provided by the
 operation being successful, not to the query itself.  Specifically,
 having the CL being part of the language means that CL is opaque to
 low level client libraries without themselves parsing the CQL, which
 we want to avoid.  Thus,
 - Those libraries can't implement an automatic retry policy, where a query 
 would be retried with a smaller CL.  (I'm aware that this is often a Bad 
 Idea, but it does have legitimate uses and not having that available is seen 
 as a regression from the Thrift api.)
 - We had to introduce CASSANDRA-4448 to allow the client to configure some  
 form of default CL since the library can't handle that anymore, which is  
 hackish.
 - Executing prepared statements with different CL requires preparing multiple 
 statements.
 - CL only makes sense for BATCH operations as a whole, not the sub-statements 
 within the batch. Currently CQL3 fixes that by validating the given CLs 
 match, but it would be much more clear if the CL was on the protocol side.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: comments

2013-04-16 Thread jbellis
Updated Branches:
  refs/heads/trunk 42810a5be -> 67f5d6f1c


comments


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67f5d6f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67f5d6f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67f5d6f1

Branch: refs/heads/trunk
Commit: 67f5d6f1c9398cc8b69b244c583ed64b16f19431
Parents: 42810a5
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Apr 16 15:30:08 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Apr 16 15:30:08 2013 -0500

--
 .../apache/cassandra/net/sink/IMessageSink.java|   12 
 1 files changed, 12 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67f5d6f1/src/java/org/apache/cassandra/net/sink/IMessageSink.java
--
diff --git a/src/java/org/apache/cassandra/net/sink/IMessageSink.java 
b/src/java/org/apache/cassandra/net/sink/IMessageSink.java
index 721360c..d6b6496 100644
--- a/src/java/org/apache/cassandra/net/sink/IMessageSink.java
+++ b/src/java/org/apache/cassandra/net/sink/IMessageSink.java
@@ -24,7 +24,19 @@ import org.apache.cassandra.net.MessageOut;
 
 public interface IMessageSink
 {
+/**
+ * Transform or drop an outgoing message
+ *
+ * @return null if the message is dropped, or the transformed message to 
send, which may be just
+ * the original message
+ */
 public MessageOut handleMessage(MessageOut message, int id, InetAddress 
to);
 
+/**
+ * Transform or drop an incoming message
+ *
+ * @return null if the message is dropped, or the transformed message to 
receive, which may be just
+ * the original message
+ */
 public MessageIn handleMessage(MessageIn message, int id, InetAddress to);
 }
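
(A hypothetical implementation sketch of the interface documented above, not part of the commit; package and class names are taken from the diff.)

{code}
import java.net.InetAddress;

import org.apache.cassandra.net.MessageIn;
import org.apache.cassandra.net.MessageOut;
import org.apache.cassandra.net.sink.IMessageSink;

// Illustrative only: drop every outgoing message to one endpoint (e.g. to
// simulate a partition in a test) and pass everything else through untouched.
public class DropOutboundToHostSink implements IMessageSink
{
    private final InetAddress victim;

    public DropOutboundToHostSink(InetAddress victim)
    {
        this.victim = victim;
    }

    public MessageOut handleMessage(MessageOut message, int id, InetAddress to)
    {
        return to.equals(victim) ? null : message;   // null means the message is dropped
    }

    public MessageIn handleMessage(MessageIn message, int id, InetAddress to)
    {
        return message;                              // incoming messages pass through unchanged
    }
}
{code}

Registration would presumably go through SinkManager, as existing test sinks do.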



[jira] [Commented] (CASSANDRA-5469) Race condition between index building and scrubDirectories() at startup

2013-04-16 Thread amorton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633376#comment-13633376
 ] 

amorton commented on CASSANDRA-5469:


I think this is a different problem. 

When the CFS's are created they will start rebuilding secondary indexes asynchronously. 
Flushing the secondary index SSTables is not done by the MeteredFlusher; see 
SecondaryIndex.buildIndexBlocking(). Then when scrubDirectories() runs it will 
delete those -tmp- files out from under the index rebuild.

 Race condition between index building and scrubDirectories() at startup
 ---

 Key: CASSANDRA-5469
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5469
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.12, 1.1.10, 1.2.4
Reporter: amorton

 From user group 
 http://www.mail-archive.com/user@cassandra.apache.org/msg29207.html
 In CassandraDaemon.setup() the call to SystemTable.checkHealth() results in 
 the CFSs being created. As part of their creation they kick off an async 
 secondary index build if the index is not marked as built 
 (SecondaryIndexManager.addIndexedColumn()). Later in CD.setup() the call is 
 made to scrub the data dirs, and this can race with the tmp files created by 
 the index rebuild. The result is an error that prevents the node from starting.
 Should we delay rebuilding secondary indexes until after startup has 
 completed, or rebuild them synchronously?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5462) Ant code coverage with unit and dtests

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633389#comment-13633389
 ] 

Jonathan Ellis commented on CASSANDRA-5462:
---

Hmm, did you attach the patch to CASSANDRA-5326 instead?

 Ant code coverage with unit and dtests
 --

 Key: CASSANDRA-5462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5462
 Project: Cassandra
  Issue Type: New Feature
Reporter: Ryan McGuire
Assignee: Brandon Williams

 This is a patch to our build.xml to integrate a cobertura code coverage 
 report across both the unit tests and the dtests. I've had this working for 
 a while, but it's rather unwieldy: it takes over 7 hours to run on my 
 i5-based laptop. This is because it runs through the entire dtest suite 
 twice, once without vnodes turned on and once with. It does work repeatably, 
 though, so although it's a monster, it's probably worth including.
 See http://static.enigmacurry.com/tmp/cobertura-report-4-with-vnodes/ for 
 sample output, run against trunk today.
 Once applied, you just need to run '*ant codecoverage*'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Benjamin Coverston (JIRA)
Benjamin Coverston created CASSANDRA-5484:
-

 Summary: Support custom secondary indexes in CQL
 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston


Through thrift users can add custom secondary indexes to the column metadata.

The following syntax is used in PLSQL, and I think we could use something 
similar.

CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) [PARAMETERS 
(PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Benjamin Coverston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Coverston updated CASSANDRA-5484:
--

Description: 
Through thrift users can add custom secondary indexes to the column metadata.

The following syntax is used in PLSQL, and I think we could use something 
similar.

CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
[PARAMETERS (PARAM[, PARAM])]

  was:
Through thrift users can add custom secondary indexes to the column metadata.

The following syntax is used in PLSQL, and I think we could use something 
similar.

CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) [PARAMETERS 
(PARAM[, PARAM])]


 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633405#comment-13633405
 ] 

Jonathan Ellis commented on CASSANDRA-5484:
---

I have a mild preference for PostgreSQL syntax, which adds no additional 
keywords (we already have USING and WITH): {{CREATE INDEX [name] ON table 
[USING method] (columns) [WITH parameters]}}
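
For comparison only, here is roughly what an index definition might look like under each proposal. Both statements are hypothetical sketches of syntax under discussion, not CQL that Cassandra currently accepts, and the table, column and index class names are made up.

{code}
-- Oracle-style proposal from the description (hypothetical):
CREATE INDEX users_custom_idx ON users (bio)
    INDEXTYPE IS 'com.example.MyIndex' PARAMETERS ('mode=fuzzy');

-- PostgreSQL-style proposal (hypothetical):
CREATE INDEX users_custom_idx ON users USING 'com.example.MyIndex' (bio)
    WITH mode = 'fuzzy';
{code}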

 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5474) failure when passing null parameter to prepared statement

2013-04-16 Thread Pierre Chalamet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633406#comment-13633406
 ] 

Pierre Chalamet commented on CASSANDRA-5474:


You are totally right. Seems I made a huge mistake after applying the patch from 
CASSANDRA-5468. I copied the wrong jar :(
Thanks for your patience!

 failure when passing null parameter to prepared statement
 -

 Key: CASSANDRA-5474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5474
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: windows 8 x64, 1.7.0_11-b21 x64
Reporter: Pierre Chalamet

 I have a failure when passing a null parameter to the prepared statement 
 bellow when going through the cql 3 bin protocol:
 {code}
 Exec: CREATE KEYSPACE Tests WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 Exec: CREATE TABLE Tests.AllTypes (a int, b int, primary key (a))
 Prepare: insert into Tests.AllTypes (a, b) values (?, ?)
 {code}
 Passing a=1 and b=null cause the following error:
 {code}
 DEBUG 23:07:23,315 Responding: RESULT PREPARED 
 59b3d6baed67d5c0a3ced29ebb4277c5 [a(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type][b(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type]
 DEBUG 23:07:23,292 Compaction buckets are []
 DEBUG 23:07:23,336 Received: EXECUTE 59b3d6baed67d5c0a3ced29ebb4277c5 with 2 
 values at consistency QUORUM
 ERROR 23:07:23,338 Unexpected exception during request
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.marshal.Int32Type.validate(Int32Type.java:95)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:257)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:282)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.mutationForKey(UpdateStatement.java:250)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.getMutations(UpdateStatement.java:133)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:92)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:254)
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:122)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 DEBUG 23:07:23,337 No tasks available
 DEBUG 23:07:23,341 request complete
 DEBUG 23:07:23,343 Responding: ERROR SERVER_ERROR: 
 java.lang.NullPointerException
 {code}
 When serializing value for b, a bytes array of len -1 is transmitted 
 (accordingly to the spec):
 {code}
 [bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
 no byte should follow and the value represented is `null`.
 {code}
 CASSANDRA-5081 added support for null params. Am I doing something wrong 
 there ? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633405#comment-13633405
 ] 

Jonathan Ellis edited comment on CASSANDRA-5484 at 4/16/13 9:40 PM:


I have a mild preference for PostgreSQL syntax, which adds no additional 
keywords (we already have USING and WITH): 

{{CREATE INDEX [name] ON table [USING method] (columns) [WITH 
parameters]}}

  was (Author: jbellis):
I have a mild preference for PostgreSQL syntax, which adds no additional 
keywords (we already have USING and WITH): {{CREATE INDEX [name] ON table 
[USING method] (columns) [WITH parameters]}}
  
 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5474) failure when passing null parameter to prepared statement

2013-04-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633413#comment-13633413
 ] 

Aleksey Yeschenko commented on CASSANDRA-5474:
--

That happens. Thanks for opening the issue anyway (and for CASSANDRA-5468, too).

 failure when passing null parameter to prepared statement
 -

 Key: CASSANDRA-5474
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5474
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: windows 8 x64, 1.7.0_11-b21 x64
Reporter: Pierre Chalamet

 I have a failure when passing a null parameter to the prepared statement 
 bellow when going through the cql 3 bin protocol:
 {code}
 Exec: CREATE KEYSPACE Tests WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor' : 1}
 Exec: CREATE TABLE Tests.AllTypes (a int, b int, primary key (a))
 Prepare: insert into Tests.AllTypes (a, b) values (?, ?)
 {code}
 Passing a=1 and b=null cause the following error:
 {code}
 DEBUG 23:07:23,315 Responding: RESULT PREPARED 
 59b3d6baed67d5c0a3ced29ebb4277c5 [a(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type][b(tests, alltypes), 
 org.apache.cassandra.db.marshal.Int32Type]
 DEBUG 23:07:23,292 Compaction buckets are []
 DEBUG 23:07:23,336 Received: EXECUTE 59b3d6baed67d5c0a3ced29ebb4277c5 with 2 
 values at consistency QUORUM
 ERROR 23:07:23,338 Unexpected exception during request
 java.lang.NullPointerException
 at 
 org.apache.cassandra.db.marshal.Int32Type.validate(Int32Type.java:95)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:257)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:282)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.mutationForKey(UpdateStatement.java:250)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.getMutations(UpdateStatement.java:133)
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:92)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:254)
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:122)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:287)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:45)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:69)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 DEBUG 23:07:23,337 No tasks available
 DEBUG 23:07:23,341 request complete
 DEBUG 23:07:23,343 Responding: ERROR SERVER_ERROR: 
 java.lang.NullPointerException
 {code}
 When serializing value for b, a bytes array of len -1 is transmitted 
 (accordingly to the spec):
 {code}
 [bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
 no byte should follow and the value represented is `null`.
 {code}
 CASSANDRA-5081 added support for null params. Am I doing something wrong 
 there ? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5432) AntiEntropy Repair Freezing on 1.2.3

2013-04-16 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633414#comment-13633414
 ] 

Arya Goudarzi commented on CASSANDRA-5432:
--

I narrowed this down to the JMX port range, which must be opened on the public 
IPs. Here are the steps to reproduce:

This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in us-west-2
Using Ec2MultiRegionSnitch and SSL enabled for DC_ONLY and 
NetworkTopologyStrategy with strategy_options: us-east-1:3;us-west-2:3;
C* instances have a security group called 'cluster1'
security group 'cluster1' in each region is configured as follows:
Allow TCP:
7199 from cluster1 (JMX)
1024 - 65535 from cluster1 (JMX Random Ports)
7100 from cluster1 (Configured Normal Storage)
7103 from cluster1 (Configured SSL Storage)
9160 from cluster1 (Configured Thrift RPC Port)
9160 from client_group
For each node's public IP we also have this rule set to enable cross-region 
communication:
7103 from public_ip

The above is a functioning and happy setup. You run repair, and it finishes 
successfully.

Broken Setup:

Upgrade to 1.2.4 without changing any of the above security group settings:

Run repair. The repair will not receive the Merkle Tree for itself, and thus 
hangs. See the description. The test in the description was done without having the 
other region up, but the settings were exactly the same.

Now for each public_ip add a security group rule as such to cluster1 security 
group:

Allow TCP: 1024 - 65535 from public_ip

Run repair. Things will magically work now. 

If nothing in terms of ports and networking has changed in 1.2, then why is the 
above happening? I can consistently reproduce it. 

This also affects gossip. If you don't have the JMX ports open on the public IPs, 
then gossip will not see any node except itself after a snap restart of all 
nodes at once. 



 AntiEntropy Repair Freezing on 1.2.3
 

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5432) AntiEntropy Repair Freezing on 1.2.3

2013-04-16 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5432:
-

Affects Version/s: (was: 1.2.3)
   1.2.4

 AntiEntropy Repair Freezing on 1.2.3
 

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633416#comment-13633416
 ] 

Jonathan Ellis commented on CASSANDRA-5484:
---

Actually, if it wouldn't confuse the parser too much, I'd prefer no keyword at 
all for the index implementation: 

{{CREATE [method] INDEX [name] ON table (columns) [WITH parameters]}}

 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4210) Support for variadic parameters list for in clause in prepared cql query

2013-04-16 Thread Pierre Chalamet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633424#comment-13633424
 ] 

Pierre Chalamet commented on CASSANDRA-4210:


This is still a problem when trying to bind an IN parameter for a prepared 
statement, even in 1.2.4. From what I've seen, the column spec returned after 
preparing 
{code}
select * from Town where key in (?)
{code}

just says that the parameter is of type 'key', not a set of that type.

It would be really nice for binary protocol drivers to know they could bind a 
set of values for such a parameter (and I'm pretty sure this info is known when 
the statement is prepared).
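
To make the request concrete: today a driver has to fix the number of values at prepare time, whereas the ask is a single marker that can be bound to a collection. The second statement below is hypothetical at this point and is shown only to illustrate the desired behaviour.

{code}
-- Works today, but the number of values is fixed when the statement is prepared:
SELECT * FROM Town WHERE key IN (?, ?);

-- What this ticket asks for (hypothetical): one marker bound to a list of keys
SELECT * FROM Town WHERE key IN ?;
{code}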

 Support for variadic parameters list for in clause in prepared cql query
 --

 Key: CASSANDRA-4210
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.0
 Environment: prepared cql queries
Reporter: Pierre Chalamet
Priority: Minor

 This query
 {code}
 select * from Town where key in (?)
 {code}
 only allows one parameter for '?'.
 This means querying for 'Paris' and 'London' can't be executed in one step 
 with this prepared statement.
 Current workarounds are:
 * either execute the prepared query 2 times with 'Paris' then 'London'
 * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
 the 2 parameters
 Having a support for variadic parameters list with in clause could improve 
 performance:
 * single hop to get the data
 * // fetching server side

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633436#comment-13633436
 ] 

Jonathan Ellis commented on CASSANDRA-5484:
---

Let's also split out parameterization to another ticket -- I don't think we 
support that yet for any index type.

 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Benjamin Coverston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633442#comment-13633442
 ] 

Benjamin Coverston commented on CASSANDRA-5484:
---

wfm

 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5432:
-

Summary: Repair Freeze/Gossip Invisibility Issues 1.2.4  (was: AntiEntropy 
Repair Freezing on 1.2.3)

 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5484) Support custom secondary indexes in CQL

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5484:
--

Assignee: Aleksey Yeschenko

 Support custom secondary indexes in CQL
 ---

 Key: CASSANDRA-5484
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5484
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Benjamin Coverston
Assignee: Aleksey Yeschenko

 Through thrift users can add custom secondary indexes to the column metadata.
 The following syntax is used in PLSQL, and I think we could use something 
 similar.
 CREATE INDEX NAME ON TABLE (COLUMN) [INDEXTYPE IS (TYPENAME) 
 [PARAMETERS (PARAM[, PARAM])]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5432:
-

Description: 
Read comment 6. This description summarizes the repair issue only, but I 
believe there is a bigger problem going on with networking, as described in that 
comment. 


Since I have upgraded our sandbox cluster, I am unable to run repair on any 
node and I am reaching our gc_grace seconds this weekend. Please help. So far, 
I have tried the following suggestions:

- nodetool scrub
- offline scrub
- running repair on each CF separately. Didn't matter. All got stuck the same 
way.

The repair command just gets stuck and the machine is idling. Only the 
following logs are printed for repair job:

 INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
Starting repair command #4, repairing 1 ranges for keyspace 
cardspring_production
 INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
(line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
(1808575600,42535295865117307932921825930779602032] for 
keyspace_production.[comma separated list of CFs]
 INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
(line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
/X.X.X.190])
 INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
(line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree 
for ColumnFamilyName from /X.X.X.43
 INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
(line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree 
for ColumnFamilyName from /X.X.X.56

Please advise. 

  was:
Since I have upgraded our sandbox cluster, I am unable to run repair on any 
node and I am reaching our gc_grace seconds this weekend. Please help. So far, 
I have tried the following suggestions:

- nodetool scrub
- offline scrub
- running repair on each CF separately. Didn't matter. All got stuck the same 
way.

The repair command just gets stuck and the machine is idling. Only the 
following logs are printed for repair job:

 INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
Starting repair command #4, repairing 1 ranges for keyspace 
cardspring_production
 INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
(line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
(1808575600,42535295865117307932921825930779602032] for 
keyspace_production.[comma separated list of CFs]
 INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
(line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
/X.X.X.190])
 INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
(line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree 
for ColumnFamilyName from /X.X.X.43
 INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
(line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle tree 
for ColumnFamilyName from /X.X.X.56

Please advise. 


 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 

[jira] [Comment Edited] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633414#comment-13633414
 ] 

Arya Goudarzi edited comment on CASSANDRA-5432 at 4/16/13 10:22 PM:


I narrowed this down to JMX port range, and they must be opened on the public 
IPs. Here are the steps to reproduce:

This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in us-west-2
Using Ec2MultiRegionSnitch and SSL enabled for DC_ONLY and 
NetworkTopologyStrategy with strategy_options: us-east-1:3;us-west-2:3;
C* instances have a security group called 'cluster1'
security group 'cluster1' in each region is configured as such
Allow TCP:
7199 from cluster1 (JMX)
1024 - 65535 from cluster1 (JMX Random Ports)
7100 from cluster1 (Configured Normal Storage)
7103 from cluster1 (Configured SSL Storage)
9160 from cluster1 (Configured Thrift RPC Port)
9160 from client_group
foreach node's public IP we also have this rule set to enable cross region 
comminication:
7103 from public_ip

The above is a functioning and happy setup. You run repair, and it finishes 
successfully.

Broken Setup:

Upgrade to 1.2.4 without changing any of the above security group settings:

Run repair. The repair will not receive the Merkle Tree for itself. Thus 
hanging. See description. The test in description was done with one region with 
strategy of us-east-1:3, but other settings were exactly the same.

Now for each public_ip add a security group rule as such to cluster1 security 
group:

Allow TCP: 1024 - 65535 from public_ip

Run repair. Things will magically work now. 

If nothing in terms of port and networking has changed in 1.2, then why the 
above is happening? I can constantly reproduce it. 

This also affects gossip. If you don't have the JMX Ports open on public ips, 
then gossip would not see any node except itself after a snap restart of all 
nodes all at once. 



  was (Author: arya):
I narrowed this down to JMX port range, and they must be opened on the 
public IPs. Here are the steps to reproduce:

This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in us-west-2
Using Ec2MultiRegionSnitch and SSL enabled for DC_ONLY and 
NetworkTopologyStrategy with strategy_options: us-east-1:3;us-west-2:3;
C* instances have a security group called 'cluster1'
security group 'cluster1' in each region is configured as such
Allow TCP:
7199 from cluster1 (JMX)
1024 - 65535 from cluster1 (JMX Random Ports)
7100 from cluster1 (Configured Normal Storage)
7103 from cluster1 (Configured SSL Storage)
9160 from cluster1 (Configured Thrift RPC Port)
9160 from client_group
foreach node's public IP we also have this rule set to enable cross region 
comminication:
7103 from public_ip

The above is a functioning and happy setup. You run repair, and it finishes 
successfully.

Broken Setup:

Upgrade to 1.2.4 without changing any of the above security group settings:

Run repair. The repair will not receive the Merkle Tree for itself. Thus 
hanging. See description. The test in description was done without having the 
other region up, but settings were exactly  the same.

Now for each public_ip add a security group rule as such to cluster1 security 
group:

Allow TCP: 1024 - 65535 from public_ip

Run repair. Things will magically work now. 

If nothing in terms of port and networking has changed in 1.2, then why the 
above is happening? I can constantly reproduce it. 

This also affects gossip. If you don't have the JMX Ports open on public ips, 
then gossip would not see any node except itself after a snap restart of all 
nodes all at once. 


  
 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting 

[jira] [Resolved] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5432.
---

Resolution: Invalid

Gossip does not touch JMX.  JMX is not used internally at all; it's only there 
to let nodetool invoke methods.

Please see the user mailing list for troubleshooting help; Jira is not a good 
place for that.

 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633492#comment-13633492
 ] 

Arya Goudarzi commented on CASSANDRA-5432:
--

I have used the IRC channel already. It was suggested to me to open a JIRA 
ticket as no one could help.

 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633502#comment-13633502
 ] 

Arya Goudarzi commented on CASSANDRA-5432:
--

I added a correction. It is not JMX, Jonathan; you are right. It is opening the 
non-SSL storage port on the public IPs that fixes it. We didn't have to do this on 
1.1.10.

 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 Starting repair command #4, repairing 1 ranges for keyspace 
 cardspring_production
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,789 AntiEntropyService.java 
 (line 652) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] new session: will 
 sync /X.X.X.190, /X.X.X.43, /X.X.X.56 on range 
 (1808575600,42535295865117307932921825930779602032] for 
 keyspace_production.[comma separated list of CFs]
  INFO [AntiEntropySessions:7] 2013-04-05 23:30:27,790 AntiEntropyService.java 
 (line 858) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] requesting merkle 
 trees for BusinessConnectionIndicesEntries (to [/X.X.X.43, /X.X.X.56, 
 /X.X.X.190])
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,086 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.43
  INFO [AntiEntropyStage:1] 2013-04-05 23:30:28,147 AntiEntropyService.java 
 (line 214) [repair #cc5a9aa0-9e48-11e2-98ba-11bde7670242] Received merkle 
 tree for ColumnFamilyName from /X.X.X.56
 Please advise. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5432) Repair Freeze/Gossip Invisibility Issues 1.2.4

2013-04-16 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633414#comment-13633414
 ] 

Arya Goudarzi edited comment on CASSANDRA-5432 at 4/16/13 10:38 PM:


I narrowed this down to non-ssl storage port, and they must be opened on the 
public IPs. Here are the steps to reproduce:

This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in us-west-2
Using Ec2MultiRegionSnitch and SSL enabled for DC_ONLY and 
NetworkTopologyStrategy with strategy_options: us-east-1:3;us-west-2:3;
C* instances have a security group called 'cluster1'
security group 'cluster1' in each region is configured as such
Allow TCP:
7199 from cluster1 (JMX)
1024 - 65535 from cluster1 (JMX Random Ports)
7100 from cluster1 (Configured Normal Storage)
7103 from cluster1 (Configured SSL Storage)
9160 from cluster1 (Configured Thrift RPC Port)
9160 from client_group
foreach node's public IP we also have this rule set to enable cross region 
comminication:
7103 from public_ip

The above is a functioning and happy setup. You run repair, and it finishes 
successfully.

Broken Setup:

Upgrade to 1.2.4 without changing any of the above security group settings:

Run repair. The repair will not receive the Merkle Tree for itself. Thus 
hanging. See description. The test in description was done with one region with 
strategy of us-east-1:3, but other settings were exactly the same.

Now for each public_ip add a security group rule as such to cluster1 security 
group:

Allow TCP: 7100 from public_ip

Run repair. Things will magically work now. 

If nothing in terms of port and networking has changed in 1.2, then why the 
above is happening? I can constantly reproduce it. 

This also affects gossip. If you don't have the JMX Ports open on public ips, 
then gossip would not see any node except itself after a snap restart of all 
nodes all at once. 



  was (Author: arya):
I narrowed this down to JMX port range, and they must be opened on the 
public IPs. Here are the steps to reproduce:

This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in us-west-2
Using Ec2MultiRegionSnitch and SSL enabled for DC_ONLY and 
NetworkTopologyStrategy with strategy_options: us-east-1:3;us-west-2:3;
C* instances have a security group called 'cluster1'
security group 'cluster1' in each region is configured as such
Allow TCP:
7199 from cluster1 (JMX)
1024 - 65535 from cluster1 (JMX Random Ports)
7100 from cluster1 (Configured Normal Storage)
7103 from cluster1 (Configured SSL Storage)
9160 from cluster1 (Configured Thrift RPC Port)
9160 from client_group
foreach node's public IP we also have this rule set to enable cross region 
comminication:
7103 from public_ip

The above is a functioning and happy setup. You run repair, and it finishes 
successfully.

Broken Setup:

Upgrade to 1.2.4 without changing any of the above security group settings:

Run repair. The repair will not receive the Merkle Tree for itself. Thus 
hanging. See description. The test in description was done with one region with 
strategy of us-east-1:3, but other settings were exactly the same.

Now for each public_ip add a security group rule as such to cluster1 security 
group:

Allow TCP: 1024 - 65535 from public_ip

Run repair. Things will magically work now. 

If nothing in terms of port and networking has changed in 1.2, then why the 
above is happening? I can constantly reproduce it. 

This also affects gossip. If you don't have the JMX Ports open on public ips, 
then gossip would not see any node except itself after a snap restart of all 
nodes all at once. 


  
 Repair Freeze/Gossip Invisibility Issues 1.2.4
 --

 Key: CASSANDRA-5432
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5432
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Ubuntu 10.04.1 LTS
 C* 1.2.3
 Sun Java 6 u43
 JNA Enabled
 Not using VNodes
Reporter: Arya Goudarzi
Priority: Critical

 Read comment 6. This description summarizes the repair issue only, but I 
 believe there is a bigger problem going on with networking as described on 
 that comment. 
 Since I have upgraded our sandbox cluster, I am unable to run repair on any 
 node and I am reaching our gc_grace seconds this weekend. Please help. So 
 far, I have tried the following suggestions:
 - nodetool scrub
 - offline scrub
 - running repair on each CF separately. Didn't matter. All got stuck the same 
 way.
 The repair command just gets stuck and the machine is idling. Only the 
 following logs are printed for repair job:
  INFO [Thread-42214] 2013-04-05 23:30:27,785 StorageService.java (line 2379) 
 

[jira] [Updated] (CASSANDRA-5481) CQLSH exception handling could leave a session in a bad state

2013-04-16 Thread Jordan Pittier (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan Pittier updated CASSANDRA-5481:
--

Attachment: 5481.diff

Patch attached.

It also removes the need to escape the keyspace name, since that is now handled by 
the driver.

 CQLSH exception handling could leave a session in a bad state
 -

 Key: CASSANDRA-5481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5481
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4
 Environment: cqlsh 2.3.0 | Cassandra 1.2.4 | CQL spec 3.0.0 | Thrift 
 protocol 19.35.0
Reporter: Jordan Pittier
Priority: Minor
 Attachments: 5481.diff, CQLSession.png


 Playing with CTRL+C in a cqlsh session can leave the (Thrift|Native) 
 connection in a bad state.
 To reproduce :
 1) Run a long running COPY FROM command (COPY test (k, v) FROM 
 '/tmp/test.csv')
 2) Interrupt the importer with CTRL+C
 Repeat step 1 and 2 until you start seeing weird things in the cql shell (see 
 attached screenshot)
 The reason is, I believe, that the connection (and the cursor) is not correctly 
 closed and reopened on interruption.
 I am working to propose a fix.
 Jordan

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: updated for 1.1.11 release

2013-04-16 Thread eevans
Updated Branches:
  refs/heads/cassandra-1.1 6db8ac389 - d939a0c95


updated for 1.1.11 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d939a0c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d939a0c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d939a0c9

Branch: refs/heads/cassandra-1.1
Commit: d939a0c958d36a3debfc63364a3fa569aa632c6e
Parents: 6db8ac3
Author: Eric Evans eev...@apache.org
Authored: Tue Apr 16 19:08:40 2013 -0500
Committer: Eric Evans eev...@apache.org
Committed: Tue Apr 16 19:08:40 2013 -0500

--
 build.xml|2 +-
 debian/changelog |6 ++
 2 files changed, 7 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d939a0c9/build.xml
--
diff --git a/build.xml b/build.xml
index 6fa5172..c81af1f 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 <property name="debuglevel" value="source,lines,vars"/>

 <!-- default version and SCM information -->
-<property name="base.version" value="1.1.10"/>
+<property name="base.version" value="1.1.11"/>
 <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d939a0c9/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 46d1781..739e38a 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (1.1.11) unstable; urgency=low
+
+  * New release
+
+ -- Eric Evans eev...@apache.org, 16 Apr 2013 18:56:03 -0500
+
 cassandra (1.1.10) unstable; urgency=low
 
   * New release



Git Push Summary

2013-04-16 Thread eevans
Updated Tags:  refs/tags/1.1.11-tentative [created] d939a0c95


git commit: replace measureDeep in key cache with custom calculation patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-4860

2013-04-16 Thread vijay
Updated Branches:
  refs/heads/cassandra-1.2 40e7aba6b -> da93a1cfe


replace measureDeep in key cache with custom calculation
patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-4860


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da93a1cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da93a1cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da93a1cf

Branch: refs/heads/cassandra-1.2
Commit: da93a1cfe483a1522b2c149d287279a74e43a8a9
Parents: 40e7aba
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Tue Apr 16 18:24:11 2013 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Tue Apr 16 18:24:11 2013 -0700

--
 src/java/org/apache/cassandra/cache/CacheKey.java  |2 +-
 .../cassandra/cache/ConcurrentLinkedHashCache.java |   11 +-
 .../apache/cassandra/cache/IMeasurableMemory.java  |6 +
 .../org/apache/cassandra/cache/IRowCacheEntry.java |2 +-
 .../org/apache/cassandra/cache/KeyCacheKey.java|7 +
 .../org/apache/cassandra/cache/RowCacheKey.java|7 +
 .../apache/cassandra/cache/RowCacheSentinel.java   |8 +
 src/java/org/apache/cassandra/db/ColumnFamily.java |5 +
 src/java/org/apache/cassandra/db/DeletionInfo.java |9 +
 src/java/org/apache/cassandra/db/DeletionTime.java |7 +
 .../org/apache/cassandra/db/RowIndexEntry.java |   19 ++-
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +
 .../org/apache/cassandra/utils/ObjectSizes.java|  204 +++
 .../apache/cassandra/cache/CacheProviderTest.java  |   33 ++-
 .../org/apache/cassandra/cache/ObjectSizeTest.java |   83 ++
 15 files changed, 390 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da93a1cf/src/java/org/apache/cassandra/cache/CacheKey.java
--
diff --git a/src/java/org/apache/cassandra/cache/CacheKey.java 
b/src/java/org/apache/cassandra/cache/CacheKey.java
index 5743dfc..aa9f5f6 100644
--- a/src/java/org/apache/cassandra/cache/CacheKey.java
+++ b/src/java/org/apache/cassandra/cache/CacheKey.java
@@ -19,7 +19,7 @@ package org.apache.cassandra.cache;
 
 import org.apache.cassandra.utils.Pair;
 
-public interface CacheKey
+public interface CacheKey extends IMeasurableMemory
 {
 /**
  * @return The keyspace and ColumnFamily names to which this key belongs

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da93a1cf/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
--
diff --git a/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java 
b/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
index 0f992d3..30cb958 100644
--- a/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
+++ b/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
@@ -19,18 +19,15 @@ package org.apache.cassandra.cache;
 
 import java.util.Set;
 
-import org.github.jamm.MemoryMeter;
-
 import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
 import com.googlecode.concurrentlinkedhashmap.EntryWeigher;
 
 /** Wrapper so CLHM can implement ICache interface.
  *  (this is what you get for making library classes final.) */
-public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
+public class ConcurrentLinkedHashCache<K extends IMeasurableMemory, V extends IMeasurableMemory> implements ICache<K, V>
 {
     public static final int DEFAULT_CONCURENCY_LEVEL = 64;
     private final ConcurrentLinkedHashMap<K, V> map;
-    private static final MemoryMeter meter = new MemoryMeter().omitSharedBufferOverhead();
 
     private ConcurrentLinkedHashCache(ConcurrentLinkedHashMap<K, V> map)
     {
@@ -40,7 +37,7 @@ public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
     /**
      * Initialize a cache with initial capacity with weightedCapacity
      */
-    public static <K, V> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity, EntryWeigher<K, V> entryWeiger)
+    public static <K extends IMeasurableMemory, V extends IMeasurableMemory> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity, EntryWeigher<K, V> entryWeiger)
     {
         ConcurrentLinkedHashMap<K, V> map = new ConcurrentLinkedHashMap.Builder<K, V>()
                                             .weigher(entryWeiger)
@@ -51,13 +48,13 @@ public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
         return new ConcurrentLinkedHashCache<K, V>(map);
     }
 
-    public static <K, V> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity)
+    public static <K extends IMeasurableMemory, V extends IMeasurableMemory> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity)
     {
         return 

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-04-16 Thread vijay
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/ColumnFamily.java
src/java/org/apache/cassandra/db/RowIndexEntry.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a6fbc1b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a6fbc1b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a6fbc1b

Branch: refs/heads/trunk
Commit: 7a6fbc1b22a099abe368b786e9933fb5d980c329
Parents: 67f5d6f da93a1c
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Tue Apr 16 18:42:53 2013 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Tue Apr 16 18:42:53 2013 -0700

--
 src/java/org/apache/cassandra/cache/CacheKey.java  |2 +-
 .../cassandra/cache/ConcurrentLinkedHashCache.java |   11 +-
 .../apache/cassandra/cache/IMeasurableMemory.java  |6 +
 .../org/apache/cassandra/cache/IRowCacheEntry.java |2 +-
 .../org/apache/cassandra/cache/KeyCacheKey.java|7 +
 .../org/apache/cassandra/cache/RowCacheKey.java|7 +
 .../apache/cassandra/cache/RowCacheSentinel.java   |8 +
 src/java/org/apache/cassandra/db/ColumnFamily.java |5 +
 src/java/org/apache/cassandra/db/DeletionInfo.java |9 +
 src/java/org/apache/cassandra/db/DeletionTime.java |7 +
 .../org/apache/cassandra/db/RowIndexEntry.java |   19 ++-
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +
 .../org/apache/cassandra/utils/ObjectSizes.java|  204 +++
 test/data/serialization/2.0/db.RowMutation.bin |  Bin 3599 -> 3599 bytes
 .../apache/cassandra/cache/CacheProviderTest.java  |   33 ++-
 .../org/apache/cassandra/cache/ObjectSizeTest.java |   83 ++
 16 files changed, 390 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a6fbc1b/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamily.java
index e50e396,6164900..fffdfe4
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@@ -286,6 -266,22 +286,11 @@@ public abstract class ColumnFamily impl
  return null;
  }
  
 -/** the size of user-provided data, not including internal overhead */
 -int dataSize()
 -{
 -int size = deletionInfo().dataSize();
 -for (IColumn column : columns)
 -{
 -size += column.dataSize();
 -}
 -return size;
 -}
 -
+ public long memorySize()
+ {
+ return ObjectSizes.measureDeep(this);
+ }
+ 
  public long maxTimestamp()
  {
  long maxTimestamp = deletionInfo().maxTimestamp();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a6fbc1b/src/java/org/apache/cassandra/db/DeletionInfo.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a6fbc1b/src/java/org/apache/cassandra/db/DeletionTime.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a6fbc1b/src/java/org/apache/cassandra/db/RowIndexEntry.java
--
diff --cc src/java/org/apache/cassandra/db/RowIndexEntry.java
index 2a8dcfe,a831498..aeaec44
--- a/src/java/org/apache/cassandra/db/RowIndexEntry.java
+++ b/src/java/org/apache/cassandra/db/RowIndexEntry.java
@@@ -27,8 -28,11 +28,9 @@@ import org.apache.cassandra.cache.IMeas
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.IndexHelper;
  import org.apache.cassandra.io.util.FileUtils;
 -import org.apache.cassandra.utils.IFilter;
 -import org.apache.cassandra.utils.FilterFactory;
+ import org.apache.cassandra.utils.ObjectSizes;
  
- public class RowIndexEntry
+ public class RowIndexEntry implements IMeasurableMemory
  {
  public static final Serializer serializer = new Serializer();
  
@@@ -64,21 -68,33 +66,27 @@@
  
  public List<IndexHelper.IndexInfo> columnsIndex()
  {
 -    return Collections.<IndexHelper.IndexInfo>emptyList();
 -}
 -
 -public IFilter bloomFilter()
 -{
 -    throw new UnsupportedOperationException();
 +    return Collections.emptyList();
  }
  
+ public long memorySize()
+ {
+     long fields = TypeSizes.NATIVE.sizeof(position) + ObjectSizes.getReferenceSize();
+     return ObjectSizes.getFieldSize(fields);
+ }
+ 
  public static class Serializer
  {
 -    public void serialize(RowIndexEntry rie, DataOutput dos) throws IOException
 +    public void serialize(RowIndexEntry rie, DataOutput out) throws IOException
  {
 -

[1/2] git commit: replace measureDeep in key cache with custom calculation patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-4860

2013-04-16 Thread vijay
Updated Branches:
  refs/heads/trunk 67f5d6f1c -> 7a6fbc1b2


replace measureDeep in key cache with custom calculation
patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-4860


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da93a1cf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da93a1cf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da93a1cf

Branch: refs/heads/trunk
Commit: da93a1cfe483a1522b2c149d287279a74e43a8a9
Parents: 40e7aba
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Tue Apr 16 18:24:11 2013 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Tue Apr 16 18:24:11 2013 -0700

--
 src/java/org/apache/cassandra/cache/CacheKey.java  |2 +-
 .../cassandra/cache/ConcurrentLinkedHashCache.java |   11 +-
 .../apache/cassandra/cache/IMeasurableMemory.java  |6 +
 .../org/apache/cassandra/cache/IRowCacheEntry.java |2 +-
 .../org/apache/cassandra/cache/KeyCacheKey.java|7 +
 .../org/apache/cassandra/cache/RowCacheKey.java|7 +
 .../apache/cassandra/cache/RowCacheSentinel.java   |8 +
 src/java/org/apache/cassandra/db/ColumnFamily.java |5 +
 src/java/org/apache/cassandra/db/DeletionInfo.java |9 +
 src/java/org/apache/cassandra/db/DeletionTime.java |7 +
 .../org/apache/cassandra/db/RowIndexEntry.java |   19 ++-
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +
 .../org/apache/cassandra/utils/ObjectSizes.java|  204 +++
 .../apache/cassandra/cache/CacheProviderTest.java  |   33 ++-
 .../org/apache/cassandra/cache/ObjectSizeTest.java |   83 ++
 15 files changed, 390 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da93a1cf/src/java/org/apache/cassandra/cache/CacheKey.java
--
diff --git a/src/java/org/apache/cassandra/cache/CacheKey.java 
b/src/java/org/apache/cassandra/cache/CacheKey.java
index 5743dfc..aa9f5f6 100644
--- a/src/java/org/apache/cassandra/cache/CacheKey.java
+++ b/src/java/org/apache/cassandra/cache/CacheKey.java
@@ -19,7 +19,7 @@ package org.apache.cassandra.cache;
 
 import org.apache.cassandra.utils.Pair;
 
-public interface CacheKey
+public interface CacheKey extends IMeasurableMemory
 {
 /**
  * @return The keyspace and ColumnFamily names to which this key belongs

http://git-wip-us.apache.org/repos/asf/cassandra/blob/da93a1cf/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
--
diff --git a/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java 
b/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
index 0f992d3..30cb958 100644
--- a/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
+++ b/src/java/org/apache/cassandra/cache/ConcurrentLinkedHashCache.java
@@ -19,18 +19,15 @@ package org.apache.cassandra.cache;
 
 import java.util.Set;
 
-import org.github.jamm.MemoryMeter;
-
 import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
 import com.googlecode.concurrentlinkedhashmap.EntryWeigher;
 
 /** Wrapper so CLHM can implement ICache interface.
  *  (this is what you get for making library classes final.) */
-public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
+public class ConcurrentLinkedHashCache<K extends IMeasurableMemory, V extends IMeasurableMemory> implements ICache<K, V>
 {
     public static final int DEFAULT_CONCURENCY_LEVEL = 64;
     private final ConcurrentLinkedHashMap<K, V> map;
-    private static final MemoryMeter meter = new MemoryMeter().omitSharedBufferOverhead();
 
     private ConcurrentLinkedHashCache(ConcurrentLinkedHashMap<K, V> map)
     {
@@ -40,7 +37,7 @@ public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
     /**
      * Initialize a cache with initial capacity with weightedCapacity
      */
-    public static <K, V> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity, EntryWeigher<K, V> entryWeiger)
+    public static <K extends IMeasurableMemory, V extends IMeasurableMemory> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity, EntryWeigher<K, V> entryWeiger)
    {
         ConcurrentLinkedHashMap<K, V> map = new ConcurrentLinkedHashMap.Builder<K, V>()
                                             .weigher(entryWeiger)
@@ -51,13 +48,13 @@ public class ConcurrentLinkedHashCache<K, V> implements ICache<K, V>
         return new ConcurrentLinkedHashCache<K, V>(map);
     }
 
-    public static <K, V> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity)
+    public static <K extends IMeasurableMemory, V extends IMeasurableMemory> ConcurrentLinkedHashCache<K, V> create(long weightedCapacity)
     {
         return create(weightedCapacity, new 

[jira] [Resolved] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)

2013-04-16 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay resolved CASSANDRA-4860.
--

Resolution: Fixed

Wow, that sounds a lot better than my cryptic explanation :)

Committed to 1.2 and trunk. Thanks, Ryan and Jonathan!
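
For readers following along, a minimal sketch of the kind of hand-rolled size 
accounting this ticket moves to, in place of a reflective 
MemoryMeter.measureDeep() walk. The header, reference, and array constants 
below are illustrative assumptions for a 64-bit JVM, not the values the 
committed ObjectSizes class actually uses:

{code}
// Illustrative only: rough per-entry accounting in the spirit of the committed
// approach. Constants are assumptions, not Cassandra's real ObjectSizes values.
public final class KeyCacheEntrySizeSketch
{
    private static final long OBJECT_HEADER = 16; // assumed object header size
    private static final long REFERENCE     = 8;  // assumed reference size
    private static final long ARRAY_HEADER  = 20; // assumed array header incl. length

    private static long roundUp(long bytes)
    {
        return (bytes + 7) & ~7L; // JVM objects are padded to 8-byte boundaries
    }

    /** e.g. a key-cache key: a byte[] key plus one long position field. */
    public static long estimate(byte[] key)
    {
        long keyArray = roundUp(ARRAY_HEADER + key.length);     // the byte[] itself
        long entry    = roundUp(OBJECT_HEADER + REFERENCE + 8); // ref to key + long position
        return keyArray + entry;
    }

    public static void main(String[] args)
    {
        // A ~67-byte row key (the mean key size reported below) comes out well
        // over 100 bytes per entry, nothing like a flat 24.
        System.out.println(estimate(new byte[67]));
    }
}
{code}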

 Estimated Row Cache Entry size incorrect (always 24?)
 -

 Key: CASSANDRA-4860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0, 1.2.3, 2.0
Reporter: Chris Burroughs
Assignee: Vijay
 Fix For: 1.2.0 beta 3

 Attachments: 0001-4860-v2.patch, 0001-4860-v3.patch, 
 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, 
 4860-perf-test.zip, trunk-4860-revert.patch


 After running for several hours, the RowCacheSize was suspiciously low (i.e. 
 70-something MB). I used CASSANDRA-4859 to measure the size and number of 
 entries on a node:
 In [3]: 1560504./65021
 Out[3]: 24.0
 In [4]: 2149464./89561
 Out[4]: 24.0
 In [6]: 7216096./300785
 Out[6]: 23.990877204647838
 That's RowCacheSize/RowCacheNumEntries. Just to prove I don't have crazy 
 small rows: the mean size of the row *keys* in the saved cache is 67 and the 
 compacted row mean size is 355. No jamm errors in the log.
 Config notes:
 row_cache_provider: ConcurrentLinkedHashCacheProvider
 row_cache_size_in_mb: 2048
 Version info:
  * C*: 1.1.6
  * centos 2.6.32-220.13.1.el6.x86_64
  * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5469) Race condition between index building and scrubDirectories() at startup

2013-04-16 Thread Boris Yen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633672#comment-13633672
 ] 

Boris Yen commented on CASSANDRA-5469:
--

It looks like the patch for CASSANDRA-5350 might work for this issue as well. As 
long as the MeteredFlusher is scheduled after the scrub, there should be no 
race condition. At least the main thread would not be deleting the files while 
the sstables are being opened.
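
A minimal sketch of the ordering being suggested here (hypothetical names, not 
the actual CassandraDaemon/SecondaryIndexManager code): queue index-rebuild 
tasks while column families are being created and only submit them once the 
directory scrub has finished, so the scrub never races with the rebuilds' tmp 
files:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the deferred-submission idea only.
public final class DeferredIndexBuildSketch
{
    private final List<Runnable> pendingIndexBuilds = new ArrayList<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Called while column families are being created: queue instead of submitting. */
    public void scheduleIndexBuild(Runnable buildTask)
    {
        pendingIndexBuilds.add(buildTask);
    }

    /** Called once at the end of startup, after the data directories were scrubbed. */
    public void startupComplete()
    {
        for (Runnable task : pendingIndexBuilds)
            executor.submit(task);
        pendingIndexBuilds.clear();
    }
}
{code}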

 Race condition between index building and scrubDirectories() at startup
 ---

 Key: CASSANDRA-5469
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5469
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.12, 1.1.10, 1.2.4
Reporter: amorton

 From user group 
 http://www.mail-archive.com/user@cassandra.apache.org/msg29207.html
 In CassandraDaemon.setup() the call to SystemTable.checkHealth() results in 
 the CFSs being created. As part of their creation, they kick off an async 
 secondary index build if the index is not marked as built 
 (SecondaryIndexManager.addIndexedColumn()). Later in CD.setup() the call is 
 made to scrub the data dirs, and this can race with the tmp files created by 
 the index rebuild. The result is an error that prevents the node from starting.
 Should we delay rebuilding secondary indexes until after startup has 
 completed, or rebuild them synchronously?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira