[jira] [Created] (CASSANDRA-8420) Application getting time to open on other node when one node goes down

2014-12-04 Thread Shamim Khan (JIRA)
Shamim Khan created CASSANDRA-8420:
--

 Summary: Application getting time to open on other node when one 
node goes down
 Key: CASSANDRA-8420
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8420
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Shamim Khan


Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:152) 
~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:122) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
~[na:1.7.0_51]
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
 ~[hector-client_3.5.1.jar:na]
... 45 common frames omitted
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}

2014-12-04 13:59:45.850 WARN  m.p.c.connection.HConnectionManager - Could not 
fullfill request on this host CassandraClienthost1_priv:1123456-1231
2014-12-04 13:59:45.852 WARN  m.p.c.connection.HConnectionManager - Exception:
me.prettyprint.hector.api.exceptions.HTimedOutException: 
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:37)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
 [hector-client_3.5.1.jar:na]
at 
com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:88)
 [cassandra.xa_3.5.1.jar:na]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread madheswaran (JIRA)
madheswaran created CASSANDRA-8421:
--

 Summary: Cassandra 2.1.1 UDT not returning value for LIST type as 
UDT
 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.2, 2.1 rc6




I am using a LIST column whose element type is a UDT.

UDT:

CREATE TYPE fieldmap (
  key text,
  value text
);

TABLE:

CREATE TABLE entity (
  entity_id uuid PRIMARY KEY,
  begining int,
  domain text,
  domain_type text,
  entity_template_name text,
  field_values list<fieldmap>,
  global_entity_type text,
  revision_time timeuuid,
  status_key int,
  status_name text,
  uuid timeuuid
);
INDEX:

CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);

CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);

CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );

QUERY

SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
'S4_1017.abc.com' allow filtering;

The above query returns values for some rows but not for many others, even 
though those rows and their data exist.

Observation:
If I query on columns other than field_values, values are returned. I suspect 
the problem is with LIST of UDT.

I have a single-node Cassandra DB. Please let me know why Cassandra behaves 
this strangely.





[jira] [Created] (CASSANDRA-8422) cassandra won't start up due to Unable to gossip with any seeds on the decommissioned node

2014-12-04 Thread Masashi Ozawa (JIRA)
Masashi Ozawa created CASSANDRA-8422:


 Summary: cassandra won't start up due to Unable to gossip with 
any seeds on the decommissioned node
 Key: CASSANDRA-8422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8422
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Masashi Ozawa


- 2-node
  * nodeA - seed
  * nodeB

1. decommission nodeB from the cluster with nodetool
   when it's finished, kill cassandra process on nodeB
2. delete data from commit/cache/data directories on nodeB
3. try to start cassandra on nodeB (first time)
   => FAILED with Unable to gossip with any seeds
4. try to start cassandra on nodeB (second time)
   => OK

It was not a one-time shot. I tried it several times and encountered the same 
issue each time.

ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 INFO [StorageServiceShutdownHook] 2014-11-27 18:44:55,076 Gossiper.java (line 
1307) Announcing shutdown


On the first attempt, it looks like the recommissioned node (goo184) did not 
receive a GossipDigestAckMessage from the seed (test130). I don't know why.





[jira] [Commented] (CASSANDRA-8422) cassandra won't start up due to Unable to gossip with any seeds on the decommissioned node

2014-12-04 Thread Masashi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234085#comment-14234085
 ] 

Masashi Ozawa commented on CASSANDRA-8422:
--

A similar issue was reported in CASSANDRA-7350, which says it was fixed by 
CASSANDRA-6523 in 2.0.9. However, it is still happening in my environment for 
some reason.






[jira] [Updated] (CASSANDRA-8420) Application getting time to open on other node when one node goes down

2014-12-04 Thread Shamim Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shamim Khan updated CASSANDRA-8420:
---
Description: 
I have a 3-node cluster, and my application is deployed against all 3 nodes. 
When I take one node down, the application takes around 5 minutes to fail over 
to another node, and it throws the error below.

Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:152) 
~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:122) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
~[na:1.7.0_51]
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
 ~[hector-client_3.5.1.jar:na]
... 45 common frames omitted
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}

2014-12-04 13:59:45.850 WARN  m.p.c.connection.HConnectionManager - Could not 
fullfill request on this host CassandraClienthost1_priv:1123456-1231
2014-12-04 13:59:45.852 WARN  m.p.c.connection.HConnectionManager - Exception:
me.prettyprint.hector.api.exceptions.HTimedOutException: 
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:37)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
 [hector-client_3.5.1.jar:na]
at 
com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:88)
 [cassandra.xa_3.5.1.jar:na]

[jira] [Commented] (CASSANDRA-8346) Paxos operation can use stale data during multiple range movements

2014-12-04 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234140#comment-14234140
 ] 

sankalp kohli commented on CASSANDRA-8346:
--

Let me review it. 

 Paxos operation can use stale data during multiple range movements
 --

 Key: CASSANDRA-8346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8346
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 8346.txt


 Paxos operations correctly account for pending ranges for all operation 
 pertaining to the Paxos state, but those pending ranges are not taken into 
 account when reading the data to check for the conditions or during a serial 
 read. It's thus possible to break the LWT guarantees by reading a stale 
 value.  This require 2 node movements (on the same token range) to be a 
 problem though.
 Basically, we have {{RF}} replicas + {{P}} pending nodes. For the Paxos 
 prepare/propose phases, the number of required participants (the Paxos 
 QUORUM) is {{(RF + P + 1) / 2}} ({{SP.getPaxosParticipants}}), but the read 
 done to check conditions or for serial reads is done at a normal QUORUM (or 
 LOCAL_QUORUM), and so a weaker {{(RF + 1) / 2}}. We have a problem if it's 
 possible that said read can read only from nodes that were not part of the 
 paxos participants, and so we have a problem if:
 {noformat}
 normal quorum == (RF + 1) / 2 <= (RF + P) - ((RF + P + 1) / 2) == 
 participants considered - blocked for
 {noformat}
 We're good if {{P = 0}} or {{P = 1}} since this inequality gives us 
 respectively {{RF + 1 <= RF - 1}} and {{RF + 1 <= RF}}, both of which are 
 impossible. But at {{P = 2}} (2 pending nodes), this inequality is equivalent 
 to {{RF <= RF}} and so we might read stale data.
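The quorum arithmetic above can be checked numerically. A short Python sketch (the function names are mine; it assumes quorum(n) = n // 2 + 1 for both the Paxos participant count and a normal QUORUM read, which matches the integer arithmetic described in the ticket):

```python
def quorum(n):
    # Majority of n nodes: n // 2 + 1.
    return n // 2 + 1

def stale_read_possible(rf, pending):
    # A serial read can be stale if a normal QUORUM can be satisfied
    # entirely by nodes the Paxos round was NOT required to hear from.
    paxos_blocked = quorum(rf + pending)          # Paxos participants waited on
    not_blocked = (rf + pending) - paxos_blocked  # nodes Paxos may have skipped
    return quorum(rf) <= not_blocked              # normal QUORUM fits there?

for rf in (3, 5):
    assert not stale_read_possible(rf, 0)  # P = 0: safe
    assert not stale_read_possible(rf, 1)  # P = 1: safe
    assert stale_read_possible(rf, 2)      # P = 2: stale reads possible
```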





[jira] [Commented] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234205#comment-14234205
 ] 

Philip Thompson commented on CASSANDRA-8405:


Would it be possible to add an extra timestamp column that contains the date of 
the distant future time?

 Is there a way to override the current MAX_TTL value from 20 yrs to a value > 
 20 yrs.
 -

 Key: CASSANDRA-8405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: Linux(RH)
Reporter: Parth Setya
Priority: Blocker
  Labels: MAX_TTL, date, expiration, ttl

 We are migrating data from Oracle to C*.
 The expiration date for a certain column was set to 90 years in Oracle.
 Here we are not able to make that value go beyond 20 years.
 Could you recommend a way to override this value?
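For context, the cap being discussed is 20 years expressed in seconds. A small Python sketch (the constant value matches a 20-year cap; the clamping helper is hypothetical, illustrating what a migration from a 90-year Oracle expiry would have to do):

```python
# 20 years in seconds -- the TTL ceiling under discussion.
MAX_TTL_SECONDS = 20 * 365 * 24 * 60 * 60  # 630720000

def clamp_ttl(requested_seconds):
    # Hypothetical migration helper: values beyond the cap must be
    # clamped (or handled out-of-band), since larger TTLs are rejected.
    return min(requested_seconds, MAX_TTL_SECONDS)

ninety_years = 90 * 365 * 24 * 60 * 60
assert MAX_TTL_SECONDS == 630720000
assert clamp_ttl(ninety_years) == MAX_TTL_SECONDS
```

An alternative, as suggested in the comment above, is to store the distant expiry as a plain timestamp column and enforce it at the application layer.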





[jira] [Assigned] (CASSANDRA-8215) Empty IN Clause still returns data

2014-12-04 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-8215:
-

Assignee: Benjamin Lerer  (was: Tyler Hobbs)

 Empty IN Clause still returns data
 --

 Key: CASSANDRA-8215
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8215
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
 Fix For: 3.0


 The dtest cql_tests.py:TestCQL.empty_in_test is failing on trunk HEAD but not 
 on 2.1-HEAD.
 The test uses the following table: {code} CREATE TABLE test (k1 int, k2 int, 
 v int, PRIMARY KEY (k1, k2)) {code} then performs a number of inserts.
 The test then asserts that {code} SELECT v FROM test WHERE k1 = 0 AND k2 IN 
 () {code} returns no data; however, it is returning every row where k1 = 0. 
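A toy Python model (entirely mine, not the dtest itself) of the asserted semantics: an empty IN list should match nothing, rather than degenerating to the k1 = 0 restriction alone:

```python
# Rows are (k1, k2, v) tuples, standing in for the (k1, k2, v) table.
rows = [(0, 0, 10), (0, 1, 20), (1, 0, 30)]

def select_v(rows, k1, k2_in):
    # Model of: SELECT v FROM test WHERE k1 = ? AND k2 IN (...)
    return [v for (a, b, v) in rows if a == k1 and b in k2_in]

assert select_v(rows, 0, []) == []       # empty IN: no rows at all
assert select_v(rows, 0, [1]) == [20]    # non-empty IN behaves normally
```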





[jira] [Updated] (CASSANDRA-8422) cassandra won't start up due to Unable to gossip with any seeds on the decommissioned node

2014-12-04 Thread Masashi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masashi Ozawa updated CASSANDRA-8422:
-
Description: 
- 2-node
  * nodeA - seed
  * nodeB

1. decommission nodeB from the cluster with nodetool
   when it's finished, kill cassandra process on nodeB
2. delete data from commit/cache/data directories on nodeB
3. try to start cassandra on nodeB (first time)
   => FAILED with Unable to gossip with any seeds
4. try to start cassandra on nodeB (second time)
   => OK

It was not a one-time shot. I tried it several times and encountered the same 
issue each time.

ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 INFO [StorageServiceShutdownHook] 2014-11-27 18:44:55,076 Gossiper.java (line 
1307) Announcing shutdown


  was:
- 2-node
  * nodeA - seed
  * nodeB

1. decommission nodeB from the cluster with nodetool
   when it's finished, kill cassandra process on nodeB
2. delete data from commit/cache/data directories on nodeB
3. try to start cassandra on nodeB (first time)
   => FAILED with Unable to gossip with any seeds
4. try to start cassandra on nodeB (second time)
   => OK

It was not a one-time shot. I tried it several times and encountered the same 
issue each time.

ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 INFO [StorageServiceShutdownHook] 2014-11-27 18:44:55,076 Gossiper.java (line 
1307) Announcing shutdown


On the first attempt, it looks like the recommissioned node (goo184) did not 
receive a GossipDigestAckMessage from the seed (test130). I don't know why.



[jira] [Commented] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234224#comment-14234224
 ] 

Benjamin Lerer commented on CASSANDRA-8418:
---

The test was in fact wrong. The query requires ALLOW FILTERING; that is why I 
changed the behavior during the refactoring.
I have updated the dtest.

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.





[jira] [Resolved] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-8418.
---
Resolution: Fixed






[jira] [Updated] (CASSANDRA-7069) Prevent operator mistakes due to simultaneous bootstrap

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7069:
---
Tester: Philip Thompson

 Prevent operator mistakes due to simultaneous bootstrap
 ---

 Key: CASSANDRA-7069
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7069
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.1.1, 3.0

 Attachments: 7069.txt


 Cassandra has always had the '2 minute rule' between beginning topology 
 changes to ensure the range announcement is known to all nodes before the 
 next one begins.  Trying to bootstrap a bunch of nodes simultaneously is a 
 common mistake and seems to be on the rise as of late.
 We can prevent users from shooting themselves in the foot this way by looking 
 for other joining nodes in the shadow round, then comparing their generation 
 against our own and if there isn't a large enough difference, bail out or 
 sleep until it is large enough.
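The proposed check can be sketched as follows (Python; the names and the threshold constant are mine, and the real implementation would live in the gossip shadow round rather than a free function). A node's gossip generation is effectively its startup timestamp, so comparing generations tells you how recently another joining node started:

```python
# The "2 minute rule" expressed as a minimum generation (startup-time) gap.
MIN_GENERATION_GAP = 120  # seconds; assumed threshold for this sketch

def safe_to_bootstrap(own_generation, joining_generations):
    # Bail out (or sleep) if any other joining node started within the
    # minimum gap of us -- i.e. a likely simultaneous bootstrap.
    return all(abs(own_generation - g) >= MIN_GENERATION_GAP
               for g in joining_generations)

assert safe_to_bootstrap(1000, [700, 600])   # other joiners are old enough
assert not safe_to_bootstrap(1000, [950])    # another node joined 50 s ago
```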





[jira] [Updated] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-12-04 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajanarayanan Thottuvaikkatumana updated CASSANDRA-7124:

Attachment: cassandra-trunk-compact-7124.txt

Patch for compact

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: 7124-wip.txt, cassandra-trunk-cleanup-7124.txt, 
 cassandra-trunk-compact-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.





[jira] [Updated] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-12-04 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajanarayanan Thottuvaikkatumana updated CASSANDRA-7124:

Attachment: (was: cassandra-trunk-cleanup-7124.txt)






[jira] [Updated] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8418:
---
Issue Type: Test  (was: Bug)

Did all of those queries need ALLOW FILTERING? I see you added it to ~5.






[jira] [Updated] (CASSANDRA-7705) Safer Resource Management

2014-12-04 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7705:
-
Reviewer: Marcus Eriksson  (was: Aleksey Yeschenko)

 Safer Resource Management
 -

 Key: CASSANDRA-7705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7705
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 We've had a spate of bugs recently with bad reference counting. These can 
 have potentially dire consequences, generally either randomly deleting data 
 or giving us infinite loops. 
 Since in 2.1 we only reference count resources that are relatively expensive 
 and infrequently managed (or in places where this safety is probably not as 
 necessary, e.g. SerializingCache), we could without any negative consequences 
 (and only slight code complexity) introduce a safer resource management 
 scheme for these more expensive/infrequent actions.
 Basically, I propose when we want to acquire a resource we allocate an object 
 that manages the reference. This can only be released once; if it is released 
 twice, we fail immediately at the second release, reporting where the bug is 
 (rather than letting it continue fine until the next correct release corrupts 
 the count). The reference counter remains the same, but we obtain guarantees 
 that the reference count itself is never badly maintained, although code 
 using it could mistakenly release its own handle early (typically this is 
 only an issue when cleaning up after a failure, in which case under the new 
 scheme this would be an innocuous error)
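A minimal sketch of the scheme described above (hypothetical names, not the actual Cassandra code): each acquire returns its own handle, and a handle fails loudly on a second release instead of silently corrupting the shared count.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the proposed scheme: each acquirer gets its own
// handle that can be released exactly once; a second release fails
// immediately at the buggy call site rather than corrupting the count.
class RefCounted {
    private final AtomicInteger count = new AtomicInteger(1);

    Handle acquire() {
        count.incrementAndGet();
        return new Handle();
    }

    class Handle {
        private final AtomicBoolean released = new AtomicBoolean(false);

        void release() {
            if (!released.compareAndSet(false, true))
                throw new IllegalStateException("double release", new Throwable());
            count.decrementAndGet();
        }
    }
}
```

Under such a scheme a double release surfaces at the offending call site, which is the reporting behavior the proposal asks for.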





[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-12-04 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234236#comment-14234236
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~yukim], thanks for the suggestions on the cleanup code. I have attached a 
patch for compact along the same lines. Please have a look at it. In this, I 
have not blocked the execution anywhere. Unlike the CleanupResult you 
implemented, I have not implemented a corresponding result type for compact, 
since I am not sure which metrics should be captured in the result object. If 
that is applicable to compaction, please let me know and I can implement the 
same.

I have tested the code, made changes to the test cases, and all the tests 
related to compaction are passing. Apart from that, I have run 
{{./bin/nodetool -h localhost compact}} and that is also working. I checked the 
JMX console and made sure that the notifications are coming through properly. 
Here are the unit tests where I made changes and verified that they pass with 
the code changes:
{code}
ant test -Dtest.name=CompactionsTest
ant test -Dtest.name=OneCompactionsTest
ant test -Dtest.name=RangeTombstoneTest
ant test -Dtest.name=KeyspaceTest
ant test -Dtest.name=SSTableReaderTest
ant test -Dtest.name=LongCompactionsTest
{code}

Please let me know if you have any suggestions on this. Thanks
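For reference, the general mechanism is roughly this (a hedged sketch using the standard javax.management API with hypothetical names, not the attached patch): an MBean helper extends NotificationBroadcasterSupport and emits a notification when a long-running operation finishes, so a client like nodetool can distinguish success from failure instead of hitting a timeout.

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// Hypothetical sketch: emit a JMX notification when a long-running
// operation completes, carrying the outcome in the message.
class OperationNotifier extends NotificationBroadcasterSupport {
    private long sequence = 0;

    void notifyFinished(String operation, boolean success) {
        Notification n = new Notification(
                "operation.complete",   // notification type
                this,                   // source
                ++sequence,             // monotonically increasing sequence number
                operation + (success ? " finished" : " failed"));
        sendNotification(n);
    }
}
```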

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.





[jira] [Commented] (CASSANDRA-7705) Safer Resource Management

2014-12-04 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234293#comment-14234293
 ] 

Ariel Weisberg commented on CASSANDRA-7705:
---

I didn't review; I just read through because it looked interesting.

Maybe not a good fit here, but one thing I found useful in a smart 
pointer/container was to use conditional compilation to support a debug build 
where the stacks of the allocator, the deallocator, and the mistaken extra 
deallocation were all stored in the reference. That made debugging a little 
easier, because errors could log that information. You could store the thread 
name as well.
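In Java this debug-build idea could be approximated with a runtime flag rather than conditional compilation; a rough sketch with hypothetical names (not taken from the patch under review):

```java
// Hypothetical debug aid: when a flag is set, record the allocation stack
// and the first-release stack in the reference, so a double release can
// log where the handle was created and first released.
class DebugRef {
    static final boolean DEBUG = Boolean.getBoolean("ref.debug"); // -Dref.debug=true
    private final Throwable allocatedAt = DEBUG ? new Throwable("allocated at") : null;
    private volatile Throwable releasedAt = null;

    void release() {
        if (releasedAt != null) {
            if (DEBUG) {
                allocatedAt.printStackTrace();   // where the handle was created
                releasedAt.printStackTrace();    // where it was first released
            }
            throw new IllegalStateException("double release");
        }
        releasedAt = new Throwable("first released at");
    }
}
```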

If references are not always shared (i.e., wrapper references), does it make 
sense to have a wrapper report an error if it is released twice?

Is using reference queues any better than just using finalization?

AbstractRefCounted.State.refs appears to be unused?

Out of scope for this ticket, but are there resources requiring deallocation 
that don't use reference counting, or that are very high traffic (once or more 
per request)?



 Safer Resource Management
 -

 Key: CASSANDRA-7705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7705
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 We've had a spate of bugs recently with bad reference counting. These can 
 have potentially dire consequences, generally either randomly deleting data 
 or giving us infinite loops. 
 Since in 2.1 we only reference count resources that are relatively expensive 
 and infrequently managed (or in places where this safety is probably not as 
 necessary, e.g. SerializingCache), we could without any negative consequences 
 (and only slight code complexity) introduce a safer resource management 
 scheme for these more expensive/infrequent actions.
 Basically, I propose when we want to acquire a resource we allocate an object 
 that manages the reference. This can only be released once; if it is released 
 twice, we fail immediately at the second release, reporting where the bug is 
 (rather than letting it continue fine until the next correct release corrupts 
 the count). The reference counter remains the same, but we obtain guarantees 
 that the reference count itself is never badly maintained, although code 
 using it could mistakenly release its own handle early (typically this is 
 only an issue when cleaning up after a failure, in which case under the new 
 scheme this would be an innocuous error)





[jira] [Commented] (CASSANDRA-7705) Safer Resource Management

2014-12-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234314#comment-14234314
 ] 

Joshua McKenzie commented on CASSANDRA-7705:


+1 for conditional compilation for alloc/de-alloc stacks - would be nice to 
delete the hack-patches I have lying around to do that locally. :)

 Safer Resource Management
 -

 Key: CASSANDRA-7705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7705
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 We've had a spate of bugs recently with bad reference counting. These can 
 have potentially dire consequences, generally either randomly deleting data 
 or giving us infinite loops. 
 Since in 2.1 we only reference count resources that are relatively expensive 
 and infrequently managed (or in places where this safety is probably not as 
 necessary, e.g. SerializingCache), we could without any negative consequences 
 (and only slight code complexity) introduce a safer resource management 
 scheme for these more expensive/infrequent actions.
 Basically, I propose when we want to acquire a resource we allocate an object 
 that manages the reference. This can only be released once; if it is released 
 twice, we fail immediately at the second release, reporting where the bug is 
 (rather than letting it continue fine until the next correct release corrupts 
 the count). The reference counter remains the same, but we obtain guarantees 
 that the reference count itself is never badly maintained, although code 
 using it could mistakenly release its own handle early (typically this is 
 only an issue when cleaning up after a failure, in which case under the new 
 scheme this would be an innocuous error)





[jira] [Commented] (CASSANDRA-8312) Use live sstables in snapshot repair if possible

2014-12-04 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234319#comment-14234319
 ] 

Yuki Morishita commented on CASSANDRA-8312:
---

Sorry for the delay.
The patch looks good to me, and versions 2.1 and above can also benefit from 
this.
I will commit to 2.0 and above.

Thanks!

 Use live sstables in snapshot repair if possible
 

 Key: CASSANDRA-8312
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8312
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Attachments: cassandra-2.0-8312-1.txt


 Snapshot repair can be much slower than parallel repair because of the 
 overhead of opening the SSTables in the snapshot. This is particularly true 
 when using LCS, as you typically have many smaller SSTables then.
 I compared parallel and sequential repair on a small range on one of our 
 clusters (2*3 replicas). With parallel repair, this took 22 seconds. With 
 sequential repair (default in 2.0), the same range took 330 seconds! This is 
 an overhead of 330 - 22*6 = 198 seconds, just opening SSTables (there were 
 1000+ sstables). Also, opening 1000 sstables for many smaller ranges surely 
 causes lots of memory churn.
 The idea would be to list the sstables in the snapshot, but use the 
 corresponding sstables in the live set if it's still available. For almost 
 all sstables, the original one should still exist.
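The proposed lookup can be sketched as follows (a toy model with strings standing in for sstable handles; hypothetical names, not the attached patch): prefer the already-open live sstable when one with the same name still exists, and fall back to opening the snapshot copy only when it does not.

```java
import java.util.*;

// Toy sketch of the idea above: list the sstables in the snapshot, but
// substitute the corresponding live (already open) sstable when it still
// exists, avoiding the cost of re-opening it from disk.
class SnapshotResolver {
    static List<String> resolve(List<String> snapshotNames,
                                Map<String, String> liveByName) {
        List<String> result = new ArrayList<>();
        for (String name : snapshotNames) {
            // Prefer the live sstable; fall back to the snapshot copy.
            result.add(liveByName.getOrDefault(name, "snapshot:" + name));
        }
        return result;
    }
}
```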





[jira] [Commented] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234325#comment-14234325
 ] 

Benjamin Lerer commented on CASSANDRA-8418:
---

Yes. ALLOW FILTERING is required if your query will perform filtering. If you 
have two or more relations, as in those queries, and a secondary index is used, 
one relation will be used to query the index, and the returned results will then 
be filtered using the remaining relations.
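That two-phase execution can be illustrated with a toy model (not Cassandra's actual query engine, and using hypothetical names): the secondary index serves the author relation, and the time1 relation is applied as a post-filter over the rows the index returns.

```java
import java.util.*;
import java.util.stream.Collectors;

// Toy model of the execution described above: one relation (author = 'foo')
// is served by the secondary index; the other (time1 > 0) must be applied
// by filtering the returned rows -- which is what ALLOW FILTERING signals.
class FilteringDemo {
    static class Blog {
        final int blogId; final int time1; final String author;
        Blog(int blogId, int time1, String author) {
            this.blogId = blogId; this.time1 = time1; this.author = author;
        }
    }

    static List<Blog> query(Map<String, List<Blog>> authorIndex) {
        return authorIndex.getOrDefault("foo", Collections.emptyList()).stream()
                .filter(b -> b.time1 > 0)        // post-filtering step
                .collect(Collectors.toList());
    }
}
```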

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.





[jira] [Commented] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234329#comment-14234329
 ] 

Benjamin Lerer commented on CASSANDRA-8418:
---

Now I realize that my change will probably break 2.1 or 2.0. I need to check 
why ALLOW FILTERING is not required in those versions.

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.





[jira] [Commented] (CASSANDRA-8321) SStablesplit behavior changed

2014-12-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234334#comment-14234334
 ] 

Joshua McKenzie commented on CASSANDRA-8321:


+1, though we might want to add a test for the sstablesplit abort() path 
instead of finish(), just to be safe. The abort() code looks like it shouldn't 
give us any problems while isOffline, but we may as well add it while we're here.

 SStablesplit behavior changed
 -

 Key: CASSANDRA-8321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8321
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.3

 Attachments: 0001-ccm-fix-file-finding.patch, 
 0001-remove-tmplink-for-offline-compactions.patch


 The dtest sstablesplit_test.py has begun failing due to an incorrect number 
 of sstables being created after running sstablesplit.
 http://cassci.datastax.com/job/cassandra-2.1_dtest/559/changes#detail1
 is the run where the failure began.
 In 2.1.x, the test expects 7 sstables to be created after split, but instead 
 12 are being created. All of the data is there, and the sstables add up to 
 the expected size, so this simply may be a change in default behavior. The 
 test runs sstablesplit without the --size argument, and the default has not 
 changed, so it is unexpected that the behavior would change in a minor point 
 release.





[jira] [Updated] (CASSANDRA-8422) cassandra won't start up due to Unable to gossip with any seeds on the decommissioned node

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8422:
---
Description: 
- 2-node
  * nodeA - seed
  * nodeB

1. decommission nodeB from the cluster with nodetool
   when it's finished, kill cassandra process on nodeB
2. delete data from commit/cache/data directories on nodeB
3. try to start cassandra on nodeB (first time)
   => FAILED with Unable to gossip with any seeds
4. try to start cassandra on nodeB (second time)
  => OK

It was not a one-time thing. I tried it several times and encountered the same 
issue each time.
{code}
ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 INFO [StorageServiceShutdownHook] 2014-11-27 18:44:55,076 Gossiper.java (line 
1307) Announcing shutdown
{code}

  was:
- 2-node
  * nodeA - seed
  * nodeB

1. decommission nodeB from the cluster with nodetool
   when it's finished, kill cassandra process on nodeB
2. delete data from commit/cache/data directories on nodeB
3. try to start cassandra on nodeB (first time)
   => FAILED with Unable to gossip with any seeds
4. try to start cassandra on nodeB (second time)
  => OK

It was not a one-time thing. I tried it several times and encountered the same 
issue each time.

ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
at 
org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
at 
org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 INFO [StorageServiceShutdownHook] 2014-11-27 18:44:55,076 Gossiper.java (line 
1307) Announcing shutdown



 cassandra won't start up due to Unable to gossip with any seeds on the 
 decommissioned node
 

 Key: CASSANDRA-8422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8422
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Masashi Ozawa

 - 2-node
   * nodeA - seed
   * nodeB
 1. decommission nodeB from the cluster with nodetool
when it's finished, kill cassandra process on nodeB
 2. delete data from commit/cache/data directories on nodeB
 3. try to start cassandra on nodeB (first time)
 => FAILED with Unable to gossip with any seeds
 4. try to start cassandra on nodeB (second time)
 => OK
 It was not a one-time thing. I tried it several times and encountered the 
 same issue each time.
 {code}
 ERROR [main] 2014-11-27 18:44:55,017 CassandraDaemon.java (line 513) 
 Exception encountered during startup
 java.lang.RuntimeException: Unable to gossip with any seeds
 at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1211)
 at 
 org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:445)
 at 
 org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:659)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at 
 

[jira] [Updated] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8421:
---
  Description: 

I am using a List whose element type is a UDT.

UDT:

CREATE TYPE

fieldmap (

 key text,
 value text
);

TABLE:

CREATE TABLE entity (

  entity_id uuid PRIMARY KEY,
  begining int,
  domain text,
  domain_type text,
  entity_template_name text,
  field_values list<fieldmap>,
  global_entity_type text,
  revision_time timeuuid,
  status_key int,
  status_name text,
  uuid timeuuid
  ) 
INDEX:

CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);

CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);

CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );

QUERY

SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
'S4_1017.abc.com' allow filtering;

The above query returns values for some rows but not for many others, although 
those rows and their data exist.

Observation:
If I execute the query on columns other than field_values, it returns values. I 
suspect the problem is with a LIST of a UDT.

I have a single-node Cassandra DB. Please let me know why this strange behavior 
occurs.

  was:


I am using a List whose element type is a UDT.

UDT:

CREATE TYPE

fieldmap (

 key text,
 value text
);

TABLE:

CREATE TABLE entity (

  entity_id uuid PRIMARY KEY,
  begining int,
  domain text,
  domain_type text,
  entity_template_name text,
  field_values list<fieldmap>,
  global_entity_type text,
  revision_time timeuuid,
  status_key int,
  status_name text,
  uuid timeuuid
  ) 
INDEX:

CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);

CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);

CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );

QUERY

SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
'S4_1017.abc.com' allow filtering;

The above query returns values for some rows but not for many others, although 
those rows and their data exist.

Observation:
If I execute the query on columns other than field_values, it returns values. I 
suspect the problem is with a LIST of a UDT.

I have a single-node Cassandra DB. Please let me know why this strange behavior 
occurs.

Fix Version/s: (was: 2.1.2)
   (was: 2.1 rc6)
   2.1.3

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.3


 I am using a List whose element type is a UDT.
 UDT:
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 TABLE:
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) 
 INDEX:
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 QUERY
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 The above query returns values for some rows but not for many others, although 
 those rows and their data exist.
 Observation:
 If I execute the query on columns other than field_values, it returns values. I 
 suspect the problem is with a LIST of a UDT.
 I have a single-node Cassandra DB. Please let me know why this strange behavior 
 occurs.





[jira] [Commented] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234341#comment-14234341
 ] 

Philip Thompson commented on CASSANDRA-8421:


Which Cassandra version are you using?

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.3


 I am using a List whose element type is a UDT.
 UDT:
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 TABLE:
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) 
 INDEX:
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 QUERY
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 The above query returns values for some rows but not for many others, although 
 those rows and their data exist.
 Observation:
 If I execute the query on columns other than field_values, it returns values. I 
 suspect the problem is with a LIST of a UDT.
 I have a single-node Cassandra DB. Please let me know why this strange behavior 
 occurs.





[jira] [Updated] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8421:
---
Description: 
I am using a List whose element type is a UDT.

UDT:
{code}
CREATE TYPE

fieldmap (

 key text,
 value text
);
{code}
TABLE:
{code}
CREATE TABLE entity (

  entity_id uuid PRIMARY KEY,
  begining int,
  domain text,
  domain_type text,
  entity_template_name text,
  field_values list<fieldmap>,
  global_entity_type text,
  revision_time timeuuid,
  status_key int,
  status_name text,
  uuid timeuuid
  ) {code}
INDEX:
{code}
CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);

CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);

CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
{code}
QUERY
{code}
SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
'S4_1017.abc.com' allow filtering;
{code}
The above query returns values for some rows but not for many others, although 
those rows and their data exist.

Observation:
If I execute the query on columns other than field_values, it returns values. I 
suspect the problem is with a LIST of a UDT.

I have a single-node Cassandra DB. Please let me know why this strange behavior 
occurs.

  was:

I am using a List whose element type is a UDT.

UDT:

CREATE TYPE

fieldmap (

 key text,
 value text
);

TABLE:

CREATE TABLE entity (

  entity_id uuid PRIMARY KEY,
  begining int,
  domain text,
  domain_type text,
  entity_template_name text,
  field_values list<fieldmap>,
  global_entity_type text,
  revision_time timeuuid,
  status_key int,
  status_name text,
  uuid timeuuid
  ) 
INDEX:

CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);

CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);

CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );

QUERY

SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
'S4_1017.abc.com' allow filtering;

The above query returns values for some rows but not for many others, although 
those rows and their data exist.

Observation:
If I execute the query on columns other than field_values, it returns values. I 
suspect the problem is with a LIST of a UDT.

I have a single-node Cassandra DB. Please let me know why this strange behavior 
occurs.


 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.3


 I am using a List whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, although 
 those rows and their data exist.
 Observation:
 If I execute the query on columns other than field_values, it returns values. I 
 suspect the problem is with a LIST of a UDT.
 I have a single-node Cassandra DB. Please let me know why this strange behavior 
 occurs.





[jira] [Comment Edited] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234341#comment-14234341
 ] 

Philip Thompson edited comment on CASSANDRA-8421 at 12/4/14 4:53 PM:
-

Which Cassandra version are you using? Never mind, I see it in the title now.


was (Author: philipthompson):
Which Cassandra version are you using?

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.3


 I am using a List whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, although 
 those rows and their data exist.
 Observation:
 If I execute the query on columns other than field_values, it returns values. I 
 suspect the problem is with a LIST of a UDT.
 I have a single-node Cassandra DB. Please let me know why this strange behavior 
 occurs.





[jira] [Resolved] (CASSANDRA-7186) alter table add column not always propogating

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-7186.

   Resolution: Cannot Reproduce
Reproduced In: 2.0.9, 2.0.6  (was: 2.0.6, 2.0.9)

I have successfully injected a split-schema environment into a 3-DC cluster on 
2.0-HEAD. Gossip resolved the issue within 1 minute every time. If someone 
has Cassandra system logs, preferably at DEBUG level, from when the problem 
occurs, feel free to re-open.

For those with affected clusters, take the normal steps to resolve a split 
schema.

 alter table add column not always propogating
 -

 Key: CASSANDRA-7186
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7186
 Project: Cassandra
  Issue Type: Bug
Reporter: Martin Meyer
Assignee: Philip Thompson
 Fix For: 2.0.12


 I've seen many times in Cassandra 2.0.6 that adding columns to existing 
 tables does not seem to fully propagate to our entire cluster. We add an extra 
 column to various tables maybe 0-2 times a week, and so far many of these 
 ALTERs have resulted in at least one node showing the old table description for 
 a pretty long time (~30 mins) after the original ALTER command was issued.
 We originally identified this issue when a connected client would complain 
 that a column it issued a SELECT for wasn't a known column, at which point we 
 have to ask each node to describe the most recently altered table. One of 
 them will not know about the newly added field. Issuing the original ALTER 
 statement on that node makes everything work correctly.
 We have seen this issue on multiple tables (we don't always alter the same 
 one). It has affected various nodes in the cluster (it is not always the same 
 one that fails to get the mutation propagated). No new nodes have been added 
 to the cluster recently. All nodes are homogeneous (hardware and software), 
 running 2.0.6. We don't see any particular errors or exceptions on the node 
 that didn't get the schema update, only the later error from a Java client 
 about asking for an unknown column in a SELECT. We have to check each node 
 manually to find the offender. The tables we have seen this on are under 
 fairly heavy read and write load, but we haven't altered any tables that are 
 not, so that might not be important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-8421:
--

Assignee: Philip Thompson

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
Assignee: Philip Thompson
 Fix For: 3.0, 2.1.3


 I am using a LIST whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, even 
 though those rows and their data exist.
 Observation:
 If I execute the query with columns other than field_values, it returns 
 values. I suspect the problem is with LIST of UDT.
 I have a single-node Cassandra DB. Please let me know why Cassandra shows 
 this strange behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8421:
---
Reproduced In: 2.1.1

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
 Fix For: 3.0, 2.1.3


 I am using a LIST whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, even 
 though those rows and their data exist.
 Observation:
 If I execute the query with columns other than field_values, it returns 
 values. I suspect the problem is with LIST of UDT.
 I have a single-node Cassandra DB. Please let me know why Cassandra shows 
 this strange behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8420) Application getting time to open on other node when one node goes down

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8420:
---
Description: 
I have a 3-node cluster and my application is deployed on all 3 nodes. When I 
bring one node down, the application takes around 5 minutes to fail over to 
another node and throws the error below.
{code}
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:152) 
~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:122) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
~[na:1.7.0_51]
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
 ~[hector-client_3.5.1.jar:na]
... 45 common frames omitted
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}

2014-12-04 13:59:45.850 WARN  m.p.c.connection.HConnectionManager - Could not 
fullfill request on this host CassandraClient<host1_priv:1123456-1231>
2014-12-04 13:59:45.852 WARN  m.p.c.connection.HConnectionManager - Exception:
me.prettyprint.hector.api.exceptions.HTimedOutException: 
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:37)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
 ~[hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
 [hector-client_3.5.1.jar:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
 [hector-client_3.5.1.jar:na]
at 
com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:88)
 [cassandra.xa_3.5.1.jar:na]{code}

  was:
I have a 3-node cluster and my application is deployed on all 3 nodes. When I 
bring one node down, the application takes around 5 minutes to fail over to 
another node and throws the error below.

Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:152) 
~[na:1.7.0_51]
at java.net.SocketInputStream.read(SocketInputStream.java:122) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) 
~[na:1.7.0_51]
at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
~[na:1.7.0_51]
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
 ~[hector-client_3.5.1.jar:na]
... 45 common frames omitted
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
exception in re-opening client in release on 
ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}

2014-12-04 13:59:45.850 WARN  m.p.c.connection.HConnectionManager - Could not 
fullfill request on this host CassandraClient<host1_priv:1123456-1231>
2014-12-04 13:59:45.852 WARN  m.p.c.connection.HConnectionManager - Exception:
me.prettyprint.hector.api.exceptions.HTimedOutException: 
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
at 

[jira] [Commented] (CASSANDRA-8421) Cassandra 2.1.1 UDT not returning value for LIST type as UDT

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234358#comment-14234358
 ] 

Philip Thompson commented on CASSANDRA-8421:


With 2.1.1, when creating the table as defined, I see the following error 
{code}code=2200 [Invalid query] message=Non-frozen User-Defined types are not 
supported, please use frozen<>{code}. With the current 2.1-HEAD code, I see 
{code}code=2200 [Invalid query] message=Non-frozen collections are not allowed 
inside collections: list<fieldmap>{code}.

So the problem is definitely that you have a UDT in a list. I am confused as to 
how you are creating the table though. Are you using the queries exactly as 
specified on C* 2.1.1? What driver or client are you using?
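For reference, the error messages quoted above ask for the frozen syntax; a minimal sketch of a schema along the lines of the ticket that 2.1 should accept might look like this (column set abridged from the ticket, and this is our illustration, not text from the issue):

```sql
-- Hypothetical corrected DDL: in Cassandra 2.1, a UDT used inside a
-- collection must be frozen.
CREATE TYPE fieldmap (
  key text,
  value text
);

CREATE TABLE entity (
  entity_id uuid PRIMARY KEY,
  domain text,
  gen_type text,
  status_key int,
  field_values list<frozen<fieldmap>>
);
```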

 Cassandra 2.1.1 UDT not returning value for LIST type as UDT
 

 Key: CASSANDRA-8421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8421
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: single node cassandra 
Reporter: madheswaran
Assignee: Philip Thompson
 Fix For: 3.0, 2.1.3


 I am using a LIST whose element type is a UDT.
 UDT:
 {code}
 CREATE TYPE
 fieldmap (
  key text,
  value text
 );
 {code}
 TABLE:
 {code}
 CREATE TABLE entity (
   entity_id uuid PRIMARY KEY,
   begining int,
   domain text,
   domain_type text,
   entity_template_name text,
   field_values list<fieldmap>,
   global_entity_type text,
   revision_time timeuuid,
   status_key int,
   status_name text,
   uuid timeuuid
   ) {code}
 INDEX:
 {code}
 CREATE INDEX entity_domain_idx_1 ON galaxy_dev.entity (domain);
 CREATE INDEX entity_field_values_idx_1 ON galaxy_dev.entity (field_values);
 CREATE INDEX entity_global_entity_type_idx_1 ON galaxy_dev.entity (gen_type );
 {code}
 QUERY
 {code}
 SELECT * FROM entity WHERE status_key > 3 and field_values contains {key: 
 'userName', value: 'Sprint5_22'} and gen_type = 'USER' and domain = 
 'S4_1017.abc.com' allow filtering;
 {code}
 The above query returns values for some rows but not for many others, even 
 though those rows and their data exist.
 Observation:
 If I execute the query with columns other than field_values, it returns 
 values. I suspect the problem is with LIST of UDT.
 I have a single-node Cassandra DB. Please let me know why Cassandra shows 
 this strange behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8420) Application getting time to open on other node when one node goes down

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234359#comment-14234359
 ] 

Philip Thompson commented on CASSANDRA-8420:


What Cassandra version is this?

 Application getting time to open on other node when one node goes down
 --

 Key: CASSANDRA-8420
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8420
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Shamim Khan
  Labels: performance

 I have a 3-node cluster and my application is deployed on all 3 nodes. When I 
 bring one node down, the application takes around 5 minutes to fail over to 
 another node and throws the error below.
 {code}
 Caused by: java.net.SocketTimeoutException: Read timed out
 at java.net.SocketInputStream.socketRead0(Native Method) 
 ~[na:1.7.0_51]
 at java.net.SocketInputStream.read(SocketInputStream.java:152) 
 ~[na:1.7.0_51]
 at java.net.SocketInputStream.read(SocketInputStream.java:122) 
 ~[na:1.7.0_51]
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) 
 ~[na:1.7.0_51]
 at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) 
 ~[na:1.7.0_51]
 at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
 ~[na:1.7.0_51]
 at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
  ~[hector-client_3.5.1.jar:na]
 ... 45 common frames omitted
 2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
 exception in re-opening client in release on 
 ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
 2014-12-04 13:59:41.179 ERROR m.p.c.c.ConcurrentHClientPool - Transport 
 exception in re-opening client in release on 
 ConcurrentCassandraClientPoolByHost:{host1_priv(12345):1234}
 2014-12-04 13:59:45.850 WARN  m.p.c.connection.HConnectionManager - Could not 
 fullfill request on this host CassandraClient<host1_priv:1123456-1231>
 2014-12-04 13:59:45.852 WARN  m.p.c.connection.HConnectionManager - Exception:
 me.prettyprint.hector.api.exceptions.HTimedOutException: 
 org.apache.thrift.transport.TTransportException: 
 java.net.SocketTimeoutException: Read timed out
 at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:37)
  ~[hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
  ~[hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
  [hector-client_3.5.1.jar:na]
 at 
 me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
  [hector-client_3.5.1.jar:na]
 at 
 com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:88)
  [cassandra.xa_3.5.1.jar:na]{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8346) Paxos operation can use stale data during multiple range movements

2014-12-04 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234375#comment-14234375
 ] 

sankalp kohli commented on CASSANDRA-8346:
--

You want to pass the msg through. This:
super(ExceptionCode.UNAVAILABLE, "Cannot achieve consistency level " + 
consistency);
should be
super(ExceptionCode.UNAVAILABLE, msg); 

Apart from that looks good. 

 Paxos operation can use stale data during multiple range movements
 --

 Key: CASSANDRA-8346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8346
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 8346.txt


 Paxos operations correctly account for pending ranges for all operation 
 pertaining to the Paxos state, but those pending ranges are not taken into 
 account when reading the data to check for the conditions or during a serial 
 read. It's thus possible to break the LWT guarantees by reading a stale 
 value.  This requires 2 node movements (on the same token range) to be a 
 problem though.
 Basically, we have {{RF}} replicas + {{P}} pending nodes. For the Paxos 
 prepare/propose phases, the number of required participants (the Paxos 
 QUORUM) is {{(RF + P + 1) / 2}} ({{SP.getPaxosParticipants}}), but the read 
 done to check conditions or for serial reads is done at a normal QUORUM (or 
 LOCAL_QUORUM), and so a weaker {{(RF + 1) / 2}}. We have a problem if it's 
 possible that said read can read only from nodes that were not part of the 
 paxos participants, and so we have a problem if:
 {noformat}
 normal quorum == (RF + 1) / 2 <= (RF + P) - ((RF + P + 1) / 2) == 
 participants considered - blocked for
 {noformat}
 We're good if {{P = 0}} or {{P = 1}} since this inequality gives us 
 respectively {{RF + 1 <= RF - 1}} and {{RF + 1 <= RF}}, both of which are 
 impossible. But at {{P = 2}} (2 pending nodes), this inequality is equivalent 
 to {{RF <= RF}} and so we might read stale data.
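To make the bound concrete, the quorum arithmetic in the description can be checked with a few lines of Python using exact rational arithmetic (the function name is ours, not from the ticket):

```python
from fractions import Fraction

def stale_read_possible(rf: int, p: int) -> bool:
    """True if a normal-QUORUM read could be served entirely by nodes
    that were not required Paxos participants (RF replicas, P pending)."""
    # Normal QUORUM used for the condition check / serial read.
    normal_quorum = Fraction(rf + 1, 2)
    # Paxos prepare/propose quorum over RF replicas + P pending nodes.
    paxos_quorum = Fraction(rf + p + 1, 2)
    # Nodes the Paxos round was NOT required to block for.
    not_blocked = (rf + p) - paxos_quorum
    return normal_quorum <= not_blocked

# P = 0 and P = 1 are safe for any RF; P = 2 is not.
for rf in (3, 5, 7):
    assert not stale_read_possible(rf, 0)
    assert not stale_read_possible(rf, 1)
    assert stale_read_possible(rf, 2)
```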



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8423) Error during start up on windows

2014-12-04 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8423:
--

 Summary: Error during start up on windows
 Key: CASSANDRA-8423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8423
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Joshua McKenzie
 Fix For: 2.1.3


While using ccm with the current C* 2.1-HEAD code on Windows, I frequently see 
this exception.
{code}[node1 ERROR] Exception calling BeginConnect with 4 argument(s): The 
requested address 
is not valid in its context
At 
D:\jenkins\workspace\cassandra-2.1_dtest_win32\cassandra\bin\cassandra.ps1:358 
char:9
+ $connect = $tcpobject.BeginConnect($listenAddress, $port, $null, 
$null)
+ 
~~~
+ CategoryInfo  : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : SocketException
 
You cannot call a method on a null-valued expression.
At 
D:\jenkins\workspace\cassandra-2.1_dtest_win32\cassandra\bin\cassandra.ps1:359 
char:9
+ $wait = $connect.AsyncWaitHandle.WaitOne(25, $false)
+ 
+ CategoryInfo  : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull{code}

I have not yet seen this exception when psutil is not installed, but that may 
be coincidental, as I don't know how psutil could possibly matter here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7523) add date and time types

2014-12-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234410#comment-14234410
 ] 

Joshua McKenzie commented on CASSANDRA-7523:


Nits fixed, along with a tiny bit of polish:
* reverted TypeSerializer change
* fixed seconds not bounds-checking w/out period
* Added unit tests to check for that
* changed joda license file from CRLF line-endings to LF
* rebased both java and python branches
* fixed unit test for simple date type in python (hadn't updated for new 2^31 
== epoch centered byte-order-comparable)

My original plan was to get our house in order on this side, then open a 
ticket/PR for the python driver changes once we've stabilized.  We can then 
open another ticket for updating the python driver that's packaged with C*, 
since we can commit the java changes w/out necessarily having driver support.  
Alternatively we can just sit on this ticket until the python changes get 
merged in and then push this through; the rebase was clean on the java code.


 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
  Labels: docs
 Fix For: 2.1.3


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7523) add date and time types

2014-12-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234410#comment-14234410
 ] 

Joshua McKenzie edited comment on CASSANDRA-7523 at 12/4/14 5:57 PM:
-

Nits fixed, along with a tiny bit of polish:
* reverted TypeSerializer change
* fixed seconds not bounds-checking w/out period
* Added unit tests to check for that
* changed joda license file from CRLF line-endings to LF
* rebased both java and python branches
* fixed unit test for simple date type in python (hadn't updated for new 2^31 
== epoch centered byte-order-comparable)

My original plan was to get our house in order on this side, then open a 
ticket/PR for the python driver changes once we've stabilized.  We can then 
open another ticket for updating the python driver that's packaged with C*, 
since we can commit the java changes w/out necessarily having driver support.  
Alternatively we can just sit on this ticket until the python changes get 
merged in and then push this through; the rebase was clean on the java code.

edit: [Java 
branch|https://github.com/josh-mckenzie/cassandra/compare/7523_squashed] and 
[python 
branch|https://github.com/josh-mckenzie/python-driver/compare/7523_squashed] 
for convenience.


was (Author: joshuamckenzie):
Nits fixed, along with a tiny bit of polish:
* reverted TypeSerializer change
* fixed seconds not bounds-checking w/out period
* Added unit tests to check for that
* changed joda license file from CRLF line-endings to LF
* rebased both java and python branches
* fixed unit test for simple date type in python (hadn't updated for new 2^31 
== epoch centered byte-order-comparable)

My original plan was to get our house in order on this side, then open a 
ticket/PR for the python driver changes once we've stabilized.  We can then 
open another ticket for updating the python driver that's packaged with C*, 
since we can commit the java changes w/out necessarily having driver support.  
Alternatively we can just sit on this ticket until the python changes get 
merged in and then push this through; the rebase was clean on the java code.


 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
  Labels: docs
 Fix For: 2.1.3


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8382) Procedure to Change IP Address without Data streaming is Missing in Cassandra Documentation

2014-12-04 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234430#comment-14234430
 ] 

Brandon Williams commented on CASSANDRA-8382:
-

bq. So, our question is: What's the standard procedure for changing IP 
address of Cassandra nodes

It's pretty simple: just change them (and bounce C*, obviously).

 Procedure to Change IP Address without Data streaming is Missing in Cassandra 
 Documentation
 ---

 Key: CASSANDRA-8382
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8382
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
 Environment: Red Hat Linux , Cassandra 2.0.3
Reporter: Anuj

 Use Case: 
 We have a Geo-Red setup with 2 DCs (DC1 and DC2) having 3 nodes each. Listen 
 address and seeds of all nodes are Public IPs while rpc addresses are private 
 IPs.  Now, we want to decommission DC2 and change the public IPs in listen 
 address/seeds of DC1 nodes to private IPs as it will be a single DC setup.
 Issue: 
 Cassandra doesn’t provide any standard procedure for changing IP address of 
 nodes in a cluster. We can bring down nodes, one by one, change their IP 
 address and perform the procedure mentioned in “ Replacing a Dead Node” at 
 http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
   by specifying the public IP of the node in the replace_address option. But 
 the procedure recommends that you set the auto_bootstrap option to true. We 
 don't want any bootstrap or data streaming to happen, as the data is already 
 there on the nodes. So, our question is: what's the standard procedure for 
 changing the IP address of Cassandra nodes while making sure that no data 
 streaming occurs and the gossip state is not corrupted?
 We are using vnodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7523) add date and time types

2014-12-04 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234439#comment-14234439
 ] 

Carl Yeksigian commented on CASSANDRA-7523:
---

The original plan sounds good to me -- we just need to get everything in order 
before 2.1.3 goes out. +1 to the new changeset.

 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
  Labels: docs
 Fix For: 2.1.3


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2014-12-04 Thread Vishy Kasar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishy Kasar updated CASSANDRA-8194:
---
Attachment: 8194.patch

I have attached a straightforward patch. This resolution will help many of our 
Cassandra users who are experiencing latency increases/time-outs when security 
is enabled. I request that it be included in the next 2.0.x release as opposed 
to 3.0. 

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Priority: Minor
 Fix For: 3.0

 Attachments: 8194.patch


 We use PasswordAuthenticator and PasswordAuthorizer. system_auth has an RF 
 of 10 per DC over 2 DCs. permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The reason is that a read-cache request notices the cached entry has 
 expired and issues a blocking request to refresh the cache. 
 The cache should only be refreshed periodically in the background. The user 
 request should simply look at the cache and not try to refresh it. 
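The behavior requested here, serving the possibly stale cached entry and refreshing it off the request path, can be sketched as follows. This is a toy illustration in Python, not Cassandra's code; all names are ours:

```python
import threading
import time

class RefreshAheadCache:
    """Serve the cached value immediately; refresh expired entries in a
    background thread instead of blocking the request path."""

    def __init__(self, loader, ttl_seconds):
        self._loader = loader            # function: key -> fresh value
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._entries = {}               # key -> (value, loaded_at)
        self._refreshing = set()         # keys with an in-flight refresh

    def get(self, key):
        with self._lock:
            entry = self._entries.get(key)
        if entry is None:
            # First access has no choice but to load synchronously.
            value = self._loader(key)
            with self._lock:
                self._entries[key] = (value, time.monotonic())
            return value
        value, loaded_at = entry
        if time.monotonic() - loaded_at > self._ttl:
            self._refresh_async(key)     # do not block the caller
        return value                     # possibly stale, but fast

    def _refresh_async(self, key):
        with self._lock:
            if key in self._refreshing:  # one refresh per key at a time
                return
            self._refreshing.add(key)

        def run():
            try:
                value = self._loader(key)
                with self._lock:
                    self._entries[key] = (value, time.monotonic())
            finally:
                with self._lock:
                    self._refreshing.discard(key)

        threading.Thread(target=run, daemon=True).start()
```

With this shape, an expired permissions entry costs the requesting client nothing beyond reading a slightly stale value, at the price of a bounded staleness window.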
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:943)
   at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:828)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:140)
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:245)
   ... 28 more
 ERROR [Thrift:17232] 2014-10-24 

[jira] [Commented] (CASSANDRA-7523) add date and time types

2014-12-04 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234481#comment-14234481
 ] 

Joshua McKenzie commented on CASSANDRA-7523:


Created [jira ticket|https://datastax-oss.atlassian.net/browse/PYTHON-190] for 
python driver changes.
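
For context, a sketch of how the proposed types might be used once the server and driver changes land. This is a hypothetical example (the {{events}} table and its columns are made up, and the literal formats are not final):

{code:sql}
-- hypothetical usage of the proposed date and time types
CREATE TABLE events (
    id int PRIMARY KEY,
    d date,   -- calendar date, no time-of-day component
    t time    -- time of day, no date component
);
INSERT INTO events (id, d, t) VALUES (1, '2014-12-04', '13:59:41');
{code}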

 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
  Labels: docs
 Fix For: 2.1.3


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8424) Secondary index on column not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8424:
--

 Summary: Secondary index on column not working when using PK
 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
protocol v3]
Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius


I can do queries for collection keys/values as detailed in 
http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
a secondary index on the collection, the query will work (with {{ALLOW 
FILTERING}}), but only as long as it is performed through a *secondary* 
index. If you go through the PK it won't. Of course, a full-scan filtering 
query is not allowed.

As an example, I created this table:

{code:SQL}
CREATE TABLE test.uloc9 (
    usr int,
    type int,
    gb ascii,
    gb_q ascii,
    info map<ascii, text>,
    lat float,
    lng float,
    q int,
    traits set<ascii>,
    ts timestamp,
    PRIMARY KEY (usr, type)
);
CREATE INDEX uloc9_gb ON test.uloc9 (gb);
CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
CREATE INDEX uloc9_traits ON test.uloc9 (traits);
{code}
then added some data and queried:
{code}
cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' allow filtering;

 usr | type | gb  | gb_q  | info                                     | lat          | lng          | q | traits             | ts
-----+------+-----+-------+------------------------------------------+--------------+--------------+---+--------------------+--------------------------
   1 |    0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | -40.74000168 |     -65.8305 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300
   1 |    1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300

(2 rows)
cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' allow filtering;
code=2200 [Invalid query] message=No indexed columns present in by-columns 
clause with Equal operator
cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 'argentina' allow filtering;
code=2200 [Invalid query] message=No indexed columns present in by-columns 
clause with Equal operator
{code}

Maybe I got things wrong, but I don't see any reason why collection filtering 
should fail when using the PK while it succeeds using any secondary index 
(related or otherwise).
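
A possible workaround sketch, untested here: give {{info}} a values index of its own, the same way {{traits}} is indexed, so the {{CONTAINS}} restriction has an indexed column to anchor on even when the partition key is specified:

{code:SQL}
-- hypothetical workaround: index the map's values directly
CREATE INDEX uloc9_info ON test.uloc9 (info);
SELECT * FROM test.uloc9 WHERE usr=1 AND info CONTAINS 'argentina' ALLOW FILTERING;
{code}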






[jira] [Updated] (CASSANDRA-8424) Secondary index on column not working when using PK

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8424:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

 Secondary index on column not working when using PK
 ---

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
 Fix For: 2.1.3







[jira] [Commented] (CASSANDRA-8424) Secondary index on column not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234571#comment-14234571
 ] 

Lex Lythius commented on CASSANDRA-8424:


By the way, if this happens to be an unintended filtering capability rather 
than a bug, it is a very useful feature indeed.

 Secondary index on column not working when using PK
 ---

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
 Fix For: 2.1.3







[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8424:
---
Summary: Collection filtering not working when using PK  (was: Secondary 
index on column not working when using PK)

 Collection filtering not working when using PK
 --

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
 Fix For: 2.1.3







[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Lex Lythius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lex Lythius updated CASSANDRA-8424:
---
Labels: collections  (was: )

 Collection filtering not working when using PK
 --

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
  Labels: collections
 Fix For: 2.1.3







[jira] [Commented] (CASSANDRA-8316) Did not get positive replies from all endpoints error on incremental repair

2014-12-04 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234581#comment-14234581
 ] 

Alan Boudreault commented on CASSANDRA-8316:


[~krummas] [~yukim] As mentioned on IRC, the patch doesn't fix the issue in 
cassandra-2.1. In trunk (3.0), the following commit fixed the "did not get 
positive replies" error:

https://github.com/apache/cassandra/commit/06f626acd27b051222616c0c91f7dd8d556b8d45

but that commit is already in the cassandra-2.1 branch, and there are many 
additional major changes related to repair in 3.0.

Any suggestions at this point?

  Did not get positive replies from all endpoints error on incremental repair
 --

 Key: CASSANDRA-8316
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.2
Reporter: Loic Lambiel
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-patch.patch, 
 CassandraDaemon-2014-11-25-2.snapshot.tar.gz, test.sh


 Hi,
 I've got an issue with incremental repairs on our production 15-node 2.1.2 
 cluster (new cluster, not yet loaded, RF=3).
 After having successfully performed an incremental repair (-par -inc) on 3 
 nodes, I started receiving "Repair failed with error Did not get positive 
 replies from all endpoints." from nodetool on all remaining nodes:
 [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
 for keyspace  (seq=false, full=false)
 [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
 replies from all endpoints.
 All the nodes are up and running and the local system log shows that the 
 repair commands got started and that's it.
 I've also noticed that soon after the repair, several nodes started having 
 more cpu load indefinitely without any particular reason (no tasks / queries, 
 nothing in the logs). I then restarted C* on these nodes and retried the 
 repair on several nodes, which were successful until facing the issue again.
 I tried to repro on our 3-node preproduction cluster, without success.
 It looks like I'm not the only one having this issue: 
 http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
 Any idea?
 Thanks
 Loic





[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8424:
---
Assignee: Benjamin Lerer

 Collection filtering not working when using PK
 --

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
Assignee: Benjamin Lerer
  Labels: collections
 Fix For: 2.1.3







[jira] [Commented] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-12-04 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234598#comment-14234598
 ] 

Philip Thompson commented on CASSANDRA-8365:


I can reproduce this inconsistent behavior through the python driver, so it 
may not strictly be a cqlsh problem.
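
The reported behavior matches CQL's identifier rules: unquoted names are case-insensitive and folded to lowercase, while double-quoted names preserve case. Per the report, quoting works on the drop side, so a minimal workaround looks like this (illustrative cqlsh line, not taken from the ticket):

{code}
cqlsh:schemabuilderit> DROP INDEX "FooBar";
{code}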

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Priority: Minor
  Labels: cqlsh
 Fix For: 2.1.3


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as the index name, even though it is unquoted. Trying to quote 
 the index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent. Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest;

 CREATE TABLE schemabuilderit.indextest (
     a int PRIMARY KEY,
     b int
 );

 CREATE INDEX FooBar ON schemabuilderit.indextest (b);

 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message=Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'
 {code}





[jira] [Updated] (CASSANDRA-8128) Exception when executing UPSERT

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8128:
---
Fix Version/s: 2.0.12

 Exception when executing UPSERT
 ---

 Key: CASSANDRA-8128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jens Rantil
Priority: Critical
  Labels: cql3
 Fix For: 2.0.12


 I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
 for a single partition key with up to ~3000 clustering keys. I understand 
 that large upserts aren't recommended, but I wouldn't expect to get the 
 following exception anyway:
 {noformat}
 ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
 ErrorMessage.java (line 222) Unexpected exception during request
 java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
 at java.util.ArrayList.rangeCheck(ArrayList.java:635)
 at java.util.ArrayList.get(ArrayList.java:411)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}





[jira] [Commented] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-12-04 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234620#comment-14234620
 ] 

Kishan Karunaratne commented on CASSANDRA-8285:
---

I tried 2.0 head with Aleksey's patch, and I get the following error upon 
startup:
http://aep.appspot.com/display/PX-eSYFV0e47OujZ8BzaC-5yNsM/

 OOME in Cassandra 2.0.11
 

 Key: CASSANDRA-8285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
 Cassandra 2.0.11 + ruby-driver 1.0-beta
Reporter: Pierre Laporte
Assignee: Aleksey Yeschenko
 Attachments: 8285.txt, OOME_node_system.log, gc-1416849312.log.gz, 
 gc.log.gz, heap-usage-after-gc-zoom.png, heap-usage-after-gc.png, 
 system.log.gz


 We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
 with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
 2.0.8-snapshot.
 Attached are :
 | OOME_node_system.log | The system.log of one Cassandra node that crashed |
 | gc.log.gz | The GC log on the same node |
 | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
 |
 | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
 Workload:
 Our test executes 5 CQL statements (select, insert, select, delete, select) 
 for a given unique id, during 3 days, using multiple threads. There is no 
 change in the workload during the test.
 Symptoms:
 In the attached log, it seems something starts in Cassandra between 
 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
 fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
 in the logs.
 I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
 error does not occur.  It seems specific to 2.0.11.





[jira] [Created] (CASSANDRA-8425) Add full entry indexing capability for maps

2014-12-04 Thread Lex Lythius (JIRA)
Lex Lythius created CASSANDRA-8425:
--

 Summary: Add full entry indexing capability for maps
 Key: CASSANDRA-8425
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8425
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Lex Lythius
Priority: Minor


Since C* 2.1 we're able to index map keys or map values and query them using 
{{CONTAINS KEY}} and {{CONTAINS}} respectively.

However, some use cases require being able to filter on a specific key/value 
combination. Syntax might be something along the lines of 
{code:sql}
SELECT * FROM table WHERE map['country'] = 'usa';
{code}
or
{code:sql}
SELECT * FROM table WHERE map CONTAINS ENTRY { 'country': 'usa' };
{code}

Of course, right now we can have the client refine the results from
{code:sql}
SELECT * FROM table WHERE map CONTAINS { 'usa' };
{code}
or
{code:sql}
SELECT * FROM table WHERE map CONTAINS KEY { 'country' };
{code}
but I believe this would add a good deal of flexibility.
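
For comparison, one possible shape for such an index, sketched with hypothetical syntax (an {{ENTRIES}}-style index target; neither the syntax nor the semantics are settled by this ticket):

{code:sql}
-- hypothetical: index full key/value entries of the map
CREATE INDEX ON table (ENTRIES(map));
SELECT * FROM table WHERE map['country'] = 'usa';
{code}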





[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-12-04 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234696#comment-14234696
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

Just pushed some OHC additions to github:
* key-iterator (used by the CacheService class to invalidate column families)
* (de)serialization of cache content to disk using direct I/O from off-heap. 
This means the row cache content does not need to go through the heap for 
serialization and deserialization. Compression should also be possible 
off-heap using the static methods in the Snappy class, since these expect 
direct buffers, so there is nearly no heap pressure for that. Background: the 
implementation basically lays the address and length of the hash entry into 
the DirectByteBuffer class so that FileChannel is able to read into it/write 
from it.


 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Vijay
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch, tests.zip


 Currently SerializingCache is partially off heap; keys are still stored in 
 the JVM heap as ByteBuffers:
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Memory overhead for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely 
 off heap and use JNI to interact with the cache. We might want to ensure 
 that the new implementation matches the existing APIs (ICache), and the 
 implementation needs to have safe memory access, low memory overhead and as 
 few memcpys as possible.
 We might also want to make this cache configurable.





[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-12-04 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234696#comment-14234696
 ] 

Robert Stupp edited comment on CASSANDRA-7438 at 12/4/14 10:20 PM:
---

Just pushed some OHC additions to github:
* key-iterator (used by the CacheService class to invalidate column families)
* (de)serialization of cache content to disk using direct I/O from off-heap. 
This means the row cache content does not need to go through the heap for 
serialization and deserialization. Compression should also be possible 
off-heap using the static methods in the Snappy class, since these expect 
direct buffers, so there is nearly no heap pressure for that. Background: the 
implementation basically lays the address and length of the cache entry into 
the DirectByteBuffer class so that FileChannel is able to read into it/write 
from it.

edit: s/hash/cache/


was (Author: snazy):
Just pushed some OHC additions to github:
* key-iterator (used by CacheService class to invalidate column families)
* (de)serialization of cache content to disk using direct I/O from off-heap.  
Means that the row cache content does not need to go though the heap for 
serialization and deserialization. Compression should also be possible in 
off-heap using the static methods in Snappy class since these expect direct 
buffers so there's nearly no pressure for that on the heap. Background: the 
implementation basically lies the address and length of the hash entry into 
DirectByteBuffer class so FileChannel is able to read into it/write from it.


 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Vijay
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch, tests.zip


 Currently SerializingCache is partially off heap; keys are still stored in 
 the JVM heap as ByteBuffers.
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Memory overhead for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely off 
 heap and use JNI to interact with the cache. We might want to ensure that the 
 new implementation matches the existing API (ICache), and the implementation 
 needs to have safe memory access, low memory overhead, and as few memcpys as 
 possible.
 We might also want to make this cache configurable.
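
For readers unfamiliar with the eviction policy being moved off-heap, here is a toy on-heap LRU built on `LinkedHashMap`'s access-order mode; it shows only the policy, while the actual proposal keeps both keys and values outside the JVM heap.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative on-heap LRU; the ticket's goal is this behavior off-heap.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true); // access-order = true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruSketch<String, String> cache = new LruSketch<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes the eldest
        cache.put("c", "3"); // evicts "b"
        System.out.println(cache.keySet());
    }
}
```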



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-8418:
-

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.





[jira] [Commented] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234753#comment-14234753
 ] 

Sylvain Lebresne commented on CASSANDRA-8418:
-

Let's not just fix the tests without having checked all currently maintained 
versions and acted appropriately.

I agree that those queries should have required {{ALLOW FILTERING}} however, so 
let's understand what's wrong with 2.0/2.1 and maybe fix it there.



[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8424:

Priority: Minor  (was: Major)

 Collection filtering not working when using PK
 --

 Key: CASSANDRA-8424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8424
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native 
 protocol v3]
 Ubuntu 14.04.5 64-bit
Reporter: Lex Lythius
Assignee: Benjamin Lerer
Priority: Minor
  Labels: collections
 Fix For: 2.1.3


 I can do queries for collection keys/values as detailed in 
 http://www.datastax.com/dev/blog/cql-in-2-1 without problems. Even without 
 having a secondary index on the collection it will work (with {{ALLOW 
 FILTERING}}), but only as long as the query goes through a *secondary* 
 index. If it goes through the PK it won't. Of course, a full-scan filtering 
 query is not allowed.
 As an example, I created this table:
 {code:SQL}
 CREATE TABLE test.uloc9 (
     usr int,
     type int,
     gb ascii,
     gb_q ascii,
     info map<ascii, text>,
     lat float,
     lng float,
     q int,
     traits set<ascii>,
     ts timestamp,
     PRIMARY KEY (usr, type)
 );
 CREATE INDEX uloc9_gb ON test.uloc9 (gb);
 CREATE INDEX uloc9_gb_q ON test.uloc9 (gb_q);
 CREATE INDEX uloc9_traits ON test.uloc9 (traits);
 {code}
 then added some data and queried:
 {code}
 cqlsh:test> select * from uloc9 where gb='/nw' and info contains 'argentina' allow filtering;

  usr | type | gb  | gb_q  | info                                     | lat          | lng          | q | traits             | ts
 -----+------+-----+-------+------------------------------------------+--------------+--------------+---+--------------------+--------------------------
    1 |    0 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | -40.74000168 |     -65.8305 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300
    1 |    1 | /nw | /nw:1 | {'ci': 'san antonio', 'co': 'argentina'} | -40.75799942 | -66.00800323 | 1 | {'r:photographer'} | 2014-11-04 18:20:29-0300

 (2 rows)
 cqlsh:test> select * from uloc9 where usr=1 and info contains 'argentina' allow filtering;
 code=2200 [Invalid query] message="No indexed columns present in by-columns clause with Equal operator"
 cqlsh:test> select * from uloc9 where usr=1 and type=0 and info contains 'argentina' allow filtering;
 code=2200 [Invalid query] message="No indexed columns present in by-columns clause with Equal operator"
 {code}
 Maybe I got things wrong, but I don't see any reasons why collection 
 filtering should fail when using PK while it succeeds using any secondary 
 index (related or otherwise).
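
Until the server supports this, one workaround is to fetch the row by its full primary key and apply the CONTAINS check on the client. A self-contained sketch (the `Row` type and names are invented stand-ins for rows a driver would return):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ClientSideContains {
    // Stand-in for a row fetched by full primary key (usr, type);
    // in real code this would come from the driver's ResultSet.
    record Row(int usr, int type, Map<String, String> info) {}

    // CONTAINS 'argentina' evaluated client-side over the map values
    static List<Row> containsValue(List<Row> rows, String needle) {
        return rows.stream()
                   .filter(r -> r.info().containsValue(needle))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> info = new LinkedHashMap<>();
        info.put("ci", "san antonio");
        info.put("co", "argentina");
        List<Row> fetched = Arrays.asList(new Row(1, 0, info));
        System.out.println(containsValue(fetched, "argentina").size());
    }
}
```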





[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8424:

Issue Type: Improvement  (was: Bug)



[jira] [Commented] (CASSANDRA-4476) Support 2ndary index queries with only inequality clauses (LT, LTE, GT, GTE)

2014-12-04 Thread Oded Peer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234764#comment-14234764
 ] 

Oded Peer commented on CASSANDRA-4476:
--

I understand. I created a test that demonstrates the issue.
That's a really good catch on your part.

I can't see a good way to query an index range and return the result in token 
order for paging.
It might be done by fetching the entire table into memory and sorting all the 
rows by token value, but that's just wrong.
Is it OK to close the issue as won't fix?

 Support 2ndary index queries with only inequality clauses (LT, LTE, GT, GTE)
 

 Key: CASSANDRA-4476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4476
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Sylvain Lebresne
Assignee: Oded Peer
Priority: Minor
  Labels: cql
 Fix For: 3.0

 Attachments: 4476-2.patch, 4476-3.patch, 4476-5.patch, 
 cassandra-trunk-4476.patch


 Currently, a query that uses 2ndary indexes must have at least one EQ clause 
 (on an indexed column). Given that indexed CFs are local (and use a 
 LocalPartitioner that orders the rows by the type of the indexed column), we 
 should extend 2ndary indexes to allow querying indexed columns even when no 
 EQ clause is provided.
 As far as I can tell, the main problem to solve for this is to update 
 KeysSearcher.highestSelectivityPredicate(), i.e. how do we estimate the 
 selectivity of non-EQ clauses? I note however that if we can do that estimate 
 reasonably accurately, this might provide better performance even for index 
 queries that have both EQ and non-EQ clauses, because some non-EQ clauses may 
 have a much better selectivity than EQ ones (say you index both the user 
 country and birth date; for SELECT * FROM users WHERE country = 'US' AND 
 birthdate > 'Jan 2009' AND birthdate < 'July 2009', you'd better use the 
 birthdate index first).
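
The "pick the most selective predicate" idea can be shown with a toy estimator (all names and row counts invented for illustration; this is not the KeysSearcher code): choose the clause whose index is expected to match the fewest rows.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SelectivitySketch {
    // estimatedMatches maps each clause to the estimated number of rows its
    // index would return; smaller means more selective.
    static String mostSelective(Map<String, Double> estimatedMatches) {
        String best = null;
        double bestCount = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : estimatedMatches.entrySet()) {
            if (e.getValue() < bestCount) {
                bestCount = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> est = new LinkedHashMap<>();
        est.put("country = 'US'", 2_000_000d);      // EQ, but low selectivity
        est.put("birthdate > 'Jan 2009'", 50_000d); // non-EQ, far more selective
        System.out.println(mostSelective(est));
    }
}
```

The hard part the comment raises is producing those estimates for non-EQ clauses in the first place; the selection step itself is trivial.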





[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8424:

Fix Version/s: (was: 2.1.3)



[jira] [Updated] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8424:

Assignee: (was: Benjamin Lerer)



[jira] [Commented] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234773#comment-14234773
 ] 

Sylvain Lebresne commented on CASSANDRA-8424:
-

This is not a bug in that those queries are known to not be handled by the code. 
The reason (for the first query at least) is what I explained in CASSANDRA-6377: 
it's hard to do without a good internal refactoring because of some silliness in 
the current internals. As it happens, we have CASSANDRA-8099 for such a 
refactoring, so this should be solved in 3.0, but it will have to wait on that.

The 2nd query, where the primary key is given in full, would be easier to deal 
with, but it would still require some special casing, and I'm not convinced it's 
useful enough in practice to be worth adding a special case just for that (it 
should be handled without special casing post-CASSANDRA-8099).



[jira] [Commented] (CASSANDRA-8424) Collection filtering not working when using PK

2014-12-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234776#comment-14234776
 ] 

Sylvain Lebresne commented on CASSANDRA-8424:
-

Or to sum up: I don't think the added complexity of adding this on top of the 
current code is worth the benefits, and I suggest just waiting on 
CASSANDRA-8099.



[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-12-04 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8365:

Assignee: Benjamin Lerer

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh
 Fix For: 2.1.3


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as index name, even though it is unquoted. Trying to quote the 
 index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent. Shouldn't the index name be lowercased before the 
 index is created?
 Here is the code to reproduce the issue:
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest;

 CREATE TABLE schemabuilderit.indextest (
     a int PRIMARY KEY,
     b int
 );
 CREATE INDEX FooBar ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'"
 {code}
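
The behavior hinges on CQL's identifier rule: unquoted identifiers fold to lowercase, while double-quoted identifiers keep their case. A small sketch of that folding (invented helper, for illustration only):

```java
public class CqlIdentifier {
    // CQL rule: unquoted identifiers fold to lowercase;
    // double-quoted identifiers preserve their case.
    static String normalize(String raw) {
        if (raw.length() >= 2 && raw.startsWith("\"") && raw.endsWith("\"")) {
            return raw.substring(1, raw.length() - 1);
        }
        return raw.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("FooBar"));     // unquoted
        System.out.println(normalize("\"FooBar\"")); // quoted
    }
}
```

The reported inconsistency is that index creation appears to keep `FooBar` as-is, while index lookup on DROP applies the lowercase folding shown here.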





[jira] [Commented] (CASSANDRA-4987) Support more queries when ALLOW FILTERING is used.

2014-12-04 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234781#comment-14234781
 ] 

Sylvain Lebresne commented on CASSANDRA-4987:
-

Yes, that's the goal.

 Support more queries when ALLOW FILTERING is used.
 --

 Key: CASSANDRA-4987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4987
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
  Labels: cql
 Fix For: 3.0


 Even after CASSANDRA-4915, there is still a bunch of queries that we don't 
 support even if {{ALLOW FILTERING}} is used: typically, pretty much any 
 query with a restriction on a non-primary-key column, unless one of those 
 restrictions is an EQ on an indexed column.
 If {{ALLOW FILTERING}} is used, we could allow those queries out of 
 convenience.





[3/6] cassandra git commit: Use live sstables in snapshot repair if possible

2014-12-04 Thread yukim
Use live sstables in snapshot repair if possible

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8312


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ceed3a20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ceed3a20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ceed3a20

Branch: refs/heads/trunk
Commit: ceed3a20ef78b402a7a734e63d758aff105fa2de
Parents: 4030088
Author: Jimmy Mårdell ya...@spotify.com
Authored: Thu Dec 4 09:59:34 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:00:53 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 36 ++--
 .../db/compaction/CompactionManager.java| 13 +++
 3 files changed, 38 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dc3896d..79c2d81 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -30,6 +30,7 @@
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6cdf9e9..b5c6c98 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1840,10 +1840,40 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
     public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
     {
+        Map<Integer, SSTableReader> active = new HashMap<>();
+        for (SSTableReader sstable : data.getView().sstables)
+            active.put(sstable.descriptor.generation, sstable);
         Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();
-        List<SSTableReader> readers = new ArrayList<SSTableReader>(snapshots.size());
-        for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
-            readers.add(SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner));
+        List<SSTableReader> readers = new ArrayList<>(snapshots.size());
+        try
+        {
+            for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
+            {
+                // Try acquire reference to an active sstable instead of snapshot if it exists,
+                // to avoid opening new sstables. If it fails, use the snapshot reference instead.
+                SSTableReader sstable = active.get(entries.getKey().generation);
+                if (sstable == null || !sstable.acquireReference())
+                {
+                    if (logger.isDebugEnabled())
+                        logger.debug("using snapshot sstable " + entries.getKey());
+                    sstable = SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner);
+                    // This is technically not necessary since it's a snapshot but makes things easier
+                    sstable.acquireReference();
+                }
+                else if (logger.isDebugEnabled())
+                {
+                    logger.debug("using active sstable " + entries.getKey());
+                }
+                readers.add(sstable);
+            }
+        }
+        catch (IOException | RuntimeException e)
+        {
+            // In case one of the snapshot sstables fails to open,
+            // we must release the references to the ones we opened so far
+            SSTableReader.releaseReferences(readers);
+            throw e;
+        }
         return readers;
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d298e72..19dedb0 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -765,8 +765,8 @@ public class CompactionManager implements 
CompactionManagerMBean
 sstables = cfs.getSnapshotSSTableReader(snapshotName);
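
The acquire-or-open, release-all-on-failure pattern used in getSnapshotSSTableReader above generalizes to the following simplified sketch (plain Java, not Cassandra code; all names invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountSketch {
    static class Resource {
        final AtomicInteger refs = new AtomicInteger(1);
        void acquire() { refs.incrementAndGet(); }
        void release() { refs.decrementAndGet(); }
    }

    // Acquire every resource or none: roll back the ones taken so far on failure.
    static List<Resource> acquireAll(List<Resource> resources, int failAt) {
        List<Resource> taken = new ArrayList<>();
        try {
            int i = 0;
            for (Resource r : resources) {
                if (i++ == failAt) throw new RuntimeException("open failed");
                r.acquire();
                taken.add(r);
            }
            return taken;
        } catch (RuntimeException e) {
            for (Resource r : taken) r.release(); // undo partial acquisitions
            throw e;
        }
    }

    public static void main(String[] args) {
        List<Resource> rs = List.of(new Resource(), new Resource(), new Resource());
        try {
            acquireAll(rs, 2); // third "open" fails
        } catch (RuntimeException ignored) {
        }
        // After rollback every resource is back to its initial refcount of 1
        System.out.println(rs.stream().allMatch(r -> r.refs.get() == 1));
    }
}
```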
 
   

[2/6] cassandra git commit: Use live sstables in snapshot repair if possible

2014-12-04 Thread yukim
Use live sstables in snapshot repair if possible

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8312


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ceed3a20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ceed3a20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ceed3a20

Branch: refs/heads/cassandra-2.1
Commit: ceed3a20ef78b402a7a734e63d758aff105fa2de
Parents: 4030088
Author: Jimmy Mårdell ya...@spotify.com
Authored: Thu Dec 4 09:59:34 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:00:53 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 36 ++--
 .../db/compaction/CompactionManager.java| 13 +++
 3 files changed, 38 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dc3896d..79c2d81 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -30,6 +30,7 @@
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6cdf9e9..b5c6c98 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1840,10 +1840,40 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
 {
+    Map<Integer, SSTableReader> active = new HashMap<>();
+    for (SSTableReader sstable : data.getView().sstables)
+        active.put(sstable.descriptor.generation, sstable);
     Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();
-    List<SSTableReader> readers = new ArrayList<SSTableReader>(snapshots.size());
-    for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
-        readers.add(SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner));
+    List<SSTableReader> readers = new ArrayList<>(snapshots.size());
+    try
+    {
+        for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
+        {
+            // Try acquire reference to an active sstable instead of snapshot if it exists,
+            // to avoid opening new sstables. If it fails, use the snapshot reference instead.
+            SSTableReader sstable = active.get(entries.getKey().generation);
+            if (sstable == null || !sstable.acquireReference())
+            {
+                if (logger.isDebugEnabled())
+                    logger.debug("using snapshot sstable " + entries.getKey());
+                sstable = SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner);
+                // This is technically not necessary since it's a snapshot but makes things easier
+                sstable.acquireReference();
+            }
+            else if (logger.isDebugEnabled())
+            {
+                logger.debug("using active sstable " + entries.getKey());
+            }
+            readers.add(sstable);
+        }
+    }
+    catch (IOException | RuntimeException e)
+    {
+        // In case one of the snapshot sstables fails to open,
+        // we must release the references to the ones we opened so far
+        SSTableReader.releaseReferences(readers);
+        throw e;
+    }
     return readers;
 }
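The pattern in this patch — prefer re-referencing an already-open live sstable, fall back to opening the snapshot copy, and roll back every acquired reference if any open fails — can be sketched in isolation. A minimal sketch; `Reader`, `acquireOrOpen`, and `openSnapshot` are hypothetical stand-ins for `SSTableReader` and its reference counting, not the actual Cassandra API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SnapshotRefSketch
{
    // Hypothetical stand-in for SSTableReader's reference counting.
    static final class Reader
    {
        final int generation;
        int refs = 1; // a live reader starts with one reference

        Reader(int generation) { this.generation = generation; }

        boolean acquireReference()
        {
            if (refs <= 0)
                return false; // reader is already being torn down
            refs++;
            return true;
        }

        void releaseReference() { refs--; }
    }

    // For each snapshot generation, prefer re-referencing the live reader;
    // fall back to "opening" the snapshot copy. On failure, release everything
    // acquired so far, mirroring the patch's catch block.
    static List<Reader> acquireOrOpen(Map<Integer, Reader> active, List<Integer> snapshotGens)
    {
        List<Reader> readers = new ArrayList<>(snapshotGens.size());
        try
        {
            for (int gen : snapshotGens)
            {
                Reader r = active.get(gen);
                if (r == null || !r.acquireReference())
                {
                    r = openSnapshot(gen); // stand-in for SSTableReader.open(...)
                    r.acquireReference();  // hold a reference, as the patch does
                }
                readers.add(r);
            }
        }
        catch (RuntimeException e)
        {
            for (Reader r : readers)
                r.releaseReference(); // roll back, like SSTableReader.releaseReferences
            throw e;
        }
        return readers;
    }

    static Reader openSnapshot(int gen)
    {
        if (gen < 0)
            throw new RuntimeException("failed to open snapshot sstable " + gen);
        return new Reader(gen);
    }

    public static void main(String[] args)
    {
        Map<Integer, Reader> active = new HashMap<>();
        Reader live = new Reader(1);
        active.put(1, live);

        // Generation 1 reuses the live reader; generation 2 opens the snapshot copy.
        List<Reader> readers = acquireOrOpen(active, Arrays.asList(1, 2));
        System.out.println(readers.size() + " readers, live refs=" + live.refs);
    }
}
```

The point of the rollback loop is that a caller who receives an exception must not be left holding half-acquired references, since that would keep compacted sstables alive indefinitely.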
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d298e72..19dedb0 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -765,8 +765,8 @@ public class CompactionManager implements 
CompactionManagerMBean
 sstables = cfs.getSnapshotSSTableReader(snapshotName);
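The caller-side contract implied by this CompactionManager change (the diff is truncated here) is that every reference handed out by `getSnapshotSSTableReader` must be released exactly once, even if validation throws. A minimal sketch of that release-in-finally shape; `Ref` and `validateAndRelease` are hypothetical illustrations, not Cassandra code:

```java
import java.util.Arrays;
import java.util.List;

public class ReleaseInFinally
{
    // Hypothetical stand-in for a reference-counted sstable handle.
    static final class Ref
    {
        int refs = 1;
        void release() { refs--; }
    }

    // Run validation, then release every reference in a finally block so the
    // count is balanced whether validation succeeds or throws.
    static int validateAndRelease(List<Ref> sstables, boolean fail)
    {
        try
        {
            if (fail)
                throw new RuntimeException("validation failed");
            return sstables.size();
        }
        finally
        {
            for (Ref r : sstables)
                r.release();
        }
    }

    public static void main(String[] args)
    {
        Ref a = new Ref(), b = new Ref();
        validateAndRelease(Arrays.asList(a, b), false);
        System.out.println(a.refs + " " + b.refs); // both released back to 0
    }
}
```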
 

[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-12-04 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/compaction/CompactionManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b7a0cd9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b7a0cd9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b7a0cd9e

Branch: refs/heads/trunk
Commit: b7a0cd9e6037a0fb21a5fb64310c50cd39e35496
Parents: 587657d ceed3a2
Author: Yuki Morishita yu...@apache.org
Authored: Thu Dec 4 17:26:59 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:26:59 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 36 ++--
 2 files changed, 34 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7a0cd9e/CHANGES.txt
--
diff --cc CHANGES.txt
index 041c1e1,79c2d81..145347b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -33,35 -16,7 +33,36 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
 + * Add DC-aware sequential repair (CASSANDRA-8193)
++ * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
 +
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7a0cd9e/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 0507973,b5c6c98..be89318
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2165,32 -1838,42 +2165,62 @@@ public class ColumnFamilyStore implemen
  }
  }
  
 +    private void writeSnapshotManifest(final JSONArray filesJSONArr, final String snapshotName)
 +    {
 +        final File manifestFile = directories.getSnapshotManifestFile(snapshotName);
 +        final JSONObject manifestJSON = new JSONObject();
 +        manifestJSON.put("files", filesJSONArr);
 +
 +        try
 +        {
 +            if (!manifestFile.getParentFile().exists())
 +                manifestFile.getParentFile().mkdirs();
 +            PrintStream out = new PrintStream(manifestFile);
 +            out.println(manifestJSON.toJSONString());
 +            out.close();
 +        }
 +        catch (IOException e)
 +        {
 +            throw new FSWriteError(e, manifestFile);
 +        }
 +    }
 +
      public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
      {
+         Map<Integer, SSTableReader> active = new HashMap<>();
+         for (SSTableReader sstable : data.getView().sstables)
+             active.put(sstable.descriptor.generation, sstable);
          Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();
-         List<SSTableReader> readers = new 

[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-12-04 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/compaction/CompactionManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b7a0cd9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b7a0cd9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b7a0cd9e

Branch: refs/heads/cassandra-2.1
Commit: b7a0cd9e6037a0fb21a5fb64310c50cd39e35496
Parents: 587657d ceed3a2
Author: Yuki Morishita yu...@apache.org
Authored: Thu Dec 4 17:26:59 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:26:59 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 36 ++--
 2 files changed, 34 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7a0cd9e/CHANGES.txt
--
diff --cc CHANGES.txt
index 041c1e1,79c2d81..145347b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -33,35 -16,7 +33,36 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
 + * Add DC-aware sequential repair (CASSANDRA-8193)
++ * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
 +
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7a0cd9e/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 0507973,b5c6c98..be89318
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2165,32 -1838,42 +2165,62 @@@ public class ColumnFamilyStore implemen
  }
  }
  
 +    private void writeSnapshotManifest(final JSONArray filesJSONArr, final String snapshotName)
 +    {
 +        final File manifestFile = directories.getSnapshotManifestFile(snapshotName);
 +        final JSONObject manifestJSON = new JSONObject();
 +        manifestJSON.put("files", filesJSONArr);
 +
 +        try
 +        {
 +            if (!manifestFile.getParentFile().exists())
 +                manifestFile.getParentFile().mkdirs();
 +            PrintStream out = new PrintStream(manifestFile);
 +            out.println(manifestJSON.toJSONString());
 +            out.close();
 +        }
 +        catch (IOException e)
 +        {
 +            throw new FSWriteError(e, manifestFile);
 +        }
 +    }
 +
      public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
      {
+         Map<Integer, SSTableReader> active = new HashMap<>();
+         for (SSTableReader sstable : data.getView().sstables)
+             active.put(sstable.descriptor.generation, sstable);
          Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();
-         List<SSTableReader> readers = new 

[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-12-04 Thread yukim
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/db/ColumnFamilyStore.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7208383
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7208383
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7208383

Branch: refs/heads/trunk
Commit: a7208383fba67fda025d354c66491c668887602a
Parents: f5866ca b7a0cd9
Author: Yuki Morishita yu...@apache.org
Authored: Thu Dec 4 17:30:09 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:30:09 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 ++--
 2 files changed, 33 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7208383/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7208383/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[1/6] cassandra git commit: Use live sstables in snapshot repair if possible

2014-12-04 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 4030088ec -> ceed3a20e
  refs/heads/cassandra-2.1 587657d37 -> b7a0cd9e6
  refs/heads/trunk f5866ca2b -> a7208383f


Use live sstables in snapshot repair if possible

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8312


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ceed3a20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ceed3a20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ceed3a20

Branch: refs/heads/cassandra-2.0
Commit: ceed3a20ef78b402a7a734e63d758aff105fa2de
Parents: 4030088
Author: Jimmy Mårdell ya...@spotify.com
Authored: Thu Dec 4 09:59:34 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Dec 4 17:00:53 2014 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 36 ++--
 .../db/compaction/CompactionManager.java| 13 +++
 3 files changed, 38 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dc3896d..79c2d81 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -30,6 +30,7 @@
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 6cdf9e9..b5c6c98 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1840,10 +1840,40 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 public List<SSTableReader> getSnapshotSSTableReader(String tag) throws IOException
 {
+    Map<Integer, SSTableReader> active = new HashMap<>();
+    for (SSTableReader sstable : data.getView().sstables)
+        active.put(sstable.descriptor.generation, sstable);
     Map<Descriptor, Set<Component>> snapshots = directories.sstableLister().snapshots(tag).list();
-    List<SSTableReader> readers = new ArrayList<SSTableReader>(snapshots.size());
-    for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
-        readers.add(SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner));
+    List<SSTableReader> readers = new ArrayList<>(snapshots.size());
+    try
+    {
+        for (Map.Entry<Descriptor, Set<Component>> entries : snapshots.entrySet())
+        {
+            // Try acquire reference to an active sstable instead of snapshot if it exists,
+            // to avoid opening new sstables. If it fails, use the snapshot reference instead.
+            SSTableReader sstable = active.get(entries.getKey().generation);
+            if (sstable == null || !sstable.acquireReference())
+            {
+                if (logger.isDebugEnabled())
+                    logger.debug("using snapshot sstable " + entries.getKey());
+                sstable = SSTableReader.open(entries.getKey(), entries.getValue(), metadata, partitioner);
+                // This is technically not necessary since it's a snapshot but makes things easier
+                sstable.acquireReference();
+            }
+            else if (logger.isDebugEnabled())
+            {
+                logger.debug("using active sstable " + entries.getKey());
+            }
+            readers.add(sstable);
+        }
+    }
+    catch (IOException | RuntimeException e)
+    {
+        // In case one of the snapshot sstables fails to open,
+        // we must release the references to the ones we opened so far
+        SSTableReader.releaseReferences(readers);
+        throw e;
+    }
     return readers;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ceed3a20/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d298e72..19dedb0 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ 

[jira] [Updated] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8418:
---
Reproduced In:   (was: 3.0)
Since Version:   (was: 3.0)
Fix Version/s: (was: 3.0)
   2.1.3
   2.0.12
   Issue Type: Bug  (was: Test)

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.12, 2.1.3


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8418) Query now requiring allow filtering after refactoring

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8418:
---
Reproduced In: 2.1.2, 2.0.11

 Query now requiring allow filtering after refactoring
 -

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.12, 2.1.3


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8418) Queries that require allow filtering are working without it

2014-12-04 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8418:
---
Summary: Queries that require allow filtering are working without it  (was: 
Query now requiring allow filtering after refactoring)

 Queries that require allow filtering are working without it
 ---

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.12, 2.1.3


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7350) Decommissioning nodes borks the seed node - can't add additional nodes

2014-12-04 Thread Masashi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234939#comment-14234939
 ] 

Masashi Ozawa commented on CASSANDRA-7350:
--

It's still happening in 2.0.11 for me. Filed CASSANDRA-8422.

 Decommissioning nodes borks the seed node - can't add additional nodes
 --

 Key: CASSANDRA-7350
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7350
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu using the auto-clustering AMI
Reporter: Steven Lowenthal
Assignee: Shawn Kumar
Priority: Minor
  Labels: qa-resolved
 Fix For: 2.0.9


 1) Launch a 4 node cluster - I used the auto-clustering AMI (you get nodes 
 0-3)
 2) decommission the last 2 nodes (nodes 2 and 3, leaving a 2 node cluster)
 3) wipe the data directories from node 2
 4) bootstrap node2 - it won't join: "unable to gossip with any seeds".
 If you bootstrap the node a second time, it will join.  However if you try to 
 bootstrap node 3, it will also fail.
 I discovered that bouncing the seed node fixes the problem.  I think it 
 cropped up in 2.0.7.
 Error:
 ERROR [main] 2014-06-03 21:52:46,649 CassandraDaemon.java (line 497) 
 Exception encountered during startup
 java.lang.RuntimeException: Unable to gossip with any seeds
   at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1193)
   at 
 org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:447)
   at 
 org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:656)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:612)
   at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:505)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:362)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:480)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:569)
 ERROR [StorageServiceShutdownHook] 2014-06-03 21:52:46,741 
 CassandraDaemon.java (line 198) Exception in thread Thread[StorageServi



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-04 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234984#comment-14234984
 ] 

Parth Setya commented on CASSANDRA-8405:


Thanks for the response. Yes, I think we can do that, but then we would not be 
able to utilize the Auto Purge and Auto Deletion property (data is removed 
automatically when the TTL is reached).
Our API was built on the assumption that expired data is deleted automatically.


 Is there a way to override the current MAX_TTL value from 20 yrs to a value  
 > 20 yrs.
 -

 Key: CASSANDRA-8405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: Linux(RH)
Reporter: Parth Setya
Priority: Blocker
  Labels: MAX_TTL, date, expiration, ttl

 We are migrating data from Oracle to C*.
 The expiration date for a certain column was set to 90 years in Oracle.
 Here we are not able to make that value go beyond 20 years.
 Could you recommend a way to override this value?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8418) Queries that require allow filtering are working without it

2014-12-04 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235224#comment-14235224
 ] 

Benjamin Lerer commented on CASSANDRA-8418:
---

That was my plan but I did not find how to reopen the ticket from my phone. 
Sorry for that.

 Queries that require allow filtering are working without it
 ---

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.12, 2.1.3


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-8418) Queries that require allow filtering are working without it

2014-12-04 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8418:
--
Comment: was deleted

(was: That was my plan but I did not find how to reopen the ticket from my 
phone. Sorry for that.)

 Queries that require allow filtering are working without it
 ---

 Key: CASSANDRA-8418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8418
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 2.0.12, 2.1.3


 The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has 
 begun failing after the changes to CASSANDRA-7981. 
 With the schema {code}CREATE TABLE blogs (
 blog_id int,
 time1 int,
 time2 int,
 author text,
 content text,
 PRIMARY KEY (blog_id, time1, time2){code}
 and {code}CREATE INDEX ON blogs(author){code}, then the query
 {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND 
 author='foo'{code} now requires ALLOW FILTERING, but did not before the 
 refactor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)