[jira] [Commented] (CASSANDRA-7875) Prepared statements using dropped indexes are not handled correctly

2015-03-03 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344738#comment-14344738
 ] 

Aleksey Yeschenko commented on CASSANDRA-7875:
--

Thanks for looking into it. I'm leaning towards leaving 2.0 alone, but not 
sure. [~thobbs] ?

 Prepared statements using dropped indexes are not handled correctly
 ---

 Key: CASSANDRA-7875
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7875
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.4

 Attachments: prepared_statements_test.py, repro.py


 When select statements are prepared, we verify that the column restrictions 
 use indexes (where necessary).  However, we don't perform a similar check 
 when the statement is executed, so it fails somewhere further down the line.  
 In this case, it hits an assertion:
 {noformat}
 java.lang.AssertionError: Sequential scan with filters is not supported (if 
 you just created an index, you need to wait for the creation to be propagated 
 to all nodes before querying it)
   at 
 org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:259)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1759)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
   at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
   at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1394)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}
 During execution, we should check that the indexes still exist and provide a 
 better error if they do not.
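A minimal sketch of the proposed execution-time check, using hypothetical names rather than Cassandra's actual classes: the prepared statement remembers which indexes it was planned against, and execution re-validates them so a dropped index yields a clear client error instead of an internal AssertionError.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: re-validate at execution time that every index the
// prepared statement depends on still exists, failing fast with a clear error.
class PreparedSelect {
    // stand-in for the live schema's set of known index names
    static final Set<String> liveIndexes = ConcurrentHashMap.newKeySet();

    private final Set<String> requiredIndexes;

    PreparedSelect(Set<String> requiredIndexes) {
        this.requiredIndexes = requiredIndexes;
    }

    void validateBeforeExecute() {
        for (String idx : requiredIndexes) {
            if (!liveIndexes.contains(idx))
                throw new IllegalStateException(
                    "Index '" + idx + "' was dropped; the statement must be re-prepared");
        }
    }
}
```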



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8761) Make custom role options accessible from IRoleManager

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8761:
---
Reviewer: Aleksey Yeschenko

 Make custom role options accessible from IRoleManager
 -

 Key: CASSANDRA-8761
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8761
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0

 Attachments: 8761.txt


 IRoleManager implementations may support custom OPTIONS arguments to CREATE & 
 ALTER ROLE. If supported, these custom options should be retrievable from the 
 IRoleManager and included in the results of LIST ROLES queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8894:
---

 Summary: Our default buffer size for (uncompressed) buffered reads 
should be smaller, and based on the expected record size
 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


A large contributor to buffered reads being slower than mmapped is likely that 
we read a full 64Kb at once, when average record sizes may be as low as 140 
bytes in our stress tests. The TLB has only 128 entries on a modern core, and 
each read will touch 16 of these, meaning we will almost never hit in the TLB 
and will incur at least 15 unnecessary misses each time (as well as the other 
costs of larger-than-necessary accesses). When working with an SSD there is 
little to no benefit to reading more than 4Kb at once, and in either case 
reading more data than we need is wasteful. So, I propose selecting a buffer 
size that is the next power of 2 larger than our average record size (with a 
minimum of 4Kb), so that we expect to complete each read in one operation. I 
also propose that we create a pool of these buffers up-front, and that we 
ensure they are all exactly aligned to a virtual page, so that the source and 
target operations each touch exactly one virtual page per 4Kb of expected 
record size.
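The sizing rule described above (the next power of two above the average record size, floored at 4Kb) is simple to state in code. This is an illustrative sketch, not Cassandra's implementation:

```java
// Illustrative sketch of the proposed sizing rule: the smallest power of two
// that is >= the average record size, with a 4Kb floor.
class ReadBufferSizing {
    static final int MIN_BUFFER_SIZE = 4096;

    static int bufferSize(int avgRecordSize) {
        if (avgRecordSize <= MIN_BUFFER_SIZE)
            return MIN_BUFFER_SIZE;
        // round up to the next power of two
        return Integer.highestOneBit(avgRecordSize - 1) << 1;
    }
}
```

For the 140-byte records cited above this yields the 4Kb minimum, so each record is expected to be served by a single page-sized read.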



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8894:

Description: A large contributor to buffered reads being slower than mmapped is 
likely that we read a full 64Kb at once, when average record sizes may be as 
low as 140 bytes in our stress tests. The TLB has only 128 entries on a modern 
core, and each read will touch 32 of these, meaning we will almost never hit 
in the TLB and will incur at least 30 unnecessary misses each time (as well as 
the other costs of larger-than-necessary accesses). When working with an SSD 
there is little to no benefit to reading more than 4Kb at once, and in either 
case reading more data than we need is wasteful. So, I propose selecting a 
buffer size that is the next power of 2 larger than our average record size 
(with a minimum of 4Kb), so that we expect to complete each read in one 
operation. I also propose that we create a pool of these buffers up-front, and 
that we ensure they are all exactly aligned to a virtual page, so that the 
source and target operations each touch exactly one virtual page per 4Kb of 
expected record size.  (was: A large contributor to buffered reads being 
slower than mmapped is likely that we read a full 64Kb at once, when average 
record sizes may be as low as 140 bytes in our stress tests. The TLB has only 
128 entries on a modern core, and each read will touch 16 of these, meaning we 
will almost never hit in the TLB and will incur at least 15 unnecessary misses 
each time (as well as the other costs of larger-than-necessary accesses). When 
working with an SSD there is little to no benefit to reading more than 4Kb at 
once, and in either case reading more data than we need is wasteful. So, I 
propose selecting a buffer size that is the next power of 2 larger than our 
average record size (with a minimum of 4Kb), so that we expect to complete 
each read in one operation. I also propose that we create a pool of these 
buffers up-front, and that we ensure they are all exactly aligned to a virtual 
page, so that the source and target operations each touch exactly one virtual 
page per 4Kb of expected record size.)

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 A large contributor to buffered reads being slower than mmapped is likely 
 that we read a full 64Kb at once, when average record sizes may be as low as 
 140 bytes in our stress tests. The TLB has only 128 entries on a modern core, 
 and each read will touch 32 of these, meaning we will almost never hit in the 
 TLB and will incur at least 30 unnecessary misses each time (as well as the 
 other costs of larger-than-necessary accesses). When working with an SSD 
 there is little to no benefit to reading more than 4Kb at once, and in either 
 case reading more data than we need is wasteful. So, I propose selecting a 
 buffer size that is the next power of 2 larger than our average record size 
 (with a minimum of 4Kb), so that we expect to complete each read in one 
 operation. I also propose that we create a pool of these buffers up-front, 
 and that we ensure they are all exactly aligned to a virtual page, so that 
 the source and target operations each touch exactly one virtual page per 4Kb 
 of expected record size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8850:
---
Attachment: 8850.txt

The attached patch removes the ordering constraints on options supplied to 
{{(CREATE|ALTER) ROLE}} statements. {{WITH}} and {{AND}} also become 
optional, allowing for syntax like:

{code}
CREATE ROLE r WITH LOGIN AND PASSWORD = 'foo';
CREATE ROLE r WITH PASSWORD 'foo' AND LOGIN AND SUPERUSER;
CREATE ROLE r WITH SUPERUSER LOGIN PASSWORD = 'foo';
CREATE ROLE r NOLOGIN; // compatibility with existing syntax
CREATE ROLE r WITH PASSWORD = 'foo' LOGIN SUPERUSER;  // compatibility with 
existing syntax
{code}

All of the existing dtests in test_auth.py & test_auth_roles.py still pass, and 
I added some unit tests to verify the various permutations of the syntax.

{{(CREATE|ALTER) USER}} remains as before. That is, only the following form is 
supported:

{code}
CREATE USER u WITH PASSWORD 'foo' SUPERUSER;
{code}

 clean up options syntax for create/alter role 
 --

 Key: CASSANDRA-8850
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0

 Attachments: 8850.txt


 {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
 in a way more consistent with other statements.
 e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8895) Compressed sstables should only compress if the win is above a certain threshold, and should use a variable block size

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8895:
---

 Summary: Compressed sstables should only compress if the win is 
above a certain threshold, and should use a variable block size
 Key: CASSANDRA-8895
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8895
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


On performing a flush to disk, we should assess whether the data we're flushing 
will actually compress substantially, and how large the page should be to get 
the optimal trade-off between compression ratio and read latency. Decompressing 
64Kb chunks is wasteful when reading small records.
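A sketch of the "only compress if the win is above a threshold" idea, using java.util.zip for illustration (the 10% threshold is an assumed example value, not one taken from this ticket):

```java
import java.util.zip.Deflater;

// Illustrative sketch (not Cassandra's code): trial-compress a chunk at flush
// time and only keep the compressed form if the win exceeds a threshold.
class CompressionGate {
    static final double MAX_USEFUL_RATIO = 0.9; // assumed: require at least a 10% win

    static boolean worthCompressing(byte[] chunk) {
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(chunk);
        d.finish();
        byte[] out = new byte[chunk.length + 64]; // slack for incompressible input
        int compressedLen = 0;
        while (!d.finished())
            compressedLen += d.deflate(out, compressedLen, out.length - compressedLen);
        d.end();
        return (double) compressedLen / chunk.length < MAX_USEFUL_RATIO;
    }
}
```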



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8504) Stack trace is erroneously logged twice

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345217#comment-14345217
 ] 

Philip Thompson commented on CASSANDRA-8504:


Yep, the test is now passing and that commit did fix it. Feel free to close 
this now.

 Stack trace is erroneously logged twice
 ---

 Key: CASSANDRA-8504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8504
 Project: Cassandra
  Issue Type: Bug
 Environment: OSX and Ubuntu
Reporter: Philip Thompson
Assignee: Stefania
Priority: Minor
 Fix For: 3.0

 Attachments: node4.log


 The dtest 
 {{replace_address_test.TestReplaceAddress.replace_active_node_test}} is 
 failing on 3.0. The following can be seen in the log:{code}ERROR [main] 
 2014-12-17 15:12:33,871 CassandraDaemon.java:496 - Exception encountered 
 during startup
 java.lang.UnsupportedOperationException: Cannot replace a live node...
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
 [main/:na]
 ERROR [main] 2014-12-17 15:12:33,872 CassandraDaemon.java:584 - Exception 
 encountered during startup
 java.lang.UnsupportedOperationException: Cannot replace a live node...
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
 [main/:na]
 INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:33,873 Gossiper.java:1349 
 - Announcing shutdown
 INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:35,876 
 MessagingService.java:708 - Waiting for messaging service to quiesce{code}
 The test starts up a three node cluster, loads some data, then attempts to 
 start a fourth node with replace_address against the IP of a live node. This 
 is expected to fail, with one ERROR message in the log. In 3.0, we are seeing 
 two messages. 2.1-HEAD is working as expected. Attached is the full log of 
 the fourth node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8819) LOCAL_QUORUM writes returns wrong message

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8819:
---
Tester: Alan Boudreault  (was: Philip Thompson)

 LOCAL_QUORUM writes returns wrong message
 -

 Key: CASSANDRA-8819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8819
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.6
Reporter: Wei Zhu
Assignee: Sylvain Lebresne
 Fix For: 2.0.13

 Attachments: 8819-2.0.patch


 We have two DCs, each with 7 nodes.
 Here is the keyspace setup:
  create keyspace test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {DC2 : 3, DC1 : 3}
  and durable_writes = true;
 We brought down two nodes in DC2 for maintenance. We only write to DC1 using 
 LOCAL_QUORUM (using the DataStax Java client).
 But we see these errors in the log:
 Cassandra timeout during write query at consistency LOCAL_QUORUM (4 replica 
 were required but only 3 acknowledged the write)
 Why does it say 4 replicas were required? And why would it return an error to 
 the client, since LOCAL_QUORUM should succeed?
 Here is the output from nodetool status:
 Note: Ownership information does not include topology; for complete 
 information, specify a keyspace
 Datacenter: DC2
 ===============
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address   Load      Tokens  Owns   Host ID   Rack
 UN  10.2.0.1  10.92 GB  256     7.9%             RAC206
 UN  10.2.0.2  6.17 GB   256     8.0%             RAC106
 UN  10.2.0.3  6.63 GB   256     7.3%             RAC107
 DL  10.2.0.4  1.54 GB   256     7.7%             RAC107
 UN  10.2.0.5  6.02 GB   256     6.6%             RAC106
 UJ  10.2.0.6  3.68 GB   256     ?                RAC205
 UN  10.2.0.7  7.22 GB   256     7.7%             RAC205
 Datacenter: DC1
 ===============
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address   Load      Tokens  Owns   Host ID   Rack
 UN  10.1.0.1  6.04 GB   256     8.6%             RAC10
 UN  10.1.0.2  7.55 GB   256     7.4%             RAC8
 UN  10.1.0.3  5.83 GB   256     7.0%             RAC9
 UN  10.1.0.4  7.34 GB   256     7.9%             RAC6
 UN  10.1.0.5  7.57 GB   256     8.0%             RAC7
 UN  10.1.0.6  5.31 GB   256     7.3%             RAC10
 UN  10.1.0.7  5.47 GB   256     8.6%             RAC9
 I did a cql trace on the query; at the end the trace says
 Write timeout; received 3 of 4 required replies | 17:27:52,831 | 10.1.0.1 | 2002873
 I guess that is where the client gets the error from. But the rows were 
 inserted into Cassandra correctly. I also traced a read at LOCAL_QUORUM and it 
 behaves correctly: the reads don't go to DC2. The problem is only with writes 
 at LOCAL_QUORUM.
 {code}
 Tracing session: 5a789fb0-b70d-11e4-8fca-99bff9c19890

  activity | timestamp | source | source_elapsed
 ----------+-----------+--------+----------------
  execute_cql3_query | 17:27:50,828 | 10.1.0.1 | 0
  Parsing insert into test (user_id, created, event_data, event_id) values (123456789, 9eab8950-b70c-11e4-8fca-99bff9c19891, 'test', '16'); | 17:27:50,828 | 10.1.0.1 | 39
  Preparing statement | 17:27:50,828 | 10.1.0.1 | 135
  Message received from /10.1.0.1 | 17:27:50,829 | 10.1.0.5 | 25
  Sending message to /10.1.0.5 | 17:27:50,829 | 10.1.0.1 | 421
  Executing single-partition query on users | 17:27:50,829 | 10.1.0.5 | 177

[jira] [Created] (CASSANDRA-8896) Investigate upstream changes to compressors to fit contents exactly to one page

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8896:
---

 Summary: Investigate upstream changes to compressors to fit 
contents exactly to one page
 Key: CASSANDRA-8896
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8896
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict


For optimal disk performance, it makes most sense to choose our compression 
boundaries based on compressed size, not uncompressed. If our compressors could 
take a target length, and return the number of source bytes they managed to fit 
into that space, this would permit us to lower the number of disk accesses per 
read. [~blambov]: you've dived into LZ4. How tricky do you think this might be?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345198#comment-14345198
 ] 

Benedict commented on CASSANDRA-8067:
-

+1, although I think this code could do with being refactored, as there's poor 
isolation of concerns: the caller and callee of the CacheSerializer methods 
repeat much of the same work.

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.4

 Attachments: 8067.txt


 Hi,
 I have this stack trace in the logs of Cassandra server (v2.1)
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source) ~[na:1.7.0]
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
 ~[na:1.7.0]
 at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical, because this error occurred in the AutoSavingCache. 
 However, line 475 is about the CFMetaData, so it may hide a bigger issue...
 {code}
  474 CFMetaData cfm = 
 Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
 out);
 {code}
 Regards,
 Eric
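The NPE at line 475 suggests the schema lookup on line 474 returned null (e.g. the table was dropped while the cache was being saved). A hypothetical, heavily simplified sketch of the defensive pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified sketch: a schema lookup can return null if the
// table was dropped after its keys were cached, so skip those entries
// instead of dereferencing null.
class KeyCacheSaver {
    static List<String> serializableEntries(Map<String, String> schema,
                                            List<String> cachedTables) {
        List<String> out = new ArrayList<>();
        for (String table : cachedTables) {
            String cfm = schema.get(table); // stand-in for getCFMetaData(...)
            if (cfm == null)
                continue; // table dropped since caching: drop the entry, don't NPE
            out.add(cfm);
        }
        return out;
    }
}
```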



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345235#comment-14345235
 ] 

Philip Thompson commented on CASSANDRA-8890:


Feel free to submit this as a patch as explained here:
http://wiki.apache.org/cassandra/HowToContribute

 Enhance cassandra-env.sh to handle Java version output in case of OpenJDK 
 icedtea
 --

 Key: CASSANDRA-8890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8890
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
 Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago)
Reporter: Sumod Pawgi
Priority: Minor
 Fix For: 2.1.4


 Where observed:
 The Cassandra node has OpenJDK:
 java version "1.7.0_09-icedtea"
 In some situations, external agents trying to monitor a C* cluster need to 
 run the cassandra -v command to determine the Cassandra version and expect a 
 numerical output, e.g. java version "1.7.0_75" as in the case of Oracle 
 JDK. But if the cluster has OpenJDK IcedTea installed, this condition is 
 not satisfied and the agents will not work correctly, as the output from 
 cassandra -v is:
 /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: 
 integer expression expected
 Cause:
 The line which causes this behavior is:
 jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
 'NR==1 {print $2}'`
 Suggested enhancement:
 If we change the line to:
  jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`
 it will give $jvmver as 1.7.0_09 for the above case. 
 Can we add this enhancement to cassandra-env.sh? I would like to add it 
 myself and submit it for review, but I am not familiar with the C* check-in 
 process. There might be better ways to do this, but I thought this the 
 simplest, and as the addition is at the end of the line, it will be easy to 
 reverse if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality

2015-03-03 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345229#comment-14345229
 ] 

Chris Lohfink commented on CASSANDRA-8834:
--

So I can't seem to reproduce it, but when testing after upgrading to 2.1.3 
(along with the stress tool upgrade that happened there) I was getting that 
exception, which I can neither explain nor make happen again... I assumed it 
was something involving the change in schema when going from the old pure 
thrift table to what cqlstress creates now.

 Top partitions reporting wrong cardinality
 --

 Key: CASSANDRA-8834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8834
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Chris Lohfink
Assignee: Chris Lohfink
 Fix For: 2.1.4

 Attachments: cardinality.patch


 It always reports a cardinality of 1.  Patch also includes a try/catch around 
 the conversion of partition keys that isn't always handled well in thrift cfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8877:
---
Fix Version/s: 3.0
 Assignee: Benjamin Lerer

 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8709) Convert SequentialWriter from using RandomAccessFile to nio channel

2015-03-03 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345249#comment-14345249
 ] 

Joshua McKenzie commented on CASSANDRA-8709:


Branch updated.

bq. CompressedSW.flushData() calls crcMetadata.append(compressed.buffer.array() 
... is clearer.
Fixed. Left the rewind in place, since appendDirect relies on .position().

bq. In DataIntegrityMetadata, your new appendDirect call shouldn't be using 
mark and reset since it's racy. Better to .duplicate() the input buffer.
Switched to a duplicated ByteBuffer with mark/reset on that, as the counters 
should be of local use only and thus pose no threat from a raciness perspective.

bq. In LZ4Compressor.compress() the source length should be using .remaining() 
not .limit()
Good catch - fixed.

bq. All of your non-direct byte buffer code makes me nervous since you are 
accessing .array()...
I went ahead and swapped all of those calls to the appendDirect form.

I also uncommented a block in CompressorTest that snuck into the patch file.

bq. Write test for CompressedSW across all compressors
Added. The unit tests uncovered what appears to be a bug in 
CompressedSequentialWriter.resetAndTruncate with resetting to a mark that's at 
buffer-aligned length. I backported that test into current 2.0/2.1 and the same 
error occurs; we don't mark the current buffered data as dirty on 
resetAndTruncate so if we reset to the chunkOffset with a full buffer it's 
never marked dirty from a subsequent write and reBuffer just drops the data.  
I'll open a ticket for 2.0.13 to get that fix in once we've confirmed it here.

 Convert SequentialWriter from using RandomAccessFile to nio channel
 ---

 Key: CASSANDRA-8709
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8709
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 3.0


 For non-mmap'ed I/O on Windows, using nio channels will give us substantially 
 more flexibility w/regards to renaming and moving files around while writing 
 them.  This change in conjunction with CASSANDRA-4050 should allow us to 
 remove the Windows bypass code in SSTableRewriter for non-memory-mapped I/O.
 In general, migrating from instances of RandomAccessFile to nio channels will 
 help make Windows and linux behavior more consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345253#comment-14345253
 ] 

Sylvain Lebresne commented on CASSANDRA-8850:
-

I'll admit I'm not a huge fan of having gazillions ways of expressing the same 
thing, especially when there isn't a meaningful amount of character difference 
between the options. Since roles are new to 3.0, can't we just go with {{WITH}} 
and {{AND}} being mandatory (since that's how other DDL statements work)?

 clean up options syntax for create/alter role 
 --

 Key: CASSANDRA-8850
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0

 Attachments: 8850.txt


 {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
 in a way more consistent with other statements.
 e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345289#comment-14345289
 ] 

Joshua McKenzie commented on CASSANDRA-8086:


It appears you've double-decremented the connectionsPerClient record when 
the IP is over the limit:
{code}
if (perIpCount.incrementAndGet() > perIpLimit)
{
   perIpCount.decrementAndGet();
   // The decrement will be done in channelClosed(...)
{code}

While the counter is decremented in channelClosed, which is likely what that 
comment refers to, you're also decrementing the connectionsPerClient record 
again for the address in question.
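For contrast, a self-contained sketch (hypothetical names, not the patch's actual code) where the accept-time rollback and the close-time release are kept distinct, so each rejected accept decrements exactly once:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the rejected-accept path rolls back its own increment and must NOT
// also be decremented in channelClosed; only accepted connections release on close.
class PerIpConnectionLimiter {
    private final AtomicInteger perIpCount = new AtomicInteger();
    private final int perIpLimit;

    PerIpConnectionLimiter(int perIpLimit) { this.perIpLimit = perIpLimit; }

    boolean tryAccept() {
        if (perIpCount.incrementAndGet() > perIpLimit) {
            perIpCount.decrementAndGet(); // roll back the failed accept, once
            return false;
        }
        return true;
    }

    void channelClosed() {
        perIpCount.decrementAndGet(); // called only for accepted connections
    }

    int current() { return perIpCount.get(); }
}
```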

Other than that, LGTM.

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs. We have a 
 large number (~40,000) of clients hitting this cluster. A client normally 
 connects to 4 Cassandra instances. Some event (we think it was a schema change 
 on the server side) triggered the clients to establish connections to all 
 Cassandra instances in the local DC. This brought the server to its knees. 
 The client connections failed and the clients attempted re-connections.
 Cassandra should protect itself from such attacks from clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345295#comment-14345295
 ] 

Jonathan Ellis commented on CASSANDRA-8894:
---

bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

bq. I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 A large contributor to buffered reads being slower than mmapped ones is 
 likely that we read a full 64Kb at once, when average record sizes may be as 
 low as 140 bytes in our stress tests. The TLB has only 128 entries on a 
 modern core, and each read touches 32 of them, meaning we will almost never 
 hit in the TLB and will incur at least 30 unnecessary misses each time (as 
 well as the other costs of larger-than-necessary accesses). When working 
 with an SSD there is little to no benefit to reading more than 4Kb at once, 
 and in either case reading more data than we need is wasteful. So I propose 
 selecting a buffer size that is the next power of 2 larger than our average 
 record size (with a minimum of 4Kb), so that we expect to read each record 
 in one operation. I also propose that we create a pool of these buffers 
 up-front, and that we ensure they are all exactly aligned to a virtual page, 
 so that the source and target operations each touch exactly one virtual page 
 per 4Kb of expected record size.
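The sizing rule proposed above can be sketched in a few lines. This is a hypothetical illustration, not Cassandra's code; `proposedBufferSize` is an invented name, and the semantics (smallest power of two at least as large as the average record size, with a 4KiB floor) are one reading of the proposal.

```java
// Sketch of the proposed buffer sizing rule: round the average record size up
// to a power of two, with a 4KiB floor. Names are illustrative, not Cassandra's.
public class BufferSizing {
    static final int MIN_BUFFER_SIZE = 4096; // the 4Kb floor from the proposal

    static int proposedBufferSize(int avgRecordSize) {
        int size = MIN_BUFFER_SIZE;
        while (size < avgRecordSize) {
            size <<= 1; // double until the buffer covers an average record
        }
        return size;
    }

    public static void main(String[] args) {
        assert proposedBufferSize(140) == 4096;  // tiny records hit the 4KiB floor
        assert proposedBufferSize(5000) == 8192; // rounds up to the next power of two
        assert proposedBufferSize(4096) == 4096; // an exact power of two already fits
    }
}
```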



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8889:
--

Assignee: Tyler Hobbs

 CQL spec is missing doc for support of bind variables for LIMIT, TTL, and 
 TIMESTAMP
 ---

 Key: CASSANDRA-8889
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8889
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Jack Krupansky
Assignee: Tyler Hobbs
Priority: Minor

 CASSANDRA-4450 added the ability to specify a bind variable for the integer 
 value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been 
 updated to reflect this enhancement.
 Also, the special predefined bind variable names are not documented in the 
 CQL spec: [limit], [ttl], and [timestamp].





[jira] [Issue Comment Deleted] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-8086:
-
Comment: was deleted

(was: you are right, sigh... fixing now)

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, and a 
 large number (~40,000) of clients hitting it. A client normally connects to 
 4 Cassandra instances. Some event (we think a schema change on the server 
 side) triggered the clients to establish connections to every Cassandra 
 instance in the local DC, which brought the server to its knees. The client 
 connections failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack by clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.
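The kind of knob requested above amounts to a counting gate in front of the native-protocol accept path: admit a new connection only while the count is below a configured cap. The sketch below is illustrative only; `ConnectionLimiter` and its method names are invented here and are not the API introduced by the attached patches.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a max-connections knob: a lock-free counter that
// admits a connection only below a configured cap. Not Cassandra's actual code.
public class ConnectionLimiter {
    private final int maxConnections;
    private final AtomicInteger open = new AtomicInteger();

    ConnectionLimiter(int maxConnections) {
        this.maxConnections = maxConnections;
    }

    /** Returns true if the connection may proceed; false once the cap is reached. */
    boolean tryAcquire() {
        while (true) {
            int current = open.get();
            if (current >= maxConnections)
                return false; // at the cap: caller should reject/close the socket
            if (open.compareAndSet(current, current + 1))
                return true;  // slot claimed atomically
        }
    }

    /** Called when a connection closes, freeing a slot. */
    void release() {
        open.decrementAndGet();
    }

    public static void main(String[] args) {
        ConnectionLimiter limiter = new ConnectionLimiter(2);
        assert limiter.tryAcquire();
        assert limiter.tryAcquire();
        assert !limiter.tryAcquire(); // third connection is rejected
        limiter.release();
        assert limiter.tryAcquire();  // slot freed, accepted again
    }
}
```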





[jira] [Updated] (CASSANDRA-8657) long-test LongCompactionsTest fails

2015-03-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8657:
--
Attachment: 8657-2.0.txt

The test wasn't properly marking the files as compacting, and also wasn't 
properly cleaning up between tests.

 long-test LongCompactionsTest fails
 ---

 Key: CASSANDRA-8657
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8657
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Carl Yeksigian
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8657-2.0.txt, system.log


 Same error on 3 of the 4 tests in this suite - the failure is the same on 
 the 2.0 and 2.1 branches:
 {noformat}
 [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
 [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 
 27.294 sec
 [junit] 
 [junit] Testcase: 
 testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.&lt;init&gt;(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.&lt;init&gt;(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.&lt;init&gt;(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.&lt;init&gt;(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.&lt;init&gt;(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.&lt;init&gt;(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED
 {noformat}
 A system.log is attached from the above run on 2.0 HEAD.





[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2015-03-03 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345246#comment-14345246
 ] 

 Brian Hess commented on CASSANDRA-8234:


It would also be useful to be able to do: 
INSERT INTO foo(x, y, z) SELECT a, b, c FROM bar;

That is, you already have a table set up and want to INSERT into it.  This is 
sort of under the covers of CTAS (step 1: create the table; step 2: insert the 
data into it).

 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.1


 A frequent request from users is the ability to do CREATE TABLE AS 
 SELECT... The COPY command can be enhanced to perform simple and customized 
 copies of existing tables to satisfy the need. 
 - A simple copy is COPY table a TO new table b.
 - A custom copy can mimic Postgres (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)





[jira] [Updated] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8832:

Reviewer: Branimir Lambov

 SSTableRewriter.abort() should be more robust to failure
 

 Key: CASSANDRA-8832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4


 This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
 during abort, introducing a failure risk. This patch further preempts 
 CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
 any internal assertion checks do not actually worsen the state.





[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345280#comment-14345280
 ] 

Sylvain Lebresne commented on CASSANDRA-8877:
-

We should support that at some point, but it's probably dependent on 
CASSANDRA-7396. Unless we want to make {{writetime}} and {{ttl}} work on a 
collection column directly but return a list of timestamps/ttls, one for each 
element (which can be done, though with the slight downside that it would 
make the code for handling timestamp and ttl in Selection a tad more 
complex).

 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 





[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345286#comment-14345286
 ] 

Jonathan Ellis commented on CASSANDRA-8878:
---

Won't we be able to mix counter and non-counter columns once Aleksey's counter 
cell format change is done?  In which case I'm reluctant to add special syntax.

 Counter Tables should be more clearly identified
 

 Key: CASSANDRA-8878
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0


 Counter tables are internally considered a particular kind of table, 
 different from regular ones. This counter-specific nature is implicitly 
 defined by the fact that columns within the table have the {{counter}} data 
 type. This nature turns out to be persistent over time; that is, if the 
 user does the following:
 {code}
 CREATE TABLE counttable (key uuid primary key, count counter);
 ALTER TABLE counttable DROP count;
 ALTER TABLE counttable ADD count2 int;
 {code} 
 the following error will be thrown:
 {code}
 Cannot add a non counter column (count2) in a counter column family
 {code}
 even though the table no longer has any counter column. This implicit, 
 persistent nature can be challenging for users to understand (and impossible 
 to infer in the case above). For this reason a more explicit declaration of 
 counter tables would be appropriate, such as:
 {code}
 CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
 {code}
 Besides that, adding a boolean {{counter_table}} column to the 
 {{system.schema_columnfamilies}} table would allow external tools to easily 
 differentiate a counter table from a regular one.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345293#comment-14345293
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

you are right, sigh... fixing now

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, and a 
 large number (~40,000) of clients hitting it. A client normally connects to 
 4 Cassandra instances. Some event (we think a schema change on the server 
 side) triggered the clients to establish connections to every Cassandra 
 instance in the local DC, which brought the server to its knees. The client 
 connections failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack by clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345294#comment-14345294
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

you are right, sigh... fixing now

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, and a 
 large number (~40,000) of clients hitting it. A client normally connects to 
 4 Cassandra instances. Some event (we think a schema change on the server 
 side) triggered the clients to establish connections to every Cassandra 
 instance in the local DC, which brought the server to its knees. The client 
 connections failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack by clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.





[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345310#comment-14345310
 ] 

Sylvain Lebresne commented on CASSANDRA-8878:
-

Afaik, none of the reasons for not allowing the mixing of counter and 
non-counter columns will be removed by splitting counters into cells, so that 
wouldn't change anything for this issue.

 Counter Tables should be more clearly identified
 

 Key: CASSANDRA-8878
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0


 Counter tables are internally considered a particular kind of table, 
 different from regular ones. This counter-specific nature is implicitly 
 defined by the fact that columns within the table have the {{counter}} data 
 type. This nature turns out to be persistent over time; that is, if the 
 user does the following:
 {code}
 CREATE TABLE counttable (key uuid primary key, count counter);
 ALTER TABLE counttable DROP count;
 ALTER TABLE counttable ADD count2 int;
 {code} 
 the following error will be thrown:
 {code}
 Cannot add a non counter column (count2) in a counter column family
 {code}
 even though the table no longer has any counter column. This implicit, 
 persistent nature can be challenging for users to understand (and impossible 
 to infer in the case above). For this reason a more explicit declaration of 
 counter tables would be appropriate, such as:
 {code}
 CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
 {code}
 Besides that, adding a boolean {{counter_table}} column to the 
 {{system.schema_columnfamilies}} table would allow external tools to easily 
 differentiate a counter table from a regular one.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345314#comment-14345314
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

Addressed comment and uploaded new patch

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, and a 
 large number (~40,000) of clients hitting it. A client normally connects to 
 4 Cassandra instances. Some event (we think a schema change on the server 
 side) triggered the clients to establish connections to every Cassandra 
 instance in the local DC, which brought the server to its knees. The client 
 connections failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack by clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.





[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-8086:
-
Attachment: 
0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch

Address comment... 

 Cassandra should have ability to limit the number of native connections
 ---

 Key: CASSANDRA-8086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
 Project: Cassandra
  Issue Type: Bug
Reporter: Vishy Kasar
Assignee: Norman Maurer
 Fix For: 2.1.4

 Attachments: 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt


 We have a production cluster with 72 instances spread across 2 DCs, and a 
 large number (~40,000) of clients hitting it. A client normally connects to 
 4 Cassandra instances. Some event (we think a schema change on the server 
 side) triggered the clients to establish connections to every Cassandra 
 instance in the local DC, which brought the server to its knees. The client 
 connections failed and the clients attempted re-connections. 
 Cassandra should protect itself from such an attack by clients. Do we have 
 any knobs to control the maximum number of connections? If not, we need to 
 add that knob.





[jira] [Commented] (CASSANDRA-6060) Remove internal use of Strings for ks/cf names

2015-03-03 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345250#comment-14345250
 ] 

 Brian Hess commented on CASSANDRA-6060:


I know this ticket is closed, but there is another use case that might make 
this more useful. Namely, with the advent of CTAS (CASSANDRA-8234), you could 
want to change the primary key of a table. To do that, you could create a new 
table with the new primary key and select the old data into it. The last 
step, for cleanliness, might be to drop the original table and rename the new 
table to the original table name, thereby completing the change of primary 
key.

 Remove internal use of Strings for ks/cf names
 --

 Key: CASSANDRA-6060
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6060
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Ariel Weisberg
  Labels: performance

 We toss a lot of Strings around internally, including across the network.  
 Once a request has been Prepared, we ought to be able to encode these as int 
 ids.
 Unfortuntely, we moved from int to uuid in CASSANDRA-3794, which was a 
 reasonable move at the time, but a uuid is a lot bigger than an int.  Now 
 that we have CAS we can allow concurrent schema updates while still using 
 sequential int IDs.
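The substitution described above - refer to a keyspace/table by a small sequential int id once a statement is prepared, rather than shipping Strings (or 16-byte UUIDs) around - can be sketched as a tiny registry. This is an illustration only, not Cassandra's schema code; in the real system the id assignment would need the cluster-wide CAS coordination the ticket mentions, which is not shown here.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: assign each keyspace.table name a sequential int id,
// so later references can carry 4 bytes instead of a String or a 16-byte UUID.
public class TableIdRegistry {
    private final AtomicInteger nextId = new AtomicInteger();
    private final Map<String, Integer> ids = new ConcurrentHashMap<>();

    /** Returns a stable int id for the given keyspace/table pair. */
    int idFor(String keyspace, String table) {
        // computeIfAbsent assigns the next sequential id exactly once per name
        return ids.computeIfAbsent(keyspace + "." + table,
                                   k -> nextId.getAndIncrement());
    }

    public static void main(String[] args) {
        TableIdRegistry registry = new TableIdRegistry();
        int a = registry.idFor("ks1", "users");
        int b = registry.idFor("ks1", "events");
        assert a != b;                              // distinct tables get distinct ids
        assert registry.idFor("ks1", "users") == a; // ids are stable across lookups
    }
}
```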





[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345262#comment-14345262
 ] 

Sam Tunnicliffe commented on CASSANDRA-8850:


Certainly, we can do that; I'm also not a fan of having multiple equivalent 
expressions for the same thing. The reasoning for making them optional was to 
preserve support for things like {{CREATE ROLE r NOSUPERUSER;}}, which was 
brought along from the {{CREATE USER}} syntax, and I assume was there 
originally to emulate Postgres.

I'll post a new patch (and a PR for dtests) directly.



 clean up options syntax for create/alter role 
 --

 Key: CASSANDRA-8850
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0

 Attachments: 8850.txt


 {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
 in a way more consistent with other statements.
 e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}





[jira] [Comment Edited] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345295#comment-14345295
 ] 

Jonathan Ellis edited comment on CASSANDRA-8894 at 3/3/15 4:35 PM:
---

bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

bq. I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.


was (Author: jbellis):
bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

 I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 A large contributor to buffered reads being slower than mmapped ones is 
 likely that we read a full 64Kb at once, when average record sizes may be as 
 low as 140 bytes in our stress tests. The TLB has only 128 entries on a 
 modern core, and each read touches 32 of them, meaning we will almost never 
 hit in the TLB and will incur at least 30 unnecessary misses each time (as 
 well as the other costs of larger-than-necessary accesses). When working 
 with an SSD there is little to no benefit to reading more than 4Kb at once, 
 and in either case reading more data than we need is wasteful. So I propose 
 selecting a buffer size that is the next power of 2 larger than our average 
 record size (with a minimum of 4Kb), so that we expect to read each record 
 in one operation. I also propose that we create a pool of these buffers 
 up-front, and that we ensure they are all exactly aligned to a virtual page, 
 so that the source and target operations each touch exactly one virtual page 
 per 4Kb of expected record size.





[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-03-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345034#comment-14345034
 ] 

Marcus Eriksson commented on CASSANDRA-8739:


The new compacting-L0 calculation takes the sstable *instances* from the 
datatracker's compacting set. These instances are not the same as the ones in 
the LCS L0 (the LCS L0 instances can have had their start positions moved); 
hoping to fix that in CASSANDRA-8764.

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We also need 
 to include the tmplink files in this check.
 Note that in 2.1 overlap is not as big a problem as it was earlier: if 
 adding an sstable would cause overlap, we send it back to L0 instead, 
 meaning we do a bit more compaction but never actually have overlap.
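The overlap check discussed above boils down to an interval-intersection test on each sstable's [first, last] token range against the tables in the next level. A minimal sketch, with plain `long` tokens standing in for Cassandra's token types (not the project's actual code):

```java
// Illustrative interval-overlap test like the one LCS candidate selection
// relies on: do two closed [first, last] token ranges intersect?
public class OverlapCheck {
    static boolean overlaps(long aFirst, long aLast, long bFirst, long bLast) {
        // Closed intervals intersect iff each one starts no later than the other ends.
        return aFirst <= bLast && bFirst <= aLast;
    }

    public static void main(String[] args) {
        assert overlaps(0, 100, 50, 150);   // partial overlap
        assert overlaps(0, 100, 100, 200);  // touching endpoints count (closed ranges)
        assert !overlaps(0, 100, 101, 200); // disjoint ranges
    }
}
```

The bug in the ticket is not the test itself but its inputs: the ranges compared came from stale sstable instances whose start positions had since moved.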





[jira] [Commented] (CASSANDRA-8757) IndexSummaryBuilder should construct itself offheap, and share memory between the result of each build() invocation

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345066#comment-14345066
 ] 

Benedict commented on CASSANDRA-8757:
-

OK, I've pushed a new version to the repository that improves the comments and 
integrates SafeMemoryWriter with DataOutputTest (also slightly changing the 
behaviour of SafeMemoryWriter to support this, but in a way that is probably 
generally sensible anyway).

 IndexSummaryBuilder should construct itself offheap, and share memory between 
 the result of each build() invocation
 ---

 Key: CASSANDRA-8757
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8757
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4








[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345036#comment-14345036
 ] 

Benedict commented on CASSANDRA-8739:
-

I'm hoping to fix this in CASSANDRA-8568 also

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We also need 
 to include the tmplink files in this check.
 Note that in 2.1 overlap is not as big a problem as it was earlier: if 
 adding an sstable would cause overlap, we send it back to L0 instead, 
 meaning we do a bit more compaction but never actually have overlap.





[jira] [Updated] (CASSANDRA-8884) Opening a non-system keyspace before first accessing the system keyspace results in deadlock

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8884:

Summary: Opening a non-system keyspace before first accessing the system 
keyspace results in deadlock  (was: CQLSSTableWriter freezes on addRow)

 Opening a non-system keyspace before first accessing the system keyspace 
 results in deadlock
 

 Key: CASSANDRA-8884
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8884
 Project: Cassandra
  Issue Type: Bug
Reporter: Piotr Kołaczkowski
Assignee: Benjamin Lerer
 Attachments: bulk.jstack


 I created a writer like this:
 {code}
 val writer = CQLSSTableWriter.builder()
   .forTable(tableDef.cql)
   .using(insertStatement)
   .withPartitioner(partitioner)
   .inDirectory(outputDirectory)
   .withBufferSizeInMB(bufferSizeInMB)
   .build()
 {code}
 Then I'm trying to write a row with {{addRow}} and it blocks forever.
 Everything related to {{CQLSSTableWriter}}, including its creation, is 
 happening in only one thread.
 {noformat}
 SSTableBatchOpen:3 daemon prio=10 tid=0x7f4b399d7000 nid=0x4778 waiting 
 for monitor entry [0x7f4b240a7000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
   - waiting to lock 0xe35fd6d0 (a java.lang.Class for 
 org.apache.cassandra.db.Keyspace)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
   at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
   at 
 org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
   at 
 org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
   at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
   at 
 org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.init(SSTableReader.java:561)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 SSTableBatchOpen:2 daemon prio=10 tid=0x7f4b399e7800 nid=0x4777 waiting 
 for monitor entry [0x7f4b23ca3000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
   - waiting to lock 0xe35fd6d0 (a java.lang.Class for 
 org.apache.cassandra.db.Keyspace)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
   at 
 org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
   at 
 org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
   at 
 org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
   at 
 org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
   at 
 org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.init(SSTableReader.java:561)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 SSTableBatchOpen:1 daemon prio=10 tid=0x7f4b399e7000 nid=0x4776 waiting 
 for monitor entry [0x7f4b2359d000]
java.lang.Thread.State: BLOCKED (on object monitor)
   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
   - 

[jira] [Updated] (CASSANDRA-8516) NEW_NODE topology event emitted instead of MOVED_NODE by moving node

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8516:
---
 Reviewer: Brandon Williams
Reproduced In: 2.1.2, 2.0.11  (was: 2.0.11, 2.1.2)

 NEW_NODE topology event emitted instead of MOVED_NODE by moving node
 

 Key: CASSANDRA-8516
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8516
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.13

 Attachments: cassandra_8516_a.txt, cassandra_8516_b.txt, 
 cassandra_8516_dtest.txt


 As discovered in CASSANDRA-8373, when you move a node in a single-node 
 cluster, a {{NEW_NODE}} event is generated instead of a {{MOVED_NODE}} event.





[jira] [Updated] (CASSANDRA-8657) long-test LongCompactionsTest fails

2015-03-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8657:
--
 Reviewer: Yuki Morishita
Reproduced In: 2.1.2, 2.0.12  (was: 2.0.12, 2.1.2)

[~yukim] to review

 long-test LongCompactionsTest fails
 ---

 Key: CASSANDRA-8657
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8657
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Carl Yeksigian
Priority: Minor
 Fix For: 2.0.13, 2.1.4

 Attachments: 8657-2.0.txt, system.log


 The same error occurs on 3 of the 4 tests in this suite; the failure is the 
 same for the 2.0 and 2.1 branches:
 {noformat}
 [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
 [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 
 27.294 sec
 [junit] 
 [junit] Testcase: 
 testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.init(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.init(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.init(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.init(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest):   
 FAILED
 [junit] 
 /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: 
 /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
  is not correctly marked compacting
 [junit] at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.init(AbstractCompactionTask.java:49)
 [junit] at 
 org.apache.cassandra.db.compaction.CompactionTask.init(CompactionTask.java:47)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at 
 org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49)
 [junit] 
 [junit] 
 [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED
 {noformat}
 A system.log is attached from the above run on 2.0 HEAD.





[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345344#comment-14345344
 ] 

Benedict commented on CASSANDRA-8894:
-

bq. Sharing buffers across files is tricky because of the internals of 
RandomAccessReader. Maybe this should be a separate ticket.

I've filed CASSANDRA-8897 which encompasses this.

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 A large contributor to buffered reads being slower than mmapped is likely 
 that we read a full 64Kb at once, when average record sizes may be as low as 
 140 bytes in our stress tests. The TLB has only 128 entries on a modern core, 
 and each read will touch 32 of these, meaning we will almost never hit the 
 TLB and will incur at least 30 unnecessary misses each time (as well as the 
 other costs of larger-than-necessary accesses). When working with an SSD 
 there is little to no benefit in reading more than 4Kb at once, and in either 
 case reading more data than we need is wasteful. So, I propose selecting a 
 buffer size that is the next larger power of 2 than our average record size 
 (with a minimum of 4Kb), so that we expect to read each record in one 
 operation. I also propose that we create a pool of these buffers up-front, 
 and ensure they are all exactly aligned to a virtual page, so that the 
 source and target operations each touch exactly one virtual page per 4Kb of 
 expected record size.
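
The sizing rule proposed above (next power of two above the average record size, floored at 4Kb, page-aligned) can be sketched as follows. This is an illustrative sketch, not the patch: the names are invented, and `alignedSlice` requires Java 9+.

```java
import java.nio.ByteBuffer;

public class BufferSizing {
    static final int MIN_BUFFER = 4096;  // proposed 4Kb floor
    static final int PAGE = 4096;        // assumed virtual page size

    // Next power of two >= avgRecordSize, with a 4Kb minimum.
    static int bufferSizeFor(int avgRecordSize) {
        int size = Integer.highestOneBit(Math.max(1, avgRecordSize));
        if (size < avgRecordSize)
            size <<= 1;                  // round up to the next power of two
        return Math.max(size, MIN_BUFFER);
    }

    // A direct buffer aligned to a virtual page boundary, so each 4Kb of the
    // buffer touches exactly one page. Over-allocate by one page, then slice.
    static ByteBuffer alignedBuffer(int size) {
        return ByteBuffer.allocateDirect(size + PAGE).alignedSlice(PAGE);
    }

    public static void main(String[] args) {
        System.out.println(bufferSizeFor(140));   // 4096 (the 4Kb minimum applies)
        System.out.println(bufferSizeFor(5000));  // 8192
    }
}
```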





[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345332#comment-14345332
 ] 

Jonathan Ellis commented on CASSANDRA-8878:
---

What would we need to do to get rid of this distinction, then?  It's maybe the 
ugliest wart we have left at the CQL level.

 Counter Tables should be more clearly identified
 

 Key: CASSANDRA-8878
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
 Project: Cassandra
  Issue Type: Improvement
Reporter: Michaël Figuière
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 3.0


 Counter tables are internally considered a particular kind of table, 
 different from regular ones. This counter-specific nature is implicitly 
 defined by the fact that columns within the table have the {{counter}} data 
 type. This nature turns out to be persistent over time; that is, if the user 
 does the following:
 {code}
 CREATE TABLE counttable (key uuid primary key, count counter);
 ALTER TABLE counttable DROP count;
 ALTER TABLE counttable ADD count2 int;
 {code} 
 The following error will be thrown:
 {code}
 Cannot add a non counter column (count2) in a counter column family
 {code}
 This happens even though the table no longer has any counter columns. This 
 implicit, persistent nature can be challenging for users to understand (and 
 impossible to infer in the case above). For this reason a more explicit 
 declaration of counter tables would be appropriate, such as:
 {code}
 CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
 {code}
 Besides that, adding a boolean {{counter_table}} column in the 
 {{system.schema_columnfamilies}} table would allow external tools to easily 
 differentiate a counter table from a regular one.





[jira] [Created] (CASSANDRA-8897) Remove FileCacheService, instead pooling the buffers

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8897:
---

 Summary: Remove FileCacheService, instead pooling the buffers
 Key: CASSANDRA-8897
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8897
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


After CASSANDRA-8893, a RAR (RandomAccessReader) will be a very lightweight 
object and will not need caching, so we can eliminate this cache entirely. 
Instead we should have a pool of buffers that are page-aligned.





[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRTIE TIME of an element in a collection

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345456#comment-14345456
 ] 

Tyler Hobbs commented on CASSANDRA-8877:


If we make this dependent on CASSANDRA-7396, would we only support it for 
single-element lookup, or would it be supported for slice syntax as well?  If 
we support it for slices, we will need to do what you suggest anyway (return a 
list).

 Ability to read the TTL and WRTIE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 





[jira] [Commented] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345460#comment-14345460
 ] 

Branimir Lambov commented on CASSANDRA-8832:


AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/merge block.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?
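
The static {{merge}} of throwables referred to above is presumably along these lines: a hedged sketch (the actual signature in the patch may differ) using the JDK's suppressed-exception mechanism, so rollback can continue past a failing step without losing any error.

```java
public class ThrowableMerge {
    // Keep the first failure as the primary error and attach any later
    // failures as suppressed, so no error information is lost.
    static Throwable merge(Throwable existing, Throwable t) {
        if (existing == null)
            return t;
        existing.addSuppressed(t);
        return existing;
    }

    public static void main(String[] args) {
        Throwable accumulated = null;
        for (String step : new String[]{"close writer", "release readers"}) {
            try {
                // Simulate a rollback step that throws.
                throw new RuntimeException(step + " failed");
            } catch (Throwable t) {
                accumulated = merge(accumulated, t);  // continue despite the failure
            }
        }
        System.out.println(accumulated.getMessage());            // close writer failed
        System.out.println(accumulated.getSuppressed().length);  // 1
    }
}
```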

 SSTableRewriter.abort() should be more robust to failure
 

 Key: CASSANDRA-8832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4


 This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
 during abort, introducing a failure risk. This patch further preempts 
 CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
 any internal assertion checks do not actually worsen the state.





[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) with large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Summary: cqlsh - not able to get row count with select(*) with large table  
(was: cqlsh not able to get row count with select(*) with large table)

 cqlsh - not able to get row count with select(*) with large table
 -

 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu

  I'm getting errors when running a query that looks at a large number of rows.
 {noformat}
 cqlsh:events> select count(*) from catalog;
  count
 ---
  1
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 11000;
  count
 ---
  11000
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 5;
 errors={}, last_host=127.0.0.1
 cqlsh:events> 
 {noformat}
 We don't make queries without a WHERE clause in Chisel itself, but I can't 
 validate that the correct number of rows is being inserted into the table.





[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Summary: cqlsh - not able to get row count with select(*) for large table  
(was: cqlsh - not able to get row count with select(*) with large table)

 cqlsh - not able to get row count with select(*) for large table
 

 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu

  I'm getting errors when running a query that looks at a large number of rows.
 {noformat}
 cqlsh:events> select count(*) from catalog;
  count
 ---
  1
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 11000;
  count
 ---
  11000
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 5;
 errors={}, last_host=127.0.0.1
 cqlsh:events> 
 {noformat}
 We don't make queries without a WHERE clause in Chisel itself, but I can't 
 validate that the correct number of rows is being inserted into the table.





[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345621#comment-14345621
 ] 

Aleksey Yeschenko commented on CASSANDRA-8067:
--

Agreed, but hesitant to do that in 2.1.x. I'll open a separate 3.0 ticket for 
just that.

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.4

 Attachments: 8067.txt


 Hi,
 I have this stack trace in the logs of a Cassandra server (v2.1):
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source) ~[na:1.7.0]
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
 ~[na:1.7.0]
 at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical because this error occurred in the AutoSavingCache. 
 However, line 475 dereferences the CFMetaData, so it may hide a bigger issue...
 {code}
  474 CFMetaData cfm = 
 Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
 out);
 {code}
 Regards,
 Eric
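
The null dereference above happens when the key cache still holds entries for a table whose metadata is gone (e.g. the table was dropped before the cache was saved). A hedged, self-contained sketch of the guard (the schema lookup and names are simulated, not Cassandra's API) would be:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyCacheSave {
    // Simulated schema lookup: table name -> metadata. A dropped table is
    // absent, which is the case where the real code dereferences null.
    static final Map<String, String> SCHEMA = new HashMap<>(Map.of("users", "meta"));

    // Write out only the cache entries whose table still exists.
    static int saveCache(List<String> cachedTables) {
        int written = 0;
        for (String table : cachedTables) {
            String cfm = SCHEMA.get(table);
            if (cfm == null)
                continue;  // table was dropped: skip the entry instead of NPE-ing
            written++;     // stand-in for serializing the entry
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(saveCache(List.of("users", "dropped_table")));  // 1
    }
}
```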





[1/2] cassandra git commit: Document bind markers for TIMESTAMP, TLL, and LIMIT

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 3f6ad3c98 -> f6d82a55f


Document bind markers for TIMESTAMP, TLL, and LIMIT

Patch by Tyler Hobbs for CASSANDRA-8889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757

Branch: refs/heads/cassandra-2.1
Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2
Parents: 72c6ed2
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:01:43 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:01:43 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 6085d00..cf074af 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is 
an optimization that a
 
 In a statement, each time a column value is expected (in the data manipulation 
and query statements), a @variable@ (see above) can be used instead. A 
statement with bind variables must then be _prepared_. Once it has been 
prepared, it can executed by providing concrete values for the bind variables. 
The exact procedure to prepare a statement and execute a prepared statement 
depends on the CQL driver used and is beyond the scope of this document.
 
+In addition to providing column values, bind markers may be used to provide 
values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses.  If anonymous bind markers 
are used, the names for the query parameters will be @[limit]@, @[timestamp]@, 
and @[ttl]@, respectively.
+
 
 h2(#dataDefinition). Data Definition
 



[jira] [Issue Comment Deleted] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8067:

Comment: was deleted

(was: bq. but hesitant to do that in 2.1.x

Agreed)

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.4

 Attachments: 8067.txt


 Hi,
 I have this stack trace in the logs of a Cassandra server (v2.1):
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source) ~[na:1.7.0]
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
 ~[na:1.7.0]
 at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical because this error occurred in the AutoSavingCache. 
 However, line 475 dereferences the CFMetaData, so it may hide a bigger issue...
 {code}
  474 CFMetaData cfm = 
 Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
 out);
 {code}
 Regards,
 Eric





[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRTIE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:46 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
e.g.
{code}
SELECT fields['first_name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['first_name']), WRITETIME(fields['first_name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields = { 'first_name': 'john', 'last_name': 'doe' }

METADATA(fields) = { 'first_name': {'ttl': <ttl seconds>, 'writetime': 
<timestamp> }, 'last_name': {'ttl': <ttl seconds>, 'writetime': <timestamp> } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields = { 'first_name': 'john', 'last_name': 'doe' }

TTL(fields) = { 'first_name': <ttl seconds>, 'last_name': <ttl seconds> }

WRITETIME(fields) = { 'first_name': <writetime millis>, 'last_name': 
<writetime millis> }
{code}



was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
e.g.
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': <ttl seconds>, 'writetime': <timestamp> } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
TTL(fields): { 'name': <ttl seconds> }
WRITETIME(fields): { 'name': <writetime millis> }
{code}


 Ability to read the TTL and WRTIE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 





[1/3] cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2818ca4cf -> fccf0b4f6


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/trunk
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick michael.pen...@datastax.com
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fccf0b4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fccf0b4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fccf0b4f

Branch: refs/heads/trunk
Commit: fccf0b4f66c9ed60fa5bad10174676424a97
Parents: 2818ca4 3f6ad3c
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 12:50:48 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:50:48 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--




[jira] [Resolved] (CASSANDRA-7875) Prepared statements using dropped indexes are not handled correctly

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-7875.

   Resolution: Won't Fix
Fix Version/s: (was: 2.1.4)
   2.0.13
 Reviewer: Tyler Hobbs

+1 on leaving 2.0 alone.  I'm resolving this as Won't Fix, and we'll get the 
dtest merged.  Thanks Stefania!

 Prepared statements using dropped indexes are not handled correctly
 ---

 Key: CASSANDRA-7875
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7875
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.13

 Attachments: prepared_statements_test.py, repro.py


 When select statements are prepared, we verify that the column restrictions 
 use indexes (where necessary).  However, we don't perform a similar check 
 when the statement is executed, so it fails somewhere further down the line.  
 In this case, it hits an assertion:
 {noformat}
 java.lang.AssertionError: Sequential scan with filters is not supported (if 
 you just created an index, you need to wait for the creation to be propagated 
 to all nodes before querying it)
   at 
 org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:259)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1759)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
   at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
   at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1394)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}
 During execution, we should check that the indexes still exist and provide a 
 better error if they do not.
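
As a purely hypothetical sketch (invented names, not Cassandra's actual code), an execution-time check like the one the ticket asks for could track which indexes a prepared statement depends on and fail fast with a clear error:

```python
# Hypothetical sketch: validate at execution time that every index a prepared
# statement relies on still exists, and raise an actionable error instead of
# failing deep in the read path with an AssertionError.

class IndexDroppedError(Exception):
    pass

class PreparedStatement:
    def __init__(self, query, required_indexes):
        self.query = query
        self.required_indexes = set(required_indexes)

class Schema:
    def __init__(self):
        self.indexes = set()

def execute(statement, schema):
    missing = statement.required_indexes - schema.indexes
    if missing:
        # Fail fast with a clear message at execution time.
        raise IndexDroppedError(
            "query requires dropped index(es): %s" % ", ".join(sorted(missing)))
    return "rows"

schema = Schema()
schema.indexes.add("users_age_idx")
stmt = PreparedStatement("SELECT * FROM users WHERE age = ?", ["users_age_idx"])
assert execute(stmt, schema) == "rows"

schema.indexes.discard("users_age_idx")   # simulate DROP INDEX
try:
    execute(stmt, schema)
except IndexDroppedError as e:
    print("rejected:", e)
```

The point of the sketch is only that the check happens per-execution against current schema, not once at prepare time.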



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8850:
---
Attachment: 8850-v2.txt

v2 attached where {{WITH}} and {{AND}} are mandatory in {{CREATE|ALTER ROLE}}. 

Update to auth_roles_dtest 
[here|https://github.com/riptano/cassandra-dtest/pull/178] 

 clean up options syntax for create/alter role 
 --

 Key: CASSANDRA-8850
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0

 Attachments: 8850-v2.txt, 8850.txt


 {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
 in a way more consistent with other statements.
 e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}





[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f6ad3c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f6ad3c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f6ad3c9

Branch: refs/heads/cassandra-2.1
Commit: 3f6ad3c9886c01c2cdaed6cad10c6f0672004473
Parents: 2f7077c 72c6ed2
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 12:50:20 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:50:20 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v1.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v2.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 1d35d50,000..9894d76
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,1027 -1,0 +1,1031 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Data Type Serialization Formats
 +  7. User Defined Type Serialization
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +  0         8        16        24        32         40
 +  +---------+---------+---------+---------+---------+
 +  | version |  flags  |      stream       | opcode  |
 +  +---------+---------+---------+---------+---------+
 +  |                length                 |
 +  +---------+---------+---------+---------+
 +  |                                       |
 +  .            ...  body ...              .
 +  .                                       .
 +  .                                       .
 +  +----------------------------------------
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcode is defined Section 2.3 and the
 +  details of each corresponding message is described Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. Requests
 +  are those frame sent by the clients to the server, response are the ones sent
 +  by the server. Note however that the protocol supports server pushes (events)
 +  so responses does not necessarily come right after a client request.
 +
 +  Note to client implementors: clients library should always assume that the
 +  body of a given frame may contain more data than what is described in this
 +  document. It will however always be safe to ignore the remaining of the frame
 +  body in such cases. The reason is that this may allow to sometimes extend the
 +  protocol with optional features without needing to change the protocol
 +  version.
 +
 +
 +2. Frame header
 +
 +2.1. version
 +
 +  The version is a single byte that indicate both the direction of the message
 +  (request or response) and the version of the protocol in use. The up-most bit
 +  of version is used to define the direction of the message: 0 indicates a
 +  request, 1 indicates a responses. This can be useful for protocol analyzers to
 +  distinguish the nature of the packet from the direction which it is moving.
 +  The rest of that byte is the protocol version (3 for the protocol defined in
 +  this 
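
The 9-byte big-endian frame header described above can be parsed mechanically; the following is an illustrative sketch (not a real driver), using the layout from Section 2: version (1 byte), flags (1 byte), stream (2 bytes, signed), opcode (1 byte), body length (4 bytes):

```python
import struct

# Parse the v3 frame header. Big-endian, per the spec: ">BBhBI" is
# 1 + 1 + 2 + 1 + 4 = 9 bytes. The top bit of the version byte encodes
# direction (0 = request, 1 = response); the low 7 bits are the version.

def parse_frame_header(data):
    version, flags, stream, opcode, length = struct.unpack(">BBhBI", data[:9])
    return {
        "direction": "response" if version & 0x80 else "request",
        "protocol_version": version & 0x7F,
        "flags": flags,
        "stream": stream,
        "opcode": opcode,
        "body_length": length,
    }

# A v3 response (0x83) with stream id 1, opcode 0x02 (READY), empty body:
hdr = parse_frame_header(b"\x83\x00\x00\x01\x02\x00\x00\x00\x00")
assert hdr["direction"] == "response"
assert hdr["protocol_version"] == 3
assert hdr["stream"] == 1
assert hdr["body_length"] == 0
```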

cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 33a3a09cb -> 72c6ed288


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/cassandra-2.0
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick michael.pen...@datastax.com
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more

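The duplicate-event guidance added by this patch ("a client library should ignore the same event if it has already been notified of a change") can be sketched as follows; the names are illustrative, not a real driver API. A client compares each event against its current view of the cluster and ignores events that would change nothing:

```python
# Illustrative client-side handling of possibly-duplicated topology events:
# apply an event only if it changes the locally known cluster state.

class ClusterState:
    def __init__(self):
        self.nodes = set()

    def on_topology_change(self, change, address):
        if change == "NEW_NODE":
            if address in self.nodes:
                return False          # already known: duplicate, ignore
            self.nodes.add(address)
        elif change == "REMOVED_NODE":
            if address not in self.nodes:
                return False          # already removed: duplicate, ignore
            self.nodes.discard(address)
        # MOVED_NODE would refresh token metadata; refreshing twice is harmless.
        return True

state = ClusterState()
assert state.on_topology_change("NEW_NODE", "10.0.0.1") is True
assert state.on_topology_change("NEW_NODE", "10.0.0.1") is False   # duplicate
assert state.on_topology_change("REMOVED_NODE", "10.0.0.1") is True
assert state.nodes == set()
```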


[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345630#comment-14345630
 ] 

Benedict commented on CASSANDRA-8067:
-

bq. but hesitant to do that in 2.1.x

Agreed

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.4

 Attachments: 8067.txt


 Hi,
 I have this stack trace in the logs of Cassandra server (v2.1)
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
 Source) ~[na:1.7.0]
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
 ~[na:1.7.0]
 at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical because this error occurred in the AutoSavingCache. 
 However the line 475 is about the CFMetaData so it may hide a bigger issue...
 {code}
  474 CFMetaData cfm = 
 Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
 out);
 {code}
 Regards,
 Eric
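
The failure mode described above (the schema lookup on line 474 returning null for a since-dropped table) can be illustrated with a small Python analogue; this is only a sketch of the suggested guard, not the actual Java patch:

```python
# Illustrative analogue: when saving key-cache entries, the table a cached key
# belongs to may have been dropped, so the schema lookup can return None and
# must be checked before dereferencing.

schema = {("ks1", "users"): "users-metadata"}

def save_key_cache(entries):
    saved = []
    for (ksname, cfname), entry in entries:
        cfm = schema.get((ksname, cfname))
        if cfm is None:
            # Table was dropped since the entry was cached; skip it
            # instead of crashing with a NullPointerException.
            continue
        saved.append((cfm, entry))
    return saved

entries = [(("ks1", "users"), "entry-a"), (("ks1", "dropped_table"), "entry-b")]
assert save_key_cache(entries) == [("users-metadata", "entry-a")]
```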



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




cassandra git commit: Document bind markers for TIMESTAMP, TTL, and LIMIT

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 72c6ed288 - 6ee0c757c


Document bind markers for TIMESTAMP, TTL, and LIMIT

Patch by Tyler Hobbs for CASSANDRA-8889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757

Branch: refs/heads/cassandra-2.0
Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2
Parents: 72c6ed2
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:01:43 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:01:43 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 6085d00..cf074af 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is an optimization that a
 
 In a statement, each time a column value is expected (in the data manipulation and query statements), a @variable@ (see above) can be used instead. A statement with bind variables must then be _prepared_. Once it has been prepared, it can executed by providing concrete values for the bind variables. The exact procedure to prepare a statement and execute a prepared statement depends on the CQL driver used and is beyond the scope of this document.
 
+In addition to providing column values, bind markers may be used to provide values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses.  If anonymous bind markers are used, the names for the query parameters will be @[limit]@, @[timestamp]@, and @[ttl]@, respectively.
+
 
 h2(#dataDefinition). Data Definition
 


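The naming rule documented in the added paragraph — anonymous markers for LIMIT, TIMESTAMP, and TTL are exposed under the parameter names `[limit]`, `[timestamp]`, and `[ttl]` — can be illustrated with a toy scanner. This is only an illustration of the naming convention, not a real CQL parser:

```python
import re

# Toy scanner: name each anonymous '?' marker after the LIMIT/TIMESTAMP/TTL
# keyword that precedes it, mirroring how the documented parameter names
# [limit], [timestamp], and [ttl] are derived.

def anonymous_marker_names(cql):
    names = []
    for m in re.finditer(r"\b(LIMIT|TIMESTAMP|TTL)\s+\?", cql, re.IGNORECASE):
        names.append("[%s]" % m.group(1).lower())
    return names

q = "UPDATE t USING TTL ? AND TIMESTAMP ? SET v = 1 WHERE k = 0"
assert anonymous_marker_names(q) == ["[ttl]", "[timestamp]"]
assert anonymous_marker_names("SELECT * FROM t LIMIT ?") == ["[limit]"]
```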

svn commit: r1663774 - in /cassandra/site/publish/doc/cql3: CQL-2.0.html CQL-2.1.html

2015-03-03 Thread tylerhobbs
Author: tylerhobbs
Date: Tue Mar  3 20:05:46 2015
New Revision: 1663774

URL: http://svn.apache.org/r1663774
Log:
Update CQL3 docs for CASSANDRA-8889

Modified:
cassandra/site/publish/doc/cql3/CQL-2.0.html
cassandra/site/publish/doc/cql3/CQL-2.1.html

Modified: cassandra/site/publish/doc/cql3/CQL-2.0.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/doc/cql3/CQL-2.0.html?rev=1663774r1=1663773r2=1663774view=diff
==
--- cassandra/site/publish/doc/cql3/CQL-2.0.html (original)
+++ cassandra/site/publish/doc/cql3/CQL-2.0.html Tue Mar  3 20:05:46 2015
@@ -38,7 +38,7 @@
 
  (HTML-escaped hunk, garbled in extraction. Per the commit log, it applies the
  CASSANDRA-8889 change to the rendered docs: the Prepared Statement section of
  CQL-2.0.html and CQL-2.1.html gains the note that bind markers may also
  supply values for LIMIT, TIMESTAMP, and TTL, exposed as [limit], [timestamp],
  and [ttl] when anonymous markers are used.)

[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:40 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': ttl seconds, 'writetime': timestamp } }
{code}



was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, metadata(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
metadata(fields): { 'name': {'ttl': ttl seconds, 'writetime': timestamp } }
{code}


 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 





[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:42 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': ttl seconds, 'writetime': timestamp } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
TTL(fields): { 'name': ttl seconds }
WRITETIME(fields): { 'name': writetime millis }
{code}


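The proposal above — per-element TTL and writetime exposed as maps keyed by element — can be modeled with a small toy structure. This is purely illustrative (invented names, not Cassandra's storage engine), showing what `TTL(fields)` and `WRITETIME(fields)` could return:

```python
# Toy model of per-element collection metadata: each map cell stores
# (value, ttl_seconds, writetime_micros), so TTL(fields) and
# WRITETIME(fields) can be derived as maps keyed by element.

class MapColumn:
    def __init__(self):
        self._cells = {}  # element key -> (value, ttl_seconds, writetime_micros)

    def put(self, key, value, ttl=None, writetime=0):
        self._cells[key] = (value, ttl, writetime)

    def values(self):
        return {k: v for k, (v, _, _) in self._cells.items()}

    def ttls(self):
        return {k: t for k, (_, t, _) in self._cells.items()}

    def writetimes(self):
        return {k: w for k, (_, _, w) in self._cells.items()}

fields = MapColumn()
fields.put("name", "john", ttl=3600, writetime=1425408000000000)
assert fields.values() == {"name": "john"}          # SELECT fields
assert fields.ttls() == {"name": 3600}              # TTL(fields)
assert fields.writetimes() == {"name": 1425408000000000}  # WRITETIME(fields)
```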

was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': ttl seconds, 'writetime': timestamp } }
{code}


 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 





[jira] [Commented] (CASSANDRA-8861) HyperLogLog Collection Type

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345487#comment-14345487
 ] 

Drew Kutcharian commented on CASSANDRA-8861:


Thanks [~iamaleksey]

 HyperLogLog Collection Type
 ---

 Key: CASSANDRA-8861
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8861
 Project: Cassandra
  Issue Type: Wish
Reporter: Drew Kutcharian
Assignee: Aleksey Yeschenko
 Fix For: 3.1


 Considering that HyperLogLog and its variants have become pretty popular in 
 analytics space and Cassandra has read-before-write collections (Lists), I 
 think it would not be too painful to add support for HyperLogLog collection 
 type. They would act similar to CQL 3 Sets, meaning you would be able to 
 set the value and add an element, but you won't be able to remove an 
 element. Also, when getting the value of a HyperLogLog collection column, 
 you'd get the cardinality.
 There are a couple of good attributes with HyperLogLog which fit Cassandra 
 pretty well.
 - Adding an element is idempotent (adding an existing element doesn't change 
 the HLL)
 - HLL can be thought of as a CRDT, since we can safely merge them. Which 
 means we can merge two HLLs during read-repair. But if that's too much work, 
 I guess we can even live with LWW since these counts are estimates after 
 all.
 There is already a proof of concept at:
 http://vilkeliskis.com/blog/2013/12/28/hacking_cassandra.html
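
The two properties the ticket cites — idempotent adds and CRDT-style merging — can be demonstrated with a minimal HyperLogLog. This is a toy sketch (raw estimator, no small/large-range corrections), not a production implementation:

```python
import hashlib

# Minimal HyperLogLog: registers hold the max "rank" (position of the lowest
# set bit) seen for each bucket. Adding is idempotent (max is unchanged by a
# repeat), and two sketches merge by element-wise max, which is why merging
# during read repair is safe.

class HLL:
    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
        idx = h & (self.m - 1)        # low p bits select a register
        w = h >> self.p
        rank = 1
        while w & 1 == 0 and rank < 64:
            rank += 1
            w >>= 1
        self.registers[idx] = max(self.registers[idx], rank)

    def merge(self, other):
        # CRDT-style join: element-wise max is commutative, associative,
        # and idempotent.
        self.registers = [max(a, b)
                          for a, b in zip(self.registers, other.registers)]

    def cardinality(self):
        # Raw HLL estimate, without bias corrections.
        alpha = 0.7213 / (1 + 1.079 / self.m)
        z = sum(2.0 ** -r for r in self.registers)
        return alpha * self.m * self.m / z

a = HLL()
for i in range(1000):
    a.add("user-%d" % i)
before = a.registers[:]
a.add("user-0")                # idempotent: re-adding changes nothing
assert a.registers == before

b = HLL()
for i in range(500, 1500):
    b.add("user-%d" % i)
a.merge(b)                     # union of the two sets: ~1500 distinct users
assert 1000 < a.cardinality() < 2600
```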





[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345569#comment-14345569
 ] 

Robert Stupp commented on CASSANDRA-8877:
-

It's related to CASSANDRA-7396 - i.e. it uses basically the same functionality 
(selecting individual collection elements). I'd prefer to make this ticket 
depend on CASSANDRA-7396.

 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8504) Stack trace is erroneously logged twice

2015-03-03 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-8504.
-
Resolution: Not a Problem

 Stack trace is erroneously logged twice
 ---

 Key: CASSANDRA-8504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8504
 Project: Cassandra
  Issue Type: Bug
 Environment: OSX and Ubuntu
Reporter: Philip Thompson
Assignee: Stefania
Priority: Minor
 Fix For: 3.0

 Attachments: node4.log


 The dtest 
 {{replace_address_test.TestReplaceAddress.replace_active_node_test}} is 
 failing on 3.0. The following can be seen in the log:{code}ERROR [main] 
 2014-12-17 15:12:33,871 CassandraDaemon.java:496 - Exception encountered 
 during startup
 java.lang.UnsupportedOperationException: Cannot replace a live node...
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
 [main/:na]
 ERROR [main] 2014-12-17 15:12:33,872 CassandraDaemon.java:584 - Exception 
 encountered during startup
 java.lang.UnsupportedOperationException: Cannot replace a live node...
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
  ~[main/:na]
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
  ~[main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
 [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
  [main/:na]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
 [main/:na]
 INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:33,873 Gossiper.java:1349 
 - Announcing shutdown
 INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:35,876 
 MessagingService.java:708 - Waiting for messaging service to quiesce{code}
 The test starts up a three node cluster, loads some data, then attempts to 
 start a fourth node with replace_address against the IP of a live node. This 
 is expected to fail, with one ERROR message in the log. In 3.0, we are seeing 
 two messages. 2.1-HEAD is working as expected. Attached is the full log of 
 the fourth node.
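The duplicated ERROR is the classic log-and-rethrow pattern: the exception is logged where it is first caught, re-raised, and then logged again by an outer handler. A minimal sketch of the pattern (illustrative Python, not Cassandra's actual code; the function names only mirror the stack trace above):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects log records so we can count how often the error is logged."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("startup-demo")
logger.addHandler(ListHandler())
logger.setLevel(logging.ERROR)
logger.propagate = False

def join_token_ring():
    # stand-in for StorageService.joinTokenRing rejecting the replacement
    raise RuntimeError("Cannot replace a live node...")

def activate():
    try:
        join_token_ring()
    except Exception:
        # first log site (analogous to the first ERROR in the attached log)
        logger.exception("Exception encountered during startup")
        raise  # re-raise so the caller sees it too

def main():
    try:
        activate()
    except Exception:
        # second log site: the same exception gets a second full stack trace
        logger.exception("Exception encountered during startup")

main()
print(len(records))  # prints 2
```

Removing either the inner `logger.exception` (and letting the caller log) or the outer one would produce the single ERROR the test expects.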



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Description: 
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select * query to get row count.

  was:
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select(*) query to get row count.


 cqlsh - not able to get row count with select(*) for large table
 

 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu

  I'm getting errors when running a query that looks at a large number of rows.
 {noformat}
 cqlsh:events> select count(*) from catalog;
  count
 ---
  1
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 11000;
  count
 ---
  11000
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 5;
 errors={}, last_host=127.0.0.1
 cqlsh:events> 
 {noformat}
 We are not able to make the select * query to get row count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Description: 
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select(*) query to get row count.

  was:
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We don't make queries w/o a WHERE clause in Chisel itself but I can't validate 
the correct number of rows are being inserted into the table.


 cqlsh - not able to get row count with select(*) for large table
 

 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu

  I'm getting errors when running a query that looks at a large number of rows.
 {noformat}
 cqlsh:events> select count(*) from catalog;
  count
 ---
  1
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 11000;
  count
 ---
  11000
 (1 rows)
 cqlsh:events> select count(*) from catalog limit 5;
 errors={}, last_host=127.0.0.1
 cqlsh:events> 
 {noformat}
 We are not able to make the select(*) query to get row count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8899) cqlsh not able to get row count with select(*) with large table

2015-03-03 Thread Jeff Liu (JIRA)
Jeff Liu created CASSANDRA-8899:
---

 Summary: cqlsh not able to get row count with select(*) with large 
table
 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu


 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We don't make queries w/o a WHERE clause in Chisel itself but I can't validate 
the correct number of rows are being inserted into the table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345461#comment-14345461
 ] 

Jeff Liu commented on CASSANDRA-8870:
-

Another question I have been curious about is why we would see those tombstone 
errors. In our application we are doing inserts and updates only. Will update 
operations generate tombstones?

 Tombstone overwhelming issue aborts client queries
 --

 Key: CASSANDRA-8870
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.1.2 ubuntu 12.04
Reporter: Jeff Liu

 We are seeing client query timeouts for clients trying to query data from the 
 Cassandra cluster. 
 Nodetool status shows that all nodes are still up regardless.
 Logs from client side:
 {noformat}
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
 tried for query failed (tried: 
 cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 
 (com.datastax.driver.core.TransportException: 
 [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection 
 has been closed))
 at 
 com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) 
 ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at 
 com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) 
 ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_55]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
 Logs from cassandra/system.log
 {noformat}
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - 
 Scanned over 10 tombstones in system.hints; query aborted (see 
 tombstone_failure_threshold)
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - 
 Exception in thread Thread[HintedHandoff:2,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_55]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
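The threshold named in the log is configurable. For reference, these are the relevant cassandra.yaml knobs with their 2.1 defaults (shown as a sketch, check your own config for the actual values in effect):

```yaml
# cassandra.yaml (2.1 defaults)
tombstone_warn_threshold: 1000       # log a warning once a slice scans this many tombstones
tombstone_failure_threshold: 100000  # abort the query (TombstoneOverwhelmingException) past this many
```

Raising the failure threshold only masks the symptom; the accumulation of tombstones itself still needs to be diagnosed.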



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8877:
---
Priority: Minor  (was: Major)

 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345472#comment-14345472
 ] 

Philip Thompson commented on CASSANDRA-8870:


[~shawn.kumar] is handling reproduction.

 Tombstone overwhelming issue aborts client queries
 --

 Key: CASSANDRA-8870
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.1.2 ubuntu 12.04
Reporter: Jeff Liu

 We are seeing client query timeouts for clients trying to query data from the 
 Cassandra cluster. 
 Nodetool status shows that all nodes are still up regardless.
 Logs from client side:
 {noformat}
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
 tried for query failed (tried: 
 cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 
 (com.datastax.driver.core.TransportException: 
 [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection 
 has been closed))
 at 
 com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) 
 ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at 
 com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) 
 ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_55]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
 Logs from cassandra/system.log
 {noformat}
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - 
 Scanned over 10 tombstones in system.hints; query aborted (see 
 tombstone_failure_threshold)
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - 
 Exception in thread Thread[HintedHandoff:2,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_55]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian commented on CASSANDRA-8877:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, metadata(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
metadata(fields): { 'name': {'ttl': <ttl seconds>, 'writetime': <timestamp> } }
{code}


 Ability to read the TTL and WRITE TIME of an element in a collection
 

 Key: CASSANDRA-8877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Drew Kutcharian
Assignee: Benjamin Lerer
Priority: Minor
 Fix For: 3.0


 Currently it's possible to set the TTL and WRITE TIME of an element in a 
 collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f6ad3c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f6ad3c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f6ad3c9

Branch: refs/heads/trunk
Commit: 3f6ad3c9886c01c2cdaed6cad10c6f0672004473
Parents: 2f7077c 72c6ed2
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 12:50:20 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:50:20 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v1.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v2.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 1d35d50,000..9894d76
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,1027 -1,0 +1,1031 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Data Type Serialization Formats
 +  7. User Defined Type Serialization
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +    0         8        16        24        32         40
 +    +---------+---------+---------+---------+---------+
 +    | version |  flags  |      stream       | opcode  |
 +    +---------+---------+-------------------+---------+
 +    |                length                 |
 +    +---------+---------+---------+---------+
 +    |                                       |
 +    .            ...  body ...              .
 +    .                                       .
 +    .                                       .
 +    +----------------------------------------
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcode is defined Section 2.3 and the
 +  details of each corresponding message is described Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. Requests
 +  are those frame sent by the clients to the server, response are the ones sent
 +  by the server. Note however that the protocol supports server pushes (events)
 +  so responses does not necessarily come right after a client request.
 +
 +  Note to client implementors: clients library should always assume that the
 +  body of a given frame may contain more data than what is described in this
 +  document. It will however always be safe to ignore the remaining of the frame
 +  body in such cases. The reason is that this may allow to sometimes extend the
 +  protocol with optional features without needing to change the protocol
 +  version.
 +
 +
 +
 +2. Frame header
 +
 +2.1. version
 +
 +  The version is a single byte that indicate both the direction of the message
 +  (request or response) and the version of the protocol in use. The up-most bit
 +  of version is used to define the direction of the message: 0 indicates a
 +  request, 1 indicates a responses. This can be useful for protocol analyzers to
 +  distinguish the nature of the packet from the direction which it is moving.
 +  The rest of that byte is the protocol version (3 for the protocol defined in
 +  this document). In 
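To make the header layout concrete, here is a small sketch that packs and unpacks the 9-byte big-endian v3 header described in the quoted spec (the opcode and stream values below are arbitrary illustrative choices, not taken from the spec's opcode table):

```python
import struct

# CQL native protocol v3 frame header:
# version(1) | flags(1) | stream(2, signed) | opcode(1) | length(4)
# = 9 bytes total, big-endian (network byte order) as the spec requires.
HEADER = struct.Struct(">BBhBI")

def build_header(is_response, stream_id, opcode, body_length, flags=0):
    # the top bit of the version byte encodes direction; the low bits are 3
    version = 0x03 | (0x80 if is_response else 0x00)
    return HEADER.pack(version, flags, stream_id, opcode, body_length)

def parse_header(data):
    version, flags, stream_id, opcode, length = HEADER.unpack(data[:9])
    return {
        "direction": "response" if version & 0x80 else "request",
        "version": version & 0x7F,
        "flags": flags,
        "stream": stream_id,
        "opcode": opcode,
        "length": length,
    }
```

The `length` field describes only the body, so a reader consumes 9 bytes, parses them, then reads exactly `length` more bytes for the frame body.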

[jira] [Updated] (CASSANDRA-8883) Percentile computation should use ceil not floor in EstimatedHistogram

2015-03-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8883:
--
Attachment: 8883-2.1.txt

Since numpy has access to the original values, it provides interpolation 
between the points if the percentile isn't exactly on a boundary:
{code}
np.percentile(np.array([1, 2, 3, 4, 5, 6]), 50)
== 3.5
{code}
Since we are using the histogram, we don't really know where the percentile 
lands, so we just need to return a value inside the range. Currently we return 
the end of the range before the one where the percentile occurs.

I've changed EstimatedHistogram to use ceil instead of floor, and updated the 
tests accordingly.
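A runnable sketch of the floor-versus-ceil difference on the ticket's five-sample example. This is a simplified model of bucketed percentile lookup, not the actual EstimatedHistogram code:

```python
import math

def histogram_percentile(offsets, counts, percentile, use_ceil=True):
    """Return the bucket offset covering the given percentile.

    offsets[i] is the value representing bucket i; counts[i] is how many
    samples fell into it (a simplified stand-in for EstimatedHistogram).
    """
    total = sum(counts)
    rank = percentile / 100.0 * total   # e.g. 50% of 5 samples -> rank 2.5
    # floor(2.5) = 2 stops at the bucket holding the 2nd sample (underestimate);
    # ceil(2.5) = 3 stops at the bucket holding the 3rd sample instead.
    pcount = math.ceil(rank) if use_ceil else math.floor(rank)
    seen = 0
    for offset, count in zip(offsets, counts):
        seen += count
        if seen >= pcount:
            return offset
    return offsets[-1]
```

With offsets `[1, 2, 3, 4, 5]` and one sample per bucket, the floor variant reports the 50th percentile as 2 while the ceil variant reports 3, matching the before/after behavior described above.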

 Percentile computation should use ceil not floor in EstimatedHistogram
 --

 Key: CASSANDRA-8883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8883
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Chris Lohfink
Assignee: Carl Yeksigian
Priority: Minor
 Fix For: 2.1.4

 Attachments: 8883-2.1.txt


 When computing the pcount Cassandra uses floor, and the comparison with 
 elements is >=, so given a simple example of there being a total of five 
 elements
 {code}
 // data
 [1, 1, 1, 1, 1]
 // offsets
 [1, 2, 3, 4, 5]
 {code}
 Cassandra would report the 50th percentile as 2, while 3 is the more 
 expected value. As a comparison, using numpy:
 {code}
 import numpy as np
 np.percentile(np.array([1, 2, 3, 4, 5]), 50)
 == 3.0
 {code}
 The percentiles were added in CASSANDRA-4022 but are now used a lot in the 
 metrics Cassandra reports. I think it should err on the side of 
 overestimating instead of underestimating. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345599#comment-14345599
 ] 

Tyler Hobbs commented on CASSANDRA-8899:


This was resolved for 3.0 by CASSANDRA-4914.  I don't believe it would be too 
difficult to make 2.0 and 2.1 not use the limit for the max {{count()}} result 
(without backporting the rest of the aggregate function changes).

[~blerer] do you want to take a look and see how realistic that is?

 cqlsh - not able to get row count with select(*) for large table
 

 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu

  I'm getting errors when running a query that looks at a large number of rows.
 {noformat}
 cqlsh:events select count(*) from catalog;
  count
 ---
  1
 (1 rows)
 cqlsh:events select count(*) from catalog limit 11000;
  count
 ---
  11000
 (1 rows)
 cqlsh:events select count(*) from catalog limit 5;
 errors={}, last_host=127.0.0.1
 cqlsh:events 
 {noformat}
 We are not able to make the select * query to get row count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b5331353
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b5331353
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b5331353

Branch: refs/heads/trunk
Commit: b533135333e14ed4b482dbdc0febae7f2ee5be6f
Parents: fccf0b4 f6d82a5
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:03:07 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:03:07 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5331353/doc/cql3/CQL.textile
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6d82a55
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6d82a55
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6d82a55

Branch: refs/heads/trunk
Commit: f6d82a55fbf938286245c8ed510094715d0c4dc1
Parents: 3f6ad3c 6ee0c75
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:02:47 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:02:47 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6d82a55/doc/cql3/CQL.textile
--



[jira] [Resolved] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8889.

Resolution: Fixed

I've updated the docs as commit 6ee0c757c3 and pushed the updated versions to 
the website.  Thanks!

 CQL spec is missing doc for support of bind variables for LIMIT, TTL, and 
 TIMESTAMP
 ---

 Key: CASSANDRA-8889
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8889
 Project: Cassandra
  Issue Type: Bug
   Components: Documentation & website
Reporter: Jack Krupansky
Assignee: Tyler Hobbs
Priority: Minor

 CASSANDRA-4450 added the ability to specify a bind variable for the integer 
 value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been 
 updated to reflect this enhancement.
 Also, the special predefined bind variable names are not documented in the 
 CQL spec: [limit], [ttl], and [timestamp].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8898) sstableloader utility should allow loading of data from mounted filesystem

2015-03-03 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-8898:
---
Description: 
When trying to load data from a mounted filesystem onto a new cluster, the 
following exceptions are observed intermittently, and at some point the 
sstableloader process hangs without completing the loading process.

Please note that in my case the scenario was loading the existing sstables from 
an existing cluster to a brand new cluster.

Finally, it was found that sstableloader makes some hard assumptions about 
responses from the filesystem, and these do not hold for a mounted filesystem.

The work-around was to copy each existing node's sstable data files locally and 
then point sstableloader at that local filesystem to load the data onto the new 
cluster.

When restoring data from backups with sstableloader during disaster recovery, 
this copying to a local filesystem before loading would take a long time.

It would be a good enhancement for sstableloader to support mounted 
filesystems, as copying data locally and then loading is time consuming.

Below is the exception seen when using a mounted filesystem.
{code}
java.lang.AssertionError: Reference counter -1 for /opt/tmp/casapp-c1-c00053-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5449-Data.db
    at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
    at org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
    at org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542)
    at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
    at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
    at java.lang.Thread.run(Thread.java:744)
WARN 21:07:16,853 [Stream #3e5a5ba0-bdef-11e4-a975-5777dbff0945] Stream failed

    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
    at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1406)
    at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
    at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
    at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
    at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.FileNotFoundException: /opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5997-Data.db (No such file or directory)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
    at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
    ... 8 more
Exception in thread "STREAM-OUT-/96.115.88.196" java.lang.NullPointerException
    at org.apache.cassandra.streaming.ConnectionHandler$MessageHandler.signalCloseDone(ConnectionHandler.java:205)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
    at java.lang.Thread.run(Thread.java:744)
ERROR 20:49:35,646 [Stream #d9fce650-bdf3-11e4-b6c0-252cb9b3e9f3] Streaming error occurred
java.lang.AssertionError: Reference counter -3 for /opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-4897-Data.db
    at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
    at org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
    at org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542)
    at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
    at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
    at java.lang.Thread.run(Thread.java:744)
Exception in thread "STREAM-IN-/96.115.88.196"
{code}

[jira] [Commented] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345431#comment-14345431
 ] 

Tyler Hobbs commented on CASSANDRA-8860:


+1, patch looks good

 Too many java.util.HashMap$Entry objects in heap
 

 Key: CASSANDRA-8860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.3, jdk 1.7u51
Reporter: Phil Yang
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
 jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt


 While upgrading my cluster to 2.1.3, I found that some nodes (not all) may have 
 GC issues after restarting successfully. The old gen grows very fast, and most 
 of that space cannot be recycled immediately after the node's status returns to 
 normal. The QPS of both reads and writes is very low, and there is no heavy 
 compaction.
 The jmap result seems strange: there are too many java.util.HashMap$Entry 
 objects in the heap, whereas in my experience [B (byte arrays) is usually the 
 top entry.
 If I downgrade to 2.1.1, this issue does not appear.
 I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
 someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7816:
---
  Component/s: (was: Documentation & website)
   API
 Priority: Minor  (was: Trivial)
Fix Version/s: 2.1.4
   2.0.13
   Issue Type: Bug  (was: Improvement)
  Summary: Duplicate DOWN/UP Events Pushed with Native Protocol  (was: 
Updated the 4.2.6. EVENT section in the binary protocol specification)

I went ahead and committed the patch to update the native protocol specs as 
72c6ed288, since there was no debate there.

I've updated the ticket title and fields to reflect the current issue of 
duplicate notifications.

 Duplicate DOWN/UP Events Pushed with Native Protocol
 

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Michael Penick
Assignee: Stefania
Priority: Minor
 Fix For: 2.0.13, 2.1.4

 Attachments: tcpdump_repeating_status_change.txt, trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8879) Alter table on compact storage broken

2015-03-03 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345640#comment-14345640
 ] 

Nick Bailey commented on CASSANDRA-8879:


FWIW, that is essentially the case I was hitting. This was a thrift table that 
I know contains only ascii data, and rather than deal with hex/bytes I wanted 
to just update the schema. I can see the argument for not allowing this, since 
you could be shooting yourself in the foot if the actual data isn't the right 
type. On the other hand, the user-friendliness of having to alter my schema 
with thrift (in not completely obvious ways) leaves something to be desired as 
well. Either way, that's probably separate from the actual bug in this ticket 
(since it's broken going bytes->ascii or ascii->bytes).

 Alter table on compact storage broken
 -

 Key: CASSANDRA-8879
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8879
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Tyler Hobbs
 Fix For: 2.0.13

 Attachments: 8879-2.0.txt


 In 2.0 HEAD, alter table on compact storage tables seems to be broken. With 
 the following table definition, altering the column breaks cqlsh and 
 generates a stack trace in the log.
 {noformat}
 CREATE TABLE settings (
   key blob,
   column1 blob,
   value blob,
   PRIMARY KEY ((key), column1)
 ) WITH COMPACT STORAGE
 {noformat}
 {noformat}
 cqlsh:OpsCenter> alter table settings ALTER column1 TYPE ascii ;
 TSocket read 0 bytes
 cqlsh:OpsCenter> DESC TABLE settings;
 {noformat}
 {noformat}
 ERROR [Thrift:7] 2015-02-26 17:20:24,640 CassandraDaemon.java (line 199) Exception in thread Thread[Thrift:7,5,main]
 java.lang.AssertionError
 ...at org.apache.cassandra.cql3.statements.AlterTableStatement.announceMigration(AlterTableStatement.java:198)
 ...at org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:79)
 ...at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 ...at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175)
 ...at org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1958)
 ...at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
 ...at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
 ...at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 ...at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 ...at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
 ...at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 ...at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 ...at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8870:
---
Tester: Shawn Kumar

 Tombstone overwhelming issue aborts client queries
 --

 Key: CASSANDRA-8870
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.1.2 ubuntu 12.04
Reporter: Jeff Liu

 We are getting query timeout issues on clients that are trying to query data 
 from the Cassandra cluster. 
 Nodetool status nevertheless shows that all nodes are still up.
 Logs from client side:
 {noformat}
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 (com.datastax.driver.core.TransportException: [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection has been closed))
 at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_55]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
 Logs from cassandra/system.log
 {noformat}
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - Scanned over 10 tombstones in system.hints; query aborted (see tombstone_failure_threshold)
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - Exception in thread Thread[HintedHandoff:2,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
 at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_55]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 2f7077c06 -> 3f6ad3c98


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/cassandra-2.1
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick michael.pen...@datastax.com
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change (NEW_NODE or REMOVED_NODE) followed by the address of
-  the new/removed node.
+  type of change (NEW_NODE, REMOVED_NODE, or MOVED_NODE) followed
+  by the address of the new/removed/moved node.
 - STATUS_CHANGE: events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more

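The spec change above warns that the same event may be pushed more than once and that client libraries should ignore notifications they have already seen. A minimal sketch of such a duplicate filter (hypothetical class and method names, not taken from any actual driver): remember the last observed status per node and only act on events that change it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal duplicate-event filter: remembers the last status seen per node
// and reports whether a new event actually carries new information.
public class EventDeduplicator {
    private final Map<String, String> lastStatus = new ConcurrentHashMap<>();

    // Returns true only when the event changes the known state of the node,
    // e.g. accept("10.0.0.1", "UP") twice in a row yields true, then false.
    public boolean accept(String nodeAddress, String status) {
        String previous = lastStatus.put(nodeAddress, status);
        return !status.equals(previous);
    }
}
```

A driver would call `accept` before dispatching an UP/DOWN notification to user listeners, dropping the event when it returns false.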


[jira] [Commented] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345648#comment-14345648
 ] 

Benedict commented on CASSANDRA-8832:
-

bq. AFAICS the actual fix for the problem was committed as part of 7705 and 
this patch only adds continued processing after exceptions. Can you confirm 
this?

Regrettably, no. This was broken _by_ 7705, unfortunately. I've included a 
regression test that demonstrates the problem. In the event that 
currentlyOpenedEarly != null, and we abort, we do not close (or unmark 
compacting) the early opened file.

bq. replaceWithFinishedReaders can also throw (e.g. due to a reference counting 
bug), hiding any earlier errors. It should also be wrapped in a try/merge block.

I wasn't too sure about this when I wrote it, since it both shouldn't fail in 
the same way (has to be programmer error rather than other problems), and it 
itself leaves the program in a problematic state if it doesn't complete 
successfully. A lot of code paths need reworking to be resilient to this, and I 
didn't want to scope creep. However since you raise it, I've opted to fix this 
latter problem and also wrap it in its own try/catch as you suggest.

bq. The static merge of throwables will probably be needed in many other 
places. Could we move it to a more generic location?

Again, I was torn on writing it since I can't think of a good place to group 
it. I've created our own Throwables utility class, which contains only this for 
now. If you have a better idea for where to put it, pipe up.
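The merge-and-continue pattern being discussed can be sketched as follows (illustrative names under stated assumptions; this is not the actual Cassandra Throwables utility): each rollback action runs in its own try/catch, and any later failure is attached to the first via addSuppressed, so no earlier error is hidden by a later one.

```java
import java.util.List;

public class Rollback {
    // Merge two throwables: keep the first as primary and attach the second
    // as a suppressed exception, so neither error is lost.
    static Throwable merge(Throwable accumulate, Throwable t) {
        if (accumulate == null)
            return t;
        accumulate.addSuppressed(t);
        return accumulate;
    }

    // Run every action even if earlier ones fail; rethrow the merged failure
    // (if any) only after all actions have been attempted.
    static void runAll(List<Runnable> actions) {
        Throwable accumulate = null;
        for (Runnable action : actions) {
            try {
                action.run();
            } catch (Throwable t) {
                accumulate = merge(accumulate, t);
            }
        }
        if (accumulate != null)
            throw new RuntimeException("one or more rollback actions failed", accumulate);
    }
}
```

The design point is the one raised in the review: wrapping each step individually means a reference-counting bug in a later step can no longer mask the original failure.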



 SSTableRewriter.abort() should be more robust to failure
 

 Key: CASSANDRA-8832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4


 This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
 during abort, introducing a failure risk. This patch further preempts 
 CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
 any internal assertion checks do not actually worsen the state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8730) Optimize UUIDType comparisons

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345416#comment-14345416
 ] 

Benedict commented on CASSANDRA-8730:
-

I've pushed a small change with a very simple trick that permits both faster and 
simpler signed-byte comparison of the LSB in TimeUUIDType.
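The comment does not include the patch itself, but one classic trick of this kind can be sketched as follows (an illustration, not the committed change): XOR-ing the sign bit of every byte turns signed-byte lexicographic order into plain unsigned order, so the 8-byte LSB can be compared with a single unsigned 64-bit comparison instead of a byte-by-byte loop.

```java
import java.nio.ByteBuffer;

public class SignedByteCompare {
    // 0x80 in every byte position: XOR-ing with this flips each byte's sign bit.
    private static final long SIGN_MASK = 0x8080808080808080L;

    // Reference implementation: lexicographic comparison of 8 bytes,
    // each treated as a signed value (Java bytes are signed).
    static int bytewise(byte[] a, byte[] b) {
        for (int i = 0; i < 8; i++)
            if (a[i] != b[i])
                return a[i] < b[i] ? -1 : 1;
        return 0;
    }

    // Same ordering with one 64-bit comparison: after flipping the sign bits,
    // a big-endian unsigned word compare matches the signed byte-by-byte order.
    static int wordwise(byte[] a, byte[] b) {
        long la = ByteBuffer.wrap(a).getLong() ^ SIGN_MASK;  // getLong() is big-endian
        long lb = ByteBuffer.wrap(b).getLong() ^ SIGN_MASK;
        return Long.compareUnsigned(la, lb);
    }
}
```

The mask works because XOR-ing 0x80 maps the signed byte range -128..127 monotonically onto the unsigned range 0..255, and big-endian packing makes the word comparison agree with lexicographic byte order.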

 Optimize UUIDType comparisons
 -

 Key: CASSANDRA-8730
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8730
 Project: Cassandra
  Issue Type: Improvement
Reporter: J.B. Langston
Assignee: Benedict
 Fix For: 3.0


 Compaction is slow on tables using compound keys containing UUIDs due to 
 being CPU bound by key comparison.  [~benedict] said he sees some easy 
 optimizations that could be made for UUID comparison.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345460#comment-14345460
 ] 

Branimir Lambov edited comment on CASSANDRA-8832 at 3/3/15 6:39 PM:


AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268#diff-426d04d201a410848604b55984d1b370R291]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/merge block.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?


was (Author: blambov):
AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/merge block.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?

 SSTableRewriter.abort() should be more robust to failure
 

 Key: CASSANDRA-8832
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1.4


 This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
 during abort, introducing a failure risk. This patch further preempts 
 CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
 any internal assertion checks do not actually worsen the state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Document bind markers for TIMESTAMP, TTL, and LIMIT

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk fccf0b4f6 -> b53313533


Document bind markers for TIMESTAMP, TTL, and LIMIT

Patch by Tyler Hobbs for CASSANDRA-8889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757

Branch: refs/heads/trunk
Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2
Parents: 72c6ed2
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:01:43 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:01:43 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 6085d00..cf074af 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is 
an optimization that a
 
 In a statement, each time a column value is expected (in the data manipulation 
and query statements), a @variable@ (see above) can be used instead. A 
statement with bind variables must then be _prepared_. Once it has been 
prepared, it can executed by providing concrete values for the bind variables. 
The exact procedure to prepare a statement and execute a prepared statement 
depends on the CQL driver used and is beyond the scope of this document.
 
+In addition to providing column values, bind markers may be used to provide 
values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses.  If anonymous bind markers 
are used, the names for the query parameters will be @[limit]@, @[timestamp]@, 
and @[ttl]@, respectively.
+
 
 h2(#dataDefinition). Data Definition
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6d82a55
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6d82a55
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6d82a55

Branch: refs/heads/cassandra-2.1
Commit: f6d82a55fbf938286245c8ed510094715d0c4dc1
Parents: 3f6ad3c 6ee0c75
Author: Tyler Hobbs ty...@datastax.com
Authored: Tue Mar 3 14:02:47 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Mar 3 14:02:47 2015 -0600

--
 doc/cql3/CQL.textile | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6d82a55/doc/cql3/CQL.textile
--



[jira] [Commented] (CASSANDRA-8657) long-test LongCompactionsTest fails

2015-03-03 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345751#comment-14345751
 ] 

Yuki Morishita commented on CASSANDRA-8657:
---

+1

 long-test LongCompactionsTest fails
 ---

 Key: CASSANDRA-8657
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8657
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Carl Yeksigian
Priority: Minor
 Fix For: 2.0.13, 2.1.4

 Attachments: 8657-2.0.txt, system.log


 Same error on 3 of the 4 tests in this suite - failure is the same for 2.0 
 and 2.1 branch:
 {noformat}
 [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
 [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 27.294 sec
 [junit]
 [junit] Testcase: testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest): FAILED
 [junit] /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] at org.apache.cassandra.db.compaction.AbstractCompactionTask.<init>(AbstractCompactionTask.java:49)
 [junit] at org.apache.cassandra.db.compaction.CompactionTask.<init>(CompactionTask.java:47)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67)
 [junit]
 [junit] Testcase: testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest): FAILED
 [junit] /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] at org.apache.cassandra.db.compaction.AbstractCompactionTask.<init>(AbstractCompactionTask.java:49)
 [junit] at org.apache.cassandra.db.compaction.CompactionTask.<init>(CompactionTask.java:47)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58)
 [junit]
 [junit] Testcase: testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest): FAILED
 [junit] /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] junit.framework.AssertionFailedError: /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db is not correctly marked compacting
 [junit] at org.apache.cassandra.db.compaction.AbstractCompactionTask.<init>(AbstractCompactionTask.java:49)
 [junit] at org.apache.cassandra.db.compaction.CompactionTask.<init>(CompactionTask.java:47)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
 [junit] at org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49)
 [junit]
 [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED
 {noformat}
 A system.log is attached from the above run on 2.0 HEAD.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix rare NPE in KeyCacheSerializer

2015-03-03 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 abc4a37d0 -> bef1d0cb0


Fix rare NPE in KeyCacheSerializer

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-8067


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bef1d0cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bef1d0cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bef1d0cb

Branch: refs/heads/cassandra-2.1
Commit: bef1d0cb064faa3641fee31e1584b77ca95c9843
Parents: abc4a37
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 3 13:53:14 2015 -0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 3 13:56:18 2015 -0800

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/CacheService.java | 9 ++---
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 76c2e10..a90dd48 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
  * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
  * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
  * Fix parallelism adjustment in range and secondary index queries

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/src/java/org/apache/cassandra/service/CacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/CacheService.java 
b/src/java/org/apache/cassandra/service/CacheService.java
index 1b93c2c..48c0941 100644
--- a/src/java/org/apache/cassandra/service/CacheService.java
+++ b/src/java/org/apache/cassandra/service/CacheService.java
@@ -467,11 +467,14 @@ public class CacheService implements CacheServiceMBean
 RowIndexEntry entry = CacheService.instance.keyCache.get(key);
 if (entry == null)
 return;
+
+CFMetaData cfm = Schema.instance.getCFMetaData(key.cfId);
+if (cfm == null)
+return; // the table no longer exists.
+
 ByteBufferUtil.writeWithLength(key.key, out);
-Descriptor desc = key.desc;
-out.writeInt(desc.generation);
+out.writeInt(key.desc.generation);
 out.writeBoolean(true);
-CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
 cfm.comparator.rowIndexEntrySerializer().serialize(entry, out);
 }
 



[jira] [Updated] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7094:
---
Assignee: Philip Thompson  (was: Tyler Hobbs)

 cqlsh: DESCRIBE is not case-insensitive
 ---

 Key: CASSANDRA-7094
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7094
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: cassandra 1.2.16
Reporter: Karl Mueller
Assignee: Philip Thompson
Priority: Trivial
  Labels: cqlsh

 Keyspaces which are named starting with capital letters (and perhaps other 
 things) sometimes require double quotes and sometimes do not.
 For example, describe works without quotes:
 cqlsh describe keyspace ProductGenomeLocal;
 CREATE KEYSPACE ProductGenomeLocal WITH replication = {
   'class': 'SimpleStrategy',
   'replication_factor': '3'
 };
 USE ProductGenomeLocal;
 [...]
 But use will not:
 cqlsh use ProductGenomeLocal;
 Bad Request: Keyspace 'productgenomelocal' does not exist
 It seems that quotes should only really be necessary when there are spaces or 
 other symbols that need to be quoted. 
 At the least, the acceptance or failure of quotes should be consistent.
 Other minor annoyance: tab expansion works in use and describe with quotes, 
 but will not work in either without quotes.
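
 A minimal sketch of the rule behind this difference (the normalize_identifier 
 helper below is hypothetical, for illustration only, and is not cqlsh's actual 
 code): in CQL, an unquoted identifier is case-folded to lowercase before 
 lookup, while a double-quoted identifier keeps its exact case.

```shell
#!/bin/sh
# Hypothetical helper mimicking CQL identifier handling:
# unquoted identifiers are folded to lowercase, quoted ones keep their case.
normalize_identifier() {
    case "$1" in
        \"*\")
            # Double-quoted: strip the surrounding quotes, preserve the case.
            printf '%s\n' "$1" | sed 's/^"//; s/"$//'
            ;;
        *)
            # Unquoted: fold to lowercase, as the server does before lookup.
            printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]'
            ;;
    esac
}

normalize_identifier 'ProductGenomeLocal'    # unquoted -> productgenomelocal
normalize_identifier '"ProductGenomeLocal"'  # quoted   -> ProductGenomeLocal
```

 Under this rule, use ProductGenomeLocal; looks up the keyspace 
 productgenomelocal (which does not exist), while use "ProductGenomeLocal"; 
 would find it; describe apparently bypasses the folding.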





[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345871#comment-14345871
 ] 

Aleksey Yeschenko commented on CASSANDRA-8870:
--

They will if you update or insert null.

 Tombstone overwhelming issue aborts client queries
 --

 Key: CASSANDRA-8870
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.1.2, ubuntu 12.04
Reporter: Jeff Liu

 We are seeing query timeout issues on clients that are trying to query data 
 from the Cassandra cluster. 
 Nodetool status shows that all nodes are still up regardless.
 Logs from client side:
 {noformat}
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 (com.datastax.driver.core.TransportException: [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection has been closed))
 at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_55]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
 Logs from cassandra/system.log
 {noformat}
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - Scanned over 10 tombstones in system.hints; query aborted (see tombstone_failure_threshold)
 ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - Exception in thread Thread[HintedHandoff:2,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
 at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555) ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_55]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
 {noformat}
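
 The thresholds referenced by this error are ordinary cassandra.yaml settings. 
 A self-contained sketch of checking them (the file path below is a stand-in 
 for your real cassandra.yaml, and the values shown are the usual 2.1 defaults; 
 verify against your own install):

```shell
# Write a sample config fragment and grep the two tombstone thresholds from it.
# /tmp/cassandra-sample.yaml is illustrative, not a real deployment path.
cat > /tmp/cassandra-sample.yaml <<'EOF'
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
EOF
grep -E 'tombstone_(warn|failure)_threshold' /tmp/cassandra-sample.yaml
```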





[jira] [Commented] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP

2015-03-03 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345711#comment-14345711
 ] 

Jack Krupansky commented on CASSANDRA-8889:
---

Thanks. The change for the special variable names looks fine, but the grammar 
for LIMIT, TTL, and TIMESTAMP still says integer - it needs to be ( 
integer | variable ).

 CQL spec is missing doc for support of bind variables for LIMIT, TTL, and 
 TIMESTAMP
 ---

 Key: CASSANDRA-8889
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8889
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation  website
Reporter: Jack Krupansky
Assignee: Tyler Hobbs
Priority: Minor

 CASSANDRA-4450 added the ability to specify a bind variable for the integer 
 value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been 
 updated to reflect this enhancement.
 Also, the special predefined bind variable names are not documented in the 
 CQL spec: [limit], [ttl], and [timestamp].





[jira] [Commented] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea

2015-03-03 Thread Sumod Pawgi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345770#comment-14345770
 ] 

Sumod Pawgi commented on CASSANDRA-8890:


Thanks Philip, I will take a shot at that.

 Enhance cassandra-env.sh to handle Java version output in case of OpenJDK 
 icedtea
 --

 Key: CASSANDRA-8890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8890
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
 Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago)
Reporter: Sumod Pawgi
Priority: Minor
 Fix For: 2.1.4


 Where observed - 
 Cassandra node has OpenJDK - 
 java version "1.7.0_09-icedtea"
 In some situations, external agents trying to monitor a C* cluster need 
 to run the cassandra -v command to determine the Cassandra version and 
 expect a numerical output, e.g. java version "1.7.0_75" as in the case of 
 Oracle JDK. But if the cluster has OpenJDK IcedTea installed, this condition is 
 not satisfied and the agents will not work correctly, as the output from 
 cassandra -v is 
 /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: 
 integer expression expected
 Cause - 
 The line which is causing this behavior is -
 jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
 'NR==1 {print $2}'`
 Suggested enhancement -
 If we change the line to -
  jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`,
 it will give $jvmver as - 1.7.0_09 for the above case. 
 Can we add this enhancement to cassandra-env.sh? I would like to add it 
 myself and submit it for review, but I am not familiar with the C* check-in 
 process. There might be better ways to do this, but I thought this was the 
 simplest, and as the addition is at the end of the line, it will be easy to 
 reverse if needed.
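
 The suggested pipeline can be checked against the reported version string 
 (note: the double-quote field separators in the awk commands above were 
 stripped by the mail archive; they are restored in this sketch):

```shell
# Sample `java -version` first line, as reported on the affected node.
java_ver_output='java version "1.7.0_09-icedtea"'

# Suggested cassandra-env.sh pipeline: take the quoted version string,
# then keep only the part before the first dash (drops "-icedtea").
jvmver=$(echo "$java_ver_output" | grep '[openjdk|java] version' \
    | awk -F'"' 'NR==1 {print $2}' \
    | awk 'BEGIN {FS="-"};{print $1}')

echo "$jvmver"
```

 With the sample input this yields 1.7.0_09, which the version check at 
 cassandra-env.sh line 102 can then treat as a plain numeric version.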





[jira] [Commented] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-03-03 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345850#comment-14345850
 ] 

Jens Rantil commented on CASSANDRA-8574:


I'd be fine with that solution as long as the underlying problem can be solved 
-- the fact that it's really hard to reliably page through results that have a 
large number of tombstones.

 Gracefully degrade SELECT when there are lots of tombstones
 ---

 Key: CASSANDRA-8574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jens Rantil
 Fix For: 3.0


 *Background:* There's lots of tooling out there to do BigData analysis on 
 Cassandra clusters. Examples are Spark and Hadoop, both of which are offered 
 by DSE. The problem with both of these so far is that a single partition key 
 with too many tombstones can make the query job fail hard.
 The described scenario happens despite the user setting a rather small 
 FetchSize. I assume this is a common scenario if you have larger rows.
 *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
 smaller batch of results if there are too many tombstones. The tombstones are 
 ordered according to clustering key and one should be able to page through 
 them. Potentially:
 SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
 would page through at most 1000 tombstones, _or_ 1000 (CQL) rows.
 I understand that this obviously would degrade performance, but it would at 
 least yield a result.
 *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
 guess this would be doable. Let me know what you think.





[jira] [Created] (CASSANDRA-8901) Generalize progress reporting between tools and a server

2015-03-03 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-8901:
-

 Summary: Generalize progress reporting between tools and a server
 Key: CASSANDRA-8901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8901
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor


Right now, {{nodetool repair}} uses its own method and JMX notification message 
format to report the progress of an async operation call. As we are expanding 
async calls to other operations (CASSANDRA-7124), we should have a generalized 
way to report to clients.





[jira] [Commented] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14345673#comment-14345673
 ] 

Tyler Hobbs commented on CASSANDRA-7094:


[~philipthompson] do you want to take a stab at this?

 cqlsh: DESCRIBE is not case-insensitive
 ---

 Key: CASSANDRA-7094
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7094
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: cassandra 1.2.16
Reporter: Karl Mueller
Assignee: Tyler Hobbs
Priority: Trivial
  Labels: cqlsh

 Keyspaces which are named starting with capital letters (and perhaps other 
 things) sometimes require double quotes and sometimes do not.
 For example, describe works without quotes:
 cqlsh describe keyspace ProductGenomeLocal;
 CREATE KEYSPACE ProductGenomeLocal WITH replication = {
   'class': 'SimpleStrategy',
   'replication_factor': '3'
 };
 USE ProductGenomeLocal;
 [...]
 But use will not:
 cqlsh use ProductGenomeLocal;
 Bad Request: Keyspace 'productgenomelocal' does not exist
 It seems that quotes should only really be necessary when there are spaces or 
 other symbols that need to be quoted. 
 At the least, the acceptance or failure of quotes should be consistent.
 Other minor annoyance: tab expansion works in use and describe with quotes, 
 but will not work in either without quotes.





[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab15d8e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab15d8e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab15d8e6

Branch: refs/heads/trunk
Commit: ab15d8e61698809913fcf9c32817551dafefe699
Parents: 6951726 abc4a37
Author: Yuki Morishita yu...@apache.org
Authored: Tue Mar 3 15:07:24 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Tue Mar 3 15:07:24 2015 -0600

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab15d8e6/CHANGES.txt
--



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dfe1be
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dfe1be
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dfe1be

Branch: refs/heads/trunk
Commit: b2dfe1be96288bd9d15ec40cd3d20deff09ca625
Parents: ab15d8e bef1d0c
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Mar 3 13:58:37 2015 -0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Mar 3 13:58:37 2015 -0800

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/service/CacheService.java | 10 ++
 2 files changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfe1be/CHANGES.txt
--
diff --cc CHANGES.txt
index d8b222e,a90dd48..cc3658d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,5 +1,66 @@@
 +3.0
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 + * Select optimal CRC32 implementation at runtime (CASSANDRA-8614)
 + * Evaluate MurmurHash of Token once per query (CASSANDRA-7096)
 +
 +
  2.1.4
+  * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
   * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
   * Fix commitlog getPendingTasks 

  1   2   3   >