[jira] [Updated] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-7817:


Priority: Minor  (was: Major)

 when entire row is deleted, the records in the row seem to be counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in a development cluster, but was able to reproduce it in a 
 single-node setup.  In the development cluster I had more than 52,000 records 
 and used default values for the tombstone thresholds.
 For testing purposes, I used lower numbers for the thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 -------
    470
 (1 rows)
 cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to be counted toward TombstoneOverwhelmingException

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106530#comment-14106530
 ] 

Sylvain Lebresne commented on CASSANDRA-7817:
-

I believe this is working as designed. The goal of the threshold is to warn 
during a read if too many cells are skipped due to tombstoning, because 
experience showed that people weren't understanding why their queries were 
slow. If you have a huge partition and you delete it, the code still has 
to read and skip all previous records until those are compacted away.

Now, I understand the naming of the option could sound slightly confusing in 
that case, and a possibly more precise name would be 
tombstoned_cells_warn_threshold, but I'm not sure it's worth changing the 
option name at this point.
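The counting Sylvain describes can be sketched in miniature (a toy model in Python, not Cassandra's actual SliceQueryFilter logic; all names and the cell representation are illustrative):

```python
class TombstoneOverwhelmingException(Exception):
    """Raised when a single read scans too many tombstoned cells."""

TOMBSTONE_WARN_THRESHOLD = 100      # values from the reproduction above
TOMBSTONE_FAILURE_THRESHOLD = 1000

def read_partition(cells, now):
    """Scan a partition, skipping deleted cells but counting every skip."""
    live, tombstones = [], 0
    for cell in cells:
        if cell["deleted_at"] is not None and cell["deleted_at"] <= now:
            tombstones += 1  # a deleted cell still has to be read and skipped
            if tombstones > TOMBSTONE_FAILURE_THRESHOLD:
                raise TombstoneOverwhelmingException(
                    "scanned %d tombstoned cells" % tombstones)
        else:
            live.append(cell["value"])
    if tombstones > TOMBSTONE_WARN_THRESHOLD:
        print("WARN: read %d live and %d tombstoned cells" % (len(live), tombstones))
    return live

# Deleting the whole partition marks every cell deleted, so the next read
# scans 1500 tombstoned cells and fails even though it returns 0 rows.
cells = [{"value": i, "deleted_at": 10} for i in range(1500)]
```

Until compaction purges the tombstones, every read of the partition pays this scanning cost, which is why the threshold fires right after the partition-level delete in the reproduction above.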

 when entire row is deleted, the records in the row seem to be counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor






[jira] [Created] (CASSANDRA-7818) Improve compaction logging

2014-08-22 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-7818:
--

 Summary: Improve compaction logging
 Key: CASSANDRA-7818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7818
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.1


We should log more information about compactions to be able to debug issues 
more efficiently

* give each CompactionTask an id that we log (so that you can relate the 
start-compaction-messages to the finished-compaction ones)
* log what level the sstables are taken from
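The two bullets amount to threading a stable identifier through the start and finish log lines; a rough Python sketch of the idea (names are illustrative, this is not the eventual Cassandra patch):

```python
import itertools
import logging

logging.basicConfig(format="%(message)s", level=logging.INFO)
log = logging.getLogger("compaction")
_next_task_id = itertools.count(1)

class CompactionTask:
    def __init__(self, sstables):
        # unique id so start/finish messages for this task can be correlated
        self.task_id = next(_next_task_id)
        self.sstables = sstables

    def run(self):
        levels = sorted({s["level"] for s in self.sstables})
        log.info("Compacting (task %d): %d sstables from level(s) %s",
                 self.task_id, len(self.sstables), levels)
        result = {"level": max(levels) + 1}  # toy stand-in for the merged sstable
        log.info("Compacted (task %d): wrote 1 sstable at level %d",
                 self.task_id, result["level"])
        return result
```

With the id in both messages, grepping a busy log for `task N` reconstructs the lifetime of one compaction even when many run interleaved.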





[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106635#comment-14106635
 ] 

Marcus Eriksson commented on CASSANDRA-7042:


I'm trying to figure out the file-not-found exceptions in CASSANDRA-7145 - I 
don't think they are related to the disk space growth, though, so I'm not 
marking this as a duplicate (yet).

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt


 Cassandra constantly eats disk space; we're not sure what's causing it, and 
 the only thing that seems to fix it is a restart of Cassandra. This happens 
 about every 3-5 hrs: we grow from about 350GB to 650GB with no end in sight. 
 Once we restart Cassandra it usually all clears itself up and disks return to 
 normal for a while; then something triggers it and the usage starts climbing 
 again. Sometimes when we restart, pending compactions skyrocket, and if we 
 restart a second time they drop back to a normal level. One other thing to 
 note is that the space is not freed until Cassandra starts back up, not when 
 it is shut down.
 I will get a clean log from before and after restarting the next time it 
 happens and post it.
 Here is a common ERROR in our logs that might be related
 {noformat}
 ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
 (line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:194)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:258)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at java.io.RandomAccessFile.open(Native Method)
 at java.io.RandomAccessFile.init(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:58)
 at 
 org.apache.cassandra.io.util.ThrottledReader.init(ThrottledReader.java:35)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:49)
 ... 17 more
 {noformat}





[jira] [Commented] (CASSANDRA-7809) UDF cleanups (#7395 follow-up)

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106640#comment-14106640
 ] 

Sylvain Lebresne commented on CASSANDRA-7809:
-

bq. Maybe put a placeholder in the map of declared functions?

Good idea, as that also allows us to provide a more meaningful error message 
when the badly-loaded function is executed. I've pushed an additional commit to 
the branch.

bq. Just return argRes if it's not EXACT_MATCH

That's not exactly what we want, because even if we get a {{WEAKLY_ASSIGNABLE}}, 
we still need to finish the loop to check whether some argument might not be 
assignable at all. But arguably the switch was not super elegant, so I changed 
it and added a comment.

bq. Why does Maps/Sets/Lists.testAssignment() return WEAKLY_ASSIGNABLE for 
exact matches?

That was part laziness and part because it's not really all that useful in 
practice (it'll make a difference for something like '\{(int)?, (int)?, 
(int)?}' but that's about it, and why would you ever write that when you can 
write '(set<int>)\{?, ?, ?}' or really just '(set<int>)?'?). But anyway, 
neither are extremely good reasons, so I've fixed it.
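Sylvain's point about the loop, that a {{WEAKLY_ASSIGNABLE}} result cannot short-circuit because a later argument may still be entirely unassignable, boils down to folding per-argument results into the weakest verdict. A hypothetical sketch (the enum names mirror the ticket's vocabulary; the real code is Cassandra's Java overload-selection logic):

```python
from enum import IntEnum

class Match(IntEnum):
    # ordered so that min() yields the weakest verdict
    NOT_ASSIGNABLE = 0
    WEAKLY_ASSIGNABLE = 1
    EXACT_MATCH = 2

def match_arguments(per_argument_matches):
    """Fold per-argument test results into one verdict for the overload.

    Returning early on the first WEAKLY_ASSIGNABLE would be wrong: a later
    argument may be NOT_ASSIGNABLE, which must reject the overload outright.
    """
    verdict = Match.EXACT_MATCH
    for m in per_argument_matches:
        verdict = min(verdict, m)
        if verdict is Match.NOT_ASSIGNABLE:
            return verdict  # the only verdict that can safely short-circuit
    return verdict
```

So `[EXACT_MATCH, WEAKLY_ASSIGNABLE, NOT_ASSIGNABLE]` must come out as `NOT_ASSIGNABLE`, which an early return on the second element would miss.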

 UDF cleanups (#7395 follow-up)
 --

 Key: CASSANDRA-7809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7809
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 3.0


 The current code for UDF is largely not reusing the pre-existing 
 mechanics/code for native/hardcoded functions. I don't see a good reason for 
 that, but I do see downsides: it's more code to maintain and makes it much 
 easier to have inconsistent behavior between hard-coded and user-defined 
 functions. More concretely, {{UDFRegistery/UDFFunctionOverloads}} 
 fundamentally do the same thing as {{Functions}}; we should just merge 
 both. I'm also not sure there is a need for both {{UFMetadata}} and 
 {{UDFunction}}, since {{UFMetadata}} really only stores info on a given 
 function (contrary to what the javadoc pretends).  I suggest we consolidate 
 all this to clean up the code, but also as a way to fix 2 problems that the 
 UDF code has but that the existing code for native functions doesn't:
 * if there are multiple overloads of a function, the UDF code picks the first 
 version whose argument types are compatible with the concrete arguments 
 provided. This is broken for bind markers: we don't know the type of markers, 
 and so the first function match may not at all be what the user wants. The 
 only sensible choice is to detect that type of ambiguity and reject the 
 query, asking the user to explicitly type-cast their bind marker (which is 
 what the code for hard-coded functions does).
 * the UDF code builds a function signature using the CQL type names of the 
 arguments and uses that to distinguish multiple overloads in the schema. This 
 means in particular that {{f(v text)}} and {{f(v varchar)}} are considered 
 distinct, which is wrong since CQL considers {{varchar}} a simple alias of 
 {{text}}. And in fact, the function resolution does consider them aliases, 
 leading to seemingly broken behavior.
 There are a few other small problems that I'm proposing to fix while doing 
 this cleanup:
 * Function creation only uses the function name when checking if the function 
 exists, which is not enough since we allow multiple overloads. You can 
 bypass the check by using OR REPLACE, but that's obviously broken.
 * {{IF NOT EXISTS}} for function creation is broken.
 * The code allows replacing a function (with {{OR REPLACE}}) with a new 
 function with an incompatible return type. Imo that's dodgy and we should 
 refuse it (users can still drop and re-create the function if they really want).
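The varchar/text inconsistency in the second bullet is easy to see in miniature: a signature built from raw CQL type-name strings stores two overloads that resolution, which treats the names as aliases, can no longer tell apart (a sketch; the function and map names are made up):

```python
# text and varchar denote the same CQL type; only the spelling differs.
CQL_TYPE_ALIASES = {"varchar": "text"}

def raw_signature(name, arg_types):
    # what the broken UDF code effectively stores in the schema
    return (name, tuple(arg_types))

def resolved_signature(name, arg_types):
    # what function resolution effectively compares against
    return (name, tuple(CQL_TYPE_ALIASES.get(t, t) for t in arg_types))

# The schema happily keeps f(v text) and f(v varchar) as distinct entries...
assert raw_signature("f", ["text"]) != raw_signature("f", ["varchar"])
# ...but resolution sees one and the same function, hence the broken behavior.
assert resolved_signature("f", ["text"]) == resolved_signature("f", ["varchar"])
```

Canonicalising type names before building the stored signature, as the proposed consolidation implies, makes the two views agree.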





[jira] [Resolved] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Jonathan Halliday (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Halliday resolved CASSANDRA-7810.
--

Resolution: Cannot Reproduce

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py


 # single node environment
 CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 use test;
 create table foo (a int, b int, primary key(a,b));
 alter table foo with gc_grace_seconds = 0;
 insert into foo (a,b) values (1,2);
 select * from foo;
 -- one row returned. so far, so good.
 delete from foo where a=1 and b=2;
 select * from foo;
 -- 0 rows. still rainbows and kittens.
 bin/nodetool flush;
 bin/nodetool compact;
 select * from foo;
  a | b
 ---+---
  1 | 2
 (1 rows)
 gahhh.
 looks like the tombstones were considered obsolete and thrown away before 
 being applied in the compaction?  gc_grace just means the interval after 
 which they won't be available for remote node repair - they should still 
 apply locally regardless (and they do, correctly, in 2.0.9)
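For reference, the behavior the reporter expected from gc_grace_seconds can be modeled as two ordered steps: the tombstone must first shadow the data it covers within the compaction, and only then may it be purged once the grace period has elapsed (a toy model; the row/tombstone representation is invented):

```python
def compact(rows, tombstones, now, gc_grace_seconds):
    """Merge rows with tombstones: apply deletions first, then maybe purge."""
    # 1. Apply: any row written before a matching deletion is dropped,
    #    regardless of whether the tombstone itself is about to be purged.
    surviving = [r for r in rows
                 if not any(t["key"] == r["key"] and
                            t["deleted_at"] >= r["written_at"]
                            for t in tombstones)]
    # 2. Purge: a tombstone may be discarded only after gc_grace has passed;
    #    purging must never resurrect the data it shadowed.
    kept_tombstones = [t for t in tombstones
                       if now < t["deleted_at"] + gc_grace_seconds]
    return surviving, kept_tombstones

rows = [{"key": (1, 2), "written_at": 100}]
tombs = [{"key": (1, 2), "deleted_at": 200}]
# With gc_grace_seconds = 0 the tombstone is immediately purgeable, but the
# shadowed row must still be dropped in the same compaction; the bug report
# describes the row surviving instead.
```

Reversing the two steps (purging before applying) is exactly the resurrection the reporter thought he was seeing.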





[jira] [Commented] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Jonathan Halliday (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106645#comment-14106645
 ] 

Jonathan Halliday commented on CASSANDRA-7810:
--

hmm, looks like something fishy in my environment - it works fine when I spin 
up a new vm instance for the test.  Guess I'm going to be rebuilding my dev 
environment this morning then...   Thanks guys.

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py







[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106646#comment-14106646
 ] 

Benedict commented on CASSANDRA-7042:
-

At a guess, this is CASSANDRA-7139. Try raising your compaction throughput 
limit and/or lowering your concurrent_compactors (both in the cassandra.yaml)
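Both knobs live in cassandra.yaml. For illustration only (example values; the right numbers depend entirely on your disks and CPU):

```yaml
# cassandra.yaml: example values only, tune for your hardware
compaction_throughput_mb_per_sec: 64   # default is 16; raise so compaction keeps up
concurrent_compactors: 2               # lower if parallel compactions retain too many obsolete sstables
```

The throughput limit can also be adjusted at runtime with `nodetool setcompactionthroughput`.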



 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt







[jira] [Created] (CASSANDRA-7819) In progress compactions should not prevent deletion of stale sstables

2014-08-22 Thread Benedict (JIRA)
Benedict created CASSANDRA-7819:
---

 Summary: In progress compactions should not prevent deletion of 
stale sstables
 Key: CASSANDRA-7819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7819
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Priority: Minor


Compactions retain references to potentially many sstables that existed when 
they were started but that are now obsolete; many concurrent compactions can 
compound this dramatically, and with very large files in size-tiered compaction 
it is possible to inflate disk utilisation dramatically beyond what is 
necessary.

I propose, during compaction, periodically checking which sstables are obsolete 
and simply replacing each with the sstable that replaced it. These sstables are 
by definition only used for lookup: since we are in the process of obsoleting 
the sstables we're compacting, they're only used to reference overlapping 
ranges which may be covered by tombstones.

The simplest solution might even be to simply detect obsoletion and recalculate 
our overlapping tree afresh. This is a pretty quick operation in the grand 
scheme of things, certainly wrt compaction, so nothing is lost by doing this at 
the rate we obsolete sstables.

See CASSANDRA-7139 for original discussion of the problem.
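The proposal can be sketched as a periodic pass over the compaction's retained references, swapping each obsolete sstable for whatever superseded it so the old file's disk space can be reclaimed before the compaction finishes (an illustrative Python sketch, not the attached 7819.txt):

```python
def refresh_overlaps(overlapping, replacements):
    """Swap obsolete sstable references for their successors.

    `replacements` maps an obsolete sstable to the sstable that replaced it
    (or to None if its data was compacted away entirely). Dropping the old
    reference lets the obsolete file be deleted mid-compaction instead of
    lingering until the compaction releases all its references.
    """
    refreshed = []
    for sstable in overlapping:
        while sstable in replacements:  # follow chains: a -> b -> c
            sstable = replacements[sstable]
        if sstable is not None and sstable not in refreshed:
            refreshed.append(sstable)
    return refreshed

# A long-running compaction still holds refs to a, b, c while a concurrent
# compaction has replaced a and b with d:
overlaps = refresh_overlaps(["a", "b", "c"], {"a": "d", "b": "d"})
```

Recomputing the whole overlap tree afresh, the ticket's simpler alternative, achieves the same effect at a modest extra cost.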





[jira] [Updated] (CASSANDRA-7819) In progress compactions should not prevent deletion of stale sstables

2014-08-22 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7819:


Attachment: 7819.txt

Went ahead and did it, since it was surprisingly simple to throw together. 
[~krummas] wdyt? Not sure if our unit tests are sufficient to check the safety 
of it, but it seems trivially safe enough to me, since compaction is 
single-threaded.

 In progress compactions should not prevent deletion of stale sstables
 -

 Key: CASSANDRA-7819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7819
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Priority: Minor
  Labels: compaction
 Fix For: 2.0.10

 Attachments: 7819.txt







[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1410#comment-1410
 ] 

Benedict commented on CASSANDRA-7042:
-

I've filed CASSANDRA-7819 with a more general fix, in case this turns out to be 
something different. 

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt







git commit: Add USING to CREATE FUNCTION for CASSANDRA-7811

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk bc9dc0f0a -> aad152d81


Add USING to CREATE FUNCTION for CASSANDRA-7811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aad152d8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aad152d8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aad152d8

Branch: refs/heads/trunk
Commit: aad152d81c3a9fcd25222c9d9cf3e10265607906
Parents: bc9dc0f
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 11:47:13 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 11:47:13 2014 +0200

--
 pylib/cqlshlib/cql3handling.py  |  2 +-
 src/java/org/apache/cassandra/cql3/Cql.g|  2 +-
 test/unit/org/apache/cassandra/cql3/UFTest.java | 32 ++--
 3 files changed, 18 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad152d8/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 9d1187a..5e28a9c 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -1001,7 +1001,7 @@ syntax_rules += r'''
   stringLiteral
 )
   )
-  | (stringLiteral)
+  | ("USING" stringLiteral)
 )
  ;
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad152d8/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index d44fc7c..01da5ca 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -513,7 +513,7 @@ createFunctionStatement returns [CreateFunctionStatement 
expr]
   K_RETURNS
   rt=comparatorType
   (
-  (  { language = "CLASS"; } cls = STRING_LITERAL { bodyOrClassName = $cls.text; } )
+  ( K_USING cls = STRING_LITERAL { bodyOrClassName = $cls.text; } )
 | ( K_LANGUAGE l = IDENT { language=$l.text; } K_AS
 (
   ( body = STRING_LITERAL

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aad152d8/test/unit/org/apache/cassandra/cql3/UFTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UFTest.java 
b/test/unit/org/apache/cassandra/cql3/UFTest.java
index 304ee04..ec494a6 100644
--- a/test/unit/org/apache/cassandra/cql3/UFTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UFTest.java
@@ -48,7 +48,7 @@ public class UFTest extends CQLTester
 {
 createTable("CREATE TABLE %s (key int primary key, val double)"); // not used, but required by CQLTester
 
-execute("create function foo::cf ( input double ) returns double 'org.apache.cassandra.cql3.UFTest#sin'");
+execute("create function foo::cf ( input double ) returns double using 'org.apache.cassandra.cql3.UFTest#sin'");
 execute("drop function foo::cf");
 }
 
@@ -57,8 +57,8 @@ public class UFTest extends CQLTester
 {
 createTable("CREATE TABLE %s (key int primary key, val double)"); // not used, but required by CQLTester
 
-execute("create function foo::cff ( input double ) returns double 'org.apache.cassandra.cql3.UFTest#sin'");
-execute("create function foo::cff ( input double ) returns double 'org.apache.cassandra.cql3.UFTest#sin'");
+execute("create function foo::cff ( input double ) returns double using 'org.apache.cassandra.cql3.UFTest#sin'");
+execute("create function foo::cff ( input double ) returns double using 'org.apache.cassandra.cql3.UFTest#sin'");
 }
 
 
@@ -67,7 +67,7 @@ public class UFTest extends CQLTester
 {
 createTable("CREATE TABLE %s (key int primary key, val double)"); // not used, but required by CQLTester
 
-execute("create function if not exists foo::cfine ( input double ) returns double 'org.apache.cassandra.cql3.UFTest#sin'");
+execute("create function if not exists foo::cfine ( input double ) returns double using 'org.apache.cassandra.cql3.UFTest#sin'");
 execute("drop function foo::cfine");
 }
 
@@ -76,42 +76,42 @@
 public void ddlCreateFunctionBadClass() throws Throwable
 {
 createTable("CREATE TABLE %s (key int primary key, val double)"); // not used, but required by CQLTester
-execute("create function foo::cff ( input double ) returns double 

[jira] [Resolved] (CASSANDRA-7811) Add USING to CREATE FUNCTION syntax

2014-08-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-7811.
-

Resolution: Fixed
  Assignee: Sylvain Lebresne

Alright, that was trivial enough that I took the liberty to ninja-commit this, 
see commit 
[aad152d|https://github.com/apache/cassandra/commit/aad152d81c3a9fcd25222c9d9cf3e10265607906]
 for reference.

 Add USING to CREATE FUNCTION syntax
 ---

 Key: CASSANDRA-7811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7811
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 3.0


 The current syntax to create a function using a class is:
 {noformat}
 CREATE FUNCTION f() RETURNS int 'com.my.class#myMethod'
 {noformat}
 This is a minor detail but the absence of a keyword between the return type 
 and the class/method string bugs me. I'm submitting that we change this to
 {noformat}
 CREATE FUNCTION f() RETURNS int USING 'com.my.class#myMethod'
 {noformat}
 which would also be more consistent with the {{CREATE TRIGGER}} syntax. 





[jira] [Updated] (CASSANDRA-7145) FileNotFoundException during compaction

2014-08-22 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-7145:
---

Attachment: 0001-avoid-marking-compacted-sstables-as-compacting.patch

If we have a situation where this happens (in sequence):

# We ask LeveledManifest for a new CompactionCandidate
# LCS returns a CompactionCandidate containing sstables marked as compacting (a 
bug)
# The compaction that held one of the sstables we marked in #2 finishes and 
removes the files that were included in the compaction
# We successfully mark the compacted sstable as compacting (it is no longer 
marked as compacting in the View)
# FileNotFoundException once we start trying to compact

Attached patch 
* removes a case in LCS where we could return compacting sstables in a 
CompactionCandidate
* makes sure we can't mark compacted sstables as compacting

It would be much appreciated if anyone who can reproduce this could try with 
the attached patch to see if the problem goes away.
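The race above can be sketched in a few lines. This is an editorial illustration only, not Cassandra's actual code: the class and method names (DataTracker, mark_compacting, finish_compaction) are invented stand-ins, and the fix corresponds to the extra "is it still live?" check described in the patch summary.

```python
import threading

class DataTracker:
    """Toy tracker of live sstables and which of them are being compacted."""
    def __init__(self, live):
        self.lock = threading.Lock()
        self.live = set(live)        # sstables currently in the view
        self.compacting = set()      # subset of live under compaction

    def mark_compacting(self, candidates):
        """Atomically mark candidates as compacting.

        Fails if any candidate is already compacting, *or* (the fix) if it
        is no longer live, i.e. it has already been compacted away.
        """
        with self.lock:
            candidates = set(candidates)
            if candidates & self.compacting:
                return False          # another compaction holds them
            if not candidates <= self.live:
                return False          # stale candidate: file is gone
            self.compacting |= candidates
            return True

    def finish_compaction(self, inputs, outputs):
        with self.lock:
            self.compacting -= set(inputs)
            self.live -= set(inputs)
            self.live |= set(outputs)

tracker = DataTracker(live=["a", "b", "c"])
assert tracker.mark_compacting(["a", "b"])
tracker.finish_compaction(["a", "b"], ["ab"])
# Without the liveness check, re-marking "a" would succeed, and a later
# attempt to open its data file would throw FileNotFoundException.
assert not tracker.mark_compacting(["a"])
```

Under this model, step 4 of the sequence (marking an already-compacted sstable) is rejected instead of silently succeeding.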

 FileNotFoundException during compaction
 ---

 Key: CASSANDRA-7145
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7145
 Project: Cassandra
  Issue Type: Bug
 Environment: CentOS 6.3, Datastax Enterprise 4.0.1 (Cassandra 2.0.5), 
 Java 1.7.0_55
Reporter: PJ
Assignee: Marcus Eriksson
 Fix For: 2.0.10

 Attachments: 
 0001-avoid-marking-compacted-sstables-as-compacting.patch, compaction - 
 FileNotFoundException.txt, repair - RuntimeException.txt, startup - 
 AssertionError.txt


 I can't finish any compaction because my nodes always throw a 
 FileNotFoundException. I've already tried the following but nothing helped:
 1. nodetool flush
 2. nodetool repair (ends with RuntimeException; see attachment)
 3. node restart (via dse cassandra-stop)
 Whenever I restart the nodes, another type of exception is logged (see 
 attachment) somewhere near the end of startup process. This particular 
 exception doesn't seem to be critical because the nodes still manage to 
 finish the startup and become online.
 I don't have specific steps to reproduce the problem that I'm experiencing 
 with compaction and repair. I'm in the middle of migrating 4.8 billion rows 
 from MySQL via SSTableLoader. 
 Some things that may or may not be relevant:
 1. I didn't drop and recreate the keyspace (so probably not related to 
 CASSANDRA-4857)
 2. I do the bulk-loading in batches of 1 to 20 million rows. When a batch 
 reaches 100% total progress (i.e. starts to build the secondary index), I kill 
 the sstableloader process and cancel the index build
 3. I restart the nodes occasionally. It's possible that there is an on-going 
 compaction during one of those restarts.
 Related StackOverflow question (mine): 
 http://stackoverflow.com/questions/23435847/filenotfoundexception-during-compaction





[jira] [Commented] (CASSANDRA-7159) sstablemetadata command should print some more stuff

2014-08-22 Thread Vladislav Sinjavin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106710#comment-14106710
 ] 

Vladislav Sinjavin commented on CASSANDRA-7159:
---

Hi Sylvain!

Thanks for your quick response.

About the min/max column names: with the help of the unit tests for the 
SSTableMetadataViewer I found out that there is a ByteBufferUtil class, and it 
looks like it can correctly deserialize column names: 
ByteBufferUtil.string(buffer). What do you mean by using the comparator of 
the sstable?

About the min/max token: thanks for your help. I see these fields in the SSTable 
class as you said, but I can't see an easy way to get an instance of this object 
just with the help of the *.db file. 
Probably we can get an instance of the ColumnFamilyStore 
(Keyspace.open(KEYSPACE1).getColumnFamilyStore("Counter1")) and then ... but I 
think it's not the correct way. Please, could you advise?

Thanks in advance!

 sstablemetadata command should print some more stuff
 

 Key: CASSANDRA-7159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7159
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Vladislav Sinjavin
Priority: Trivial
  Labels: lhf

 It would be nice if the sstablemetadata command printed out some more of the 
 stuff we track.  Like the Min/Max column names and the min/max token in the 
 file.





[jira] [Commented] (CASSANDRA-7819) In progress compactions should not prevent deletion of stale sstables

2014-08-22 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106722#comment-14106722
 ] 

Marcus Eriksson commented on CASSANDRA-7819:


Quick first comment:

bq. since compaction is single threaded 

In 2.0 we still have ParallelCompactionIterable - though I have not checked yet 
whether its use of CompactionController could be dangerous here.

Will review properly soon.

 In progress compactions should not prevent deletion of stale sstables
 -

 Key: CASSANDRA-7819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7819
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
  Labels: compaction
 Fix For: 2.0.10

 Attachments: 7819.txt


 Compactions retain references to potentially many sstables that existed when 
 they were started but that are now obsolete; many concurrent compactions can 
 compound this dramatically, and with very large files in size tiered 
 compaction it is possible to inflate disk utilisation dramatically beyond 
 what is necessary.
 I propose, during compaction, periodically checking which sstables are 
 obsolete and simply replacing each with the sstable that replaced it. These 
 sstables are by definition only used for lookup: since we are in the process 
 of obsoleting the sstables we're compacting, they're only used to reference 
 overlapping ranges which may be covered by tombstones.
 The simplest solution might even be to simply detect obsoletion and recalculate 
 our overlapping tree afresh. This is a pretty quick operation in the grand 
 scheme of things, certainly wrt compaction, so nothing is lost by doing this at 
 the rate we obsolete sstables.
 See CASSANDRA-7139 for original discussion of the problem.
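The reference-swapping idea above can be sketched as follows. This is an illustrative model only (SSTable, replaced_by and refresh_overlaps are invented names, not Cassandra's data structures): a long-running compaction periodically refreshes its set of overlapping sstables, following each obsoleted file to its replacement so the old file's disk space can be reclaimed before the compaction finishes.

```python
class SSTable:
    """Minimal stand-in for an sstable reference."""
    def __init__(self, name):
        self.name = name
        self.replaced_by = None   # set once this sstable is obsoleted

def refresh_overlaps(overlaps):
    """Replace each obsolete sstable with its live replacement."""
    fresh = []
    for s in overlaps:
        while s.replaced_by is not None:   # follow the replacement chain
            s = s.replaced_by
        fresh.append(s)
    return fresh

old = SSTable("old-Data.db")
new = SSTable("new-Data.db")
overlaps = [old, SSTable("other-Data.db")]
old.replaced_by = new                      # another compaction obsoleted it
overlaps = refresh_overlaps(overlaps)
assert [s.name for s in overlaps] == ["new-Data.db", "other-Data.db"]
```

Once the stale reference is dropped, nothing pins the old file and it can be deleted immediately rather than at the end of the compaction.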





[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Zach Aller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106807#comment-14106807
 ] 

Zach Aller commented on CASSANDRA-7042:
---

Our compaction limit is set to 0 and compaction memory to 768MB. Concurrent 
compactors is set to the default, multithreading is off, and compaction preheat 
cache is also off. I am hoping to get a debug log today 
with this (log4j.logger.org.apache.cassandra.db.compaction=DEBUG), so we will 
see what that reveals.

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt


 Cassandra will constantly eat disk space; we're not sure what's causing it. The 
 only thing that seems to fix it is a restart of Cassandra. This happens about 
 every 3-5 hrs: we will grow from about 350GB to 650GB with no end in sight. 
 Once we restart Cassandra it usually all clears itself up and disks return to 
 normal for a while; then something triggers it and it starts climbing again. 
 Sometimes when we restart, compactions pending skyrocket, and if we restart a 
 second time the compactions pending drop off back to a normal level. One other 
 thing to note is that the space is not freed until Cassandra starts back up, 
 not when it is shut down.
 I will get a clean log of before and after restarting next time it happens 
 and post it.
 Here is a common ERROR in our logs that might be related
 {noformat}
 ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
 (line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.&lt;init&gt;(SSTableScanner.java:67)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:194)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:258)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at java.io.RandomAccessFile.open(Native Method)
  at java.io.RandomAccessFile.&lt;init&gt;(Unknown Source)
  at 
  org.apache.cassandra.io.util.RandomAccessReader.&lt;init&gt;(RandomAccessReader.java:58)
  at 
  org.apache.cassandra.io.util.ThrottledReader.&lt;init&gt;(ThrottledReader.java:35)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:49)
 ... 17 more
 {noformat}





[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106815#comment-14106815
 ] 

Digant Modha commented on CASSANDRA-7817:
-

Even if it's a row level deletion, the code still has to read all the 
cells/columns?  Does that mean that the row level deletion optimization 
should not, or does not, play a role in this case?  Thanks.



[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106817#comment-14106817
 ] 

Sylvain Lebresne commented on CASSANDRA-7817:
-

bq.  Does that mean that the row level deletion optimization should/does not 
play a role in this case?

You might want to clarify which row level deletion optimization you're 
talking about exactly, because I'm not really sure which one you're referring to.



[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106822#comment-14106822
 ] 

Digant Modha commented on CASSANDRA-7817:
-

I mean full row deletion - delete using partition key only.



[jira] [Commented] (CASSANDRA-7159) sstablemetadata command should print some more stuff

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106824#comment-14106824
 ] 

Sylvain Lebresne commented on CASSANDRA-7159:
-

bq. there is a ByteBufferUtil class and it looks like it can correctly 
deserialize column names

You might want to make sure you understand what the column names are in that 
context. What min/maxColumnNames in the SSTableMetadata stores is the min 
and max values for each component of the *internal column* (not CQL column) 
names contained in the sstable (I would argue that min/maxColumnNames is bad 
naming in that case, and we should probably use some better designation in 
the output of sstablemetadata that this issue will add). Those values are not 
at all guaranteed to be strings, so ByteBufferUtil.string() won't work in 
general. The comparator, however, has the type for each of those components.

bq. Probably we can get an instance of the ColumnFamilyStore 
(Keyspace.open(KEYSPACE1).getColumnFamilyStore(Counter1))

Let's not get there. As I mentioned, those values are stored in the Summary 
file, so let's just read them from there.
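The point about the comparator can be illustrated with a small sketch. This is editorial, not Cassandra code: decode_components and the per-component decoder list are invented stand-ins for what the comparator provides, namely one type per component of the internal column name.

```python
import struct

def decode_components(raw_components, decoders):
    """Decode each raw component with the type the comparator declares."""
    return [d(raw) for raw, d in zip(raw_components, decoders)]

# A clustering of (int64 timestamp, text): the int64 component is not
# valid printable text, so a blanket string() call would mangle it.
raw = [struct.pack(">q", 1408694833000), b"BK_2"]
decoders = [lambda b: struct.unpack(">q", b)[0],
            lambda b: b.decode("utf-8")]
assert decode_components(raw, decoders) == [1408694833000, "BK_2"]
```

The takeaway is that min/max component values only render correctly when each component is decoded with its own declared type, which is exactly the information the comparator holds.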



[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106831#comment-14106831
 ] 

Sylvain Lebresne commented on CASSANDRA-7817:
-

If your question is "is a row tombstone not used?", then yes, a row tombstone is 
used. But if you have a large number of cells/columns for the row in one 
sstable, have the full row tombstone in another sstable, and you do a read, the 
code has no way to know a priori that the row tombstone deletes all the 
cells/columns: it has to read all the cells to check whether or not they are 
deleted by the row tombstone. Of course, as soon as both sstables are compacted 
together, the cells will be physically removed (since they are shadowed by the 
row tombstone) and subsequent reads will not trigger the tombstone warning.
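A toy model of that read path may help. Nothing here is from Cassandra's code base (read_partition and its parameters are invented for illustration): the point is simply that the merge must visit every cell to test it against the partition-level tombstone, and each shadowed cell counts toward the failure threshold.

```python
def read_partition(sstables, row_tombstone_ts, failure_threshold):
    """Merge cells from all sstables, dropping those shadowed by the
    partition-level tombstone; fail once too many dead cells are read."""
    scanned_dead = 0
    live = []
    for cells in sstables:
        for name, value, ts in cells:
            if ts <= row_tombstone_ts:       # shadowed: counts as a tombstoned read
                scanned_dead += 1
                if scanned_dead > failure_threshold:
                    raise RuntimeError("TombstoneOverwhelmingException")
            else:
                live.append((name, value))
    return live

# 470 cells written before the DELETE, as in the report's reproduction:
cells = [[("c%d" % i, "v", 100) for i in range(470)]]
try:
    read_partition(cells, row_tombstone_ts=200, failure_threshold=100)
    raise AssertionError("expected failure")
except RuntimeError:
    pass  # once compaction purges the shadowed cells, the read succeeds
```

This matches the observed behavior: the query fails until the sstable holding the cells and the sstable holding the tombstone are compacted together.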

 when entire row is deleted, the records in the row seem to counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in development cluster, but was able to reproduce it in a 
 single node setup.  In development cluster I had more than 52,000 records and 
 used default values for tombstone threshold.
 For testing purpose, I used lower numbers for thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 ---
470
 (1 rows)
 cqlsh:cstestks delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
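A hedged sketch of the accounting behind this failure (illustrative Python, not Cassandra's actual classes): every tombstoned cell scanned by the slice read counts toward the failure threshold, including cells shadowed by the partition-level deletion above:

```python
# Hypothetical sketch of the per-read tombstone accounting that trips
# TombstoneOverwhelmingException. Names are illustrative, not Cassandra's
# actual API; thresholds mirror the test values used in this report.
TOMBSTONE_FAILURE_THRESHOLD = 1000


class TombstoneOverwhelmingException(Exception):
    pass


def collect_live_cells(cells, partition_deleted=False):
    """cells: iterable of (name, value, is_deleted) triples.

    Counts every deleted cell toward the failure threshold -- and, per the
    behaviour reported here, also counts cells shadowed by a partition-level
    deletion, even though none of them will be returned to the client.
    """
    live, tombstones = [], 0
    for name, value, is_deleted in cells:
        if is_deleted or partition_deleted:
            tombstones += 1
            if tombstones > TOMBSTONE_FAILURE_THRESHOLD:
                raise TombstoneOverwhelmingException(
                    "read %d tombstoned cells" % tombstones)
        else:
            live.append((name, value))
    return live, tombstones
```

A partition that read fine before the delete can fail immediately after it, once its cells are all shadowed.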



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Digant Modha (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106834#comment-14106834
 ] 

Digant Modha commented on CASSANDRA-7817:
-

Thanks,  that answers my question.

 when entire row is deleted, the records in the row seem to counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106836#comment-14106836
 ] 

Benedict commented on CASSANDRA-7042:
-

The default concurrent_compactors is most likely too high for machines with 
many CPUs and modest disk throughput. What disk layout do you have?

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt


 Cassandra will constantly eat disk space; we're not sure what's causing it, and the only thing that seems to fix it is a restart of Cassandra. This happens about every 3-5 hrs: we grow from about 350GB to 650GB with no end in sight. Once we restart Cassandra it usually all clears itself up and disks return to normal for a while, then something triggers it and it starts climbing again. Sometimes when we restart, compactions pending skyrocket, and if we restart a second time the compactions pending drop back to a normal level. One other thing to note: the space is not freed until Cassandra starts back up, not when it is shut down.
 I will get a clean log from before and after restarting next time it happens and post it.
 Here is a common ERROR in our logs that might be related
 {noformat}
 ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
 (line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
 at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:67)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
 at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:194)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:258)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
 at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.FileNotFoundException: 
 /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
 (No such file or directory)
 at java.io.RandomAccessFile.open(Native Method)
 at java.io.RandomAccessFile.init(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.init(RandomAccessReader.java:58)
 at 
 org.apache.cassandra.io.util.ThrottledReader.init(ThrottledReader.java:35)
 at 
 org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:49)
 ... 17 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7816) Updated the 4.2.6. EVENT section in the binary protocol specification

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106837#comment-14106837
 ] 

Sylvain Lebresne commented on CASSANDRA-7816:
-

Indeed, MOVED_NODE is missing, thanks. Regarding the same event being sent multiple times: have you actually experienced that? Because that would be something worth tracking and fixing (don't get me wrong, I think it's a very good idea for clients not to crash if the same event is sent multiple times, but we should avoid doing it server-side).

 Updated the 4.2.6. EVENT section in the binary protocol specification
 ---

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Reporter: Michael Penick
Priority: Trivial
 Attachments: trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.
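The defensive client behaviour discussed here can be sketched as an idempotent event handler (illustrative Python; the event names follow the protocol spec, the handler itself is hypothetical):

```python
# Sketch of a client-side TOPOLOGY_CHANGE handler that tolerates the same
# event being delivered more than once. Event kinds per the binary protocol
# spec: NEW_NODE, REMOVED_NODE and (the newly documented) MOVED_NODE.
def apply_topology_event(nodes, event):
    """nodes: set of node addresses; event: (kind, address) tuple.

    Applying the same event twice leaves the set unchanged, so duplicate
    deliveries are harmless.
    """
    kind, addr = event
    if kind == "NEW_NODE":
        nodes.add(addr)        # set.add is naturally idempotent
    elif kind == "REMOVED_NODE":
        nodes.discard(addr)    # discard: no error if already removed
    elif kind == "MOVED_NODE":
        nodes.add(addr)        # token moved; membership is unchanged
    return nodes
```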



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Zach Aller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106842#comment-14106842
 ] 

Zach Aller commented on CASSANDRA-7042:
---

We are using Amazon EC2 i2.xlarge, so one 800GB SSD.

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7343) CAS contention back off time should be configurable

2014-08-22 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7343:
-

Reviewer:   (was: Aleksey Yeschenko)

 CAS contention back off time should be configurable 
 

 Key: CASSANDRA-7343
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7343
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.10

 Attachments: cas20-7343.diff, trunk-7343.diff


 We are currently making the contention call sleep for up to 100 millis. This 
 is not ideal for all situations, especially if you are doing LOCAL_SERIAL. 
 This value should be configurable based on CL.
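The current hard-coded behaviour can be sketched as follows (illustrative Python; the ticket proposes making the 100 ms cap configurable, e.g. per consistency level, rather than fixed):

```python
import random
import time


# Sketch of a configurable CAS contention back-off. Today the cap is a
# hard-coded 100 ms; the proposal is to let callers tune it (e.g. a lower
# cap for LOCAL_SERIAL, where round-trips are cheap).
def contention_backoff(max_backoff_ms=100):
    """Sleep a uniformly random duration in [0, max_backoff_ms] milliseconds
    and return the milliseconds actually slept (handy for testing)."""
    ms = random.uniform(0, max_backoff_ms)
    time.sleep(ms / 1000.0)
    return ms
```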



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106846#comment-14106846
 ] 

Benedict commented on CASSANDRA-7042:
-

Ok, so that box has a default of 32 concurrent compactors on 2.0 (on 2.1 it 
will default to 2). This is almost certainly too many, and I think it is 
highly likely to be your problem. It would be interesting to try the patch 
I have posted in CASSANDRA-7819 to see if it fixes it; however, if you want an 
immediate fix, lowering your concurrent compactors to <= 4 is probably best.
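For reference, a cassandra.yaml fragment matching the suggestion above (the key is real; the value of 4 is the one suggested here, and a node restart is needed for it to take effect):

```yaml
# cassandra.yaml -- cap background compaction parallelism.
# The default on a box like this under 2.0 is 32; 4 is the value
# suggested above for modest disk throughput.
concurrent_compactors: 4
```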

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Improve error message from 7499

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e4d5edae7 -> fd8f5b9f7


Improve error message from 7499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd8f5b9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd8f5b9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd8f5b9f

Branch: refs/heads/cassandra-2.0
Commit: fd8f5b9f7e88bebf180c6142f772ba2808bc8b01
Parents: e4d5eda
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:02:43 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:03:38 2014 +0200

--
 src/java/org/apache/cassandra/cql3/statements/BatchStatement.java  | 2 +-
 .../apache/cassandra/cql3/statements/ModificationStatement.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index cbe3016..8a9a8f0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -127,7 +127,7 @@ public class BatchStatement implements CQLStatement, MeasurableForPreparedCache
             statement.validate(state);
 
             if (hasConditions && statement.requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 99dd9d9..165dbc1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -157,7 +157,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableForPreparedCache
                 throw new InvalidRequestException("Cannot provide custom timestamp for conditional update");
 
             if (requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
 
 if (isCounter())



[jira] [Resolved] (CASSANDRA-7441) Deleting an element from a list in UPDATE does not work with IF condition

2014-08-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-7441.
-

Resolution: Duplicate

This is actually a duplicate of CASSANDRA-7499.

 Deleting an element from a list in UPDATE does not work with IF condition
 -

 Key: CASSANDRA-7441
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7441
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Arvind Nithrakashyap
Assignee: Sylvain Lebresne
Priority: Minor

 When issuing a list deletion with an IF condition, that does not seem to work 
 even when it says that the change was applied correctly. 
 Here's a reproducible test case:
 {code}
 cqlsh:casstest> create table foo(id text, values list<int>, condition int, primary key(id));
 cqlsh:casstest> insert into foo(id, values, condition) values ('a', [1,2,3], 0);
 cqlsh:casstest> select * from foo;
  id | condition | values
 ----+-----------+-----------
   a |         0 | [1, 2, 3]
 (1 rows)
 cqlsh:casstest> update foo set values = values - [3] where id = 'a' IF condition = 0;
  [applied]
 -----------
      True
 cqlsh:casstest> select * from foo;
  id | condition | values
 ----+-----------+-----------
   a |         0 | [1, 2, 3]
 (1 rows)
 cqlsh:casstest> update foo set values = values - [3] where id = 'a';
 cqlsh:casstest> select * from foo;
  id | condition | values
 ----+-----------+--------
   a |         0 | [1, 2]
 (1 rows)
 {code}
 Addition seems to work though
 {code}
 cqlsh:casstest> update foo set values = values + [3] where id = 'a' IF condition = 0;
  [applied]
 -----------
      True
 cqlsh:casstest> select * from foo;
  id | condition | values
 ----+-----------+-----------
   a |         0 | [1, 2, 3]
 (1 rows)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7819) In progress compactions should not prevent deletion of stale sstables

2014-08-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106853#comment-14106853
 ] 

Benedict commented on CASSANDRA-7819:
-

Hmm. Yes, it looks like this _isn't_ safe with parallel compaction. However we 
can make it safe by simply making the variable volatile.

 In progress compactions should not prevent deletion of stale sstables
 -

 Key: CASSANDRA-7819
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7819
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
  Labels: compaction
 Fix For: 2.0.10

 Attachments: 7819.txt


 Compactions retain references to potentially many sstables that existed when 
 they were started but that are now obsolete; many concurrent compactions can 
 compound this dramatically, and with very large files in size tiered 
 compaction it is possible to inflate disk utilisation dramatically beyond 
 what is necessary.
 I propose, during compaction, periodically checking which sstables are 
 obsolete and simply replacing each with the sstable that replaced it. These 
 sstables are by definition only used for lookup: since we are in the process 
 of obsoleting the sstables we're compacting, they're only used to reference 
 overlapping ranges which may be covered by tombstones.
 A simpler solution might even be to detect obsoletion and recalculate 
 our overlapping tree afresh. This is a pretty quick operation in the grand 
 scheme of things, certainly wrt compaction, so nothing is lost by doing this 
 at the rate we obsolete sstables.
 See CASSANDRA-7139 for original discussion of the problem.
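The proposal can be sketched as follows (illustrative Python; names are hypothetical): periodically re-resolve each referenced sstable to its live replacement, so obsolete files stop being pinned and can be deleted mid-compaction:

```python
# Illustrative sketch (not Cassandra's code): a compaction periodically
# swaps its references to obsoleted sstables for their replacements, so
# the old files can be deleted before the compaction finishes.
def refresh_overlaps(overlapping, replacements):
    """overlapping: set of sstable ids currently held for overlap lookup.
    replacements: dict mapping an obsolete sstable id to the id of the
    sstable that replaced it (which may itself have been replaced).

    Returns a new set in which every id is a live sstable.
    """
    refreshed = set()
    for sst in overlapping:
        # Follow the replacement chain to the live descendant.
        while sst in replacements:
            sst = replacements[sst]
        refreshed.add(sst)
    return refreshed
```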



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-08-22 Thread Zach Aller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14106866#comment-14106866
 ] 

Zach Aller commented on CASSANDRA-7042:
---

Ok, I will still probably try to grab a debug log, and after that try lowering 
concurrent compactors to see if it helps the issue.

 Disk space growth until restart
 ---

 Key: CASSANDRA-7042
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04
 Sun Java 7
 Cassandra 2.0.6
Reporter: Zach Aller
 Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
 after.log, before.log, tabledump_after_restart.txt, 
 tabledump_before_restart.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Update java driver to 2.0.5 (for hadoop)

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 fd8f5b9f7 -> 200b80288


Update java driver to 2.0.5 (for hadoop)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200b8028
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200b8028
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200b8028

Branch: refs/heads/cassandra-2.0
Commit: 200b802884041c3d154b61e5f8379837cd929b2e
Parents: fd8f5b9
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:11:44 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:11:44 2014 +0200

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/200b8028/build.xml
--
diff --git a/build.xml b/build.xml
index 611345d..dd59bd2 100644
--- a/build.xml
+++ b/build.xml
@@ -387,7 +387,7 @@
   <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
   <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
   <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
-  <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+  <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
   <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
 </dependencyManagement>
 <developer id="alakshman" name="Avinash Lakshman"/>



[jira] [Reopened] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Jonathan Halliday (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Halliday reopened CASSANDRA-7810:
--


sorry guys, still seeing this intermittently even after an env cleanup. I 
suspect it's linked to the merging of sstables. If I flush the data and 
tombstones together to a single sstable and compact, then it's fine. If I 
flush the data and the tombstones separately, so that I have two sstables, 
then compact them, it goes wrong. Any chance someone could try that modified 
process and see if it's reproducible? thx.

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py


 # single node environment
 CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 use test;
 create table foo (a int, b int, primary key(a,b));
 alter table foo with gc_grace_seconds = 0;
 insert into foo (a,b) values (1,2);
 select * from foo;
 -- one row returned. so far, so good.
 delete from foo where a=1 and b=2;
 select * from foo;
 -- 0 rows. still rainbows and kittens.
 bin/nodetool flush;
 bin/nodetool compact;
 select * from foo;
  a | b
 ---+---
  1 | 2
 (1 rows)
 gahhh.
 looks like the tombstones were considered obsolete and thrown away before 
 being applied during the compaction?  gc_grace just means the interval after 
 which they won't be available to remote nodes' repair - they should still 
 apply locally regardless (and do, correctly, in 2.0.9)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
build.xml
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8da13437
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8da13437
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8da13437

Branch: refs/heads/cassandra-2.1.0
Commit: 8da134377a04614a5343ccf3eb211e8c48dd90fa
Parents: a0923db 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:15:50 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:15:50 2014 +0200

--
 build.xml |   2 +-
 .../cassandra/cql3/statements/BatchStatement.java |   2 +-
 .../cql3/statements/ModificationStatement.java|   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 5 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/build.xml
--
diff --cc build.xml
index ae77274,dd59bd2..16ff03b
--- a/build.xml
+++ b/build.xml
@@@ -395,12 -384,10 +395,12 @@@
        <dependency groupId="org.apache.cassandra" artifactId="cassandra-thrift" version="${version}" />
        <dependency groupId="com.yammer.metrics" artifactId="metrics-core" version="2.2.0" />
        <dependency groupId="com.addthis.metrics" artifactId="reporter-config" version="2.1.0" />
  -     <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
        <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
  -     <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
  +     <dependency groupId="io.airlift" artifactId="airline" version="0.6" />
  +     <dependency groupId="io.netty" artifactId="netty-all" version="4.0.20.Final" />
  +     <dependency groupId="com.google.code.findbugs" artifactId="jsr305" version="2.0.2" />
  +     <dependency groupId="com.clearspring.analytics" artifactId="stream" version="2.5.2" />
-       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
        <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
      </dependencyManagement>
      <developer id="alakshman" name="Avinash Lakshman"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 90be914,8a9a8f0..49617ee
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@@ -124,35 -124,10 +124,35 @@@ public class BatchStatement implements 
                  if (timestampSet && statement.isTimestampSet())
                      throw new InvalidRequestException("Timestamp must be set either on BATCH or individual statements");
  
 -                statement.validate(state);
 +                if (type == Type.COUNTER && !statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include non-counter statement in a counter batch");
 +
 +                if (type == Type.LOGGED && statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include a counter statement in a logged batch");
 +
 +                if (statement.isCounter())
 +                    hasCounters = true;
 +                else
 +                    hasNonCounters = true;
 +            }
 +
 +            if (hasCounters && hasNonCounters)
 +                throw new InvalidRequestException("Counter and non-counter mutations cannot exist in the same batch");
  
 -            if (hasConditions && statement.requiresRead())
 -                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
 +            if (hasConditions)
 +            {
 +                String ksName = null;
 +                String cfName = null;
 +                for (ModificationStatement stmt : statements)
 +                {
 +                    if (ksName != null && (!stmt.keyspace().equals(ksName) || !stmt.columnFamily().equals(cfName)))
 +                        throw new InvalidRequestException("Batch with conditions cannot span multiple tables");
 +                    ksName = stmt.keyspace();
 +                    cfName = stmt.columnFamily();
 +
 +                    if (stmt.requiresRead())
-                         throw new 

[2/3] git commit: Update java driver to 2.0.5 (for hadoop)

2014-08-22 Thread slebresne
Update java driver to 2.0.5 (for hadoop)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200b8028
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200b8028
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200b8028

Branch: refs/heads/cassandra-2.1.0
Commit: 200b802884041c3d154b61e5f8379837cd929b2e
Parents: fd8f5b9
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:11:44 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:11:44 2014 +0200

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/200b8028/build.xml
--
diff --git a/build.xml b/build.xml
index 611345d..dd59bd2 100644
--- a/build.xml
+++ b/build.xml
@@ -387,7 +387,7 @@
       <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
       <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
       <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
-      <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+      <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
       <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
     </dependencyManagement>
     <developer id="alakshman" name="Avinash Lakshman"/>



[1/3] git commit: Improve error message from 7499

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 a0923dbc0 -> 8da134377


Improve error message from 7499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd8f5b9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd8f5b9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd8f5b9f

Branch: refs/heads/cassandra-2.1.0
Commit: fd8f5b9f7e88bebf180c6142f772ba2808bc8b01
Parents: e4d5eda
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:02:43 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:03:38 2014 +0200

--
 src/java/org/apache/cassandra/cql3/statements/BatchStatement.java  | 2 +-
 .../apache/cassandra/cql3/statements/ModificationStatement.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index cbe3016..8a9a8f0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -127,7 +127,7 @@ public class BatchStatement implements CQLStatement, MeasurableForPreparedCache
             statement.validate(state);
 
             if (hasConditions && statement.requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 99dd9d9..165dbc1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -157,7 +157,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
                 throw new InvalidRequestException("Cannot provide custom timestamp for conditional update");
 
             if (requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
 
         if (isCounter())



[1/4] git commit: Improve error message from 7499

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 854aab79b -> 94f1107ec


Improve error message from 7499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd8f5b9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd8f5b9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd8f5b9f

Branch: refs/heads/cassandra-2.1
Commit: fd8f5b9f7e88bebf180c6142f772ba2808bc8b01
Parents: e4d5eda
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:02:43 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:03:38 2014 +0200

--
 src/java/org/apache/cassandra/cql3/statements/BatchStatement.java  | 2 +-
 .../apache/cassandra/cql3/statements/ModificationStatement.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index cbe3016..8a9a8f0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -127,7 +127,7 @@ public class BatchStatement implements CQLStatement, MeasurableForPreparedCache
             statement.validate(state);
 
             if (hasConditions && statement.requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 99dd9d9..165dbc1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -157,7 +157,7 @@ public abstract class ModificationStatement implements CQLStatement, MeasurableF
                 throw new InvalidRequestException("Cannot provide custom timestamp for conditional update");
 
             if (requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
 
         if (isCounter())



[4/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.1.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94f1107e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94f1107e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94f1107e

Branch: refs/heads/cassandra-2.1
Commit: 94f1107eca2f5965d6fc55ccf718a23a2f4fe2b7
Parents: 854aab7 8da1343
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:16:46 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:16:46 2014 +0200

--
 build.xml |   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 3 files changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94f1107e/build.xml
--
diff --cc build.xml
index d747bbc,16ff03b..e678d88
--- a/build.xml
+++ b/build.xml
@@@ -399,16 -400,12 +399,16 @@@
      <dependency groupId="io.netty" artifactId="netty-all" version="4.0.20.Final" />
      <dependency groupId="com.google.code.findbugs" artifactId="jsr305" version="2.0.2" />
      <dependency groupId="com.clearspring.analytics" artifactId="stream" version="2.5.2" />
-     <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+     <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
      <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
 +    <dependency groupId="net.ju-n.compile-command-annotations" artifactId="compile-command-annotations" version="1.2.0" />
    </dependencyManagement>
    <developer id="alakshman" name="Avinash Lakshman"/>
 -  <developer id="antelder" name="Anthony Elder"/>
 +  <developer id="aleksey" name="Aleksey Yeschenko"/>
 +  <developer id="amorton" name="Aaron Morton"/>
 +  <developer id="benedict" name="Benedict Elliott Smith"/>
    <developer id="brandonwilliams" name="Brandon Williams"/>
 +  <developer id="dbrosius" name="David Brosius"/>
    <developer id="eevans" name="Eric Evans"/>
    <developer id="gdusbabek" name="Gary Dusbabek"/>
    <developer id="goffinet" name="Chris Goffinet"/>



[2/5] git commit: Update java driver to 2.0.5 (for hadoop)

2014-08-22 Thread slebresne
Update java driver to 2.0.5 (for hadoop)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200b8028
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200b8028
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200b8028

Branch: refs/heads/trunk
Commit: 200b802884041c3d154b61e5f8379837cd929b2e
Parents: fd8f5b9
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:11:44 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:11:44 2014 +0200

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/200b8028/build.xml
--
diff --git a/build.xml b/build.xml
index 611345d..dd59bd2 100644
--- a/build.xml
+++ b/build.xml
@@ -387,7 +387,7 @@
   dependency groupId=edu.stanford.ppl artifactId=snaptree 
version=0.1 /
   dependency groupId=org.mindrot artifactId=jbcrypt 
version=0.3m /
   dependency groupId=io.netty artifactId=netty 
version=3.6.6.Final /
-  dependency groupId=com.datastax.cassandra 
artifactId=cassandra-driver-core version=2.0.4 /
+  dependency groupId=com.datastax.cassandra 
artifactId=cassandra-driver-core version=2.0.5 /
   dependency groupId=net.sf.supercsv artifactId=super-csv 
version=2.1.0 /
 /dependencyManagement
 developer id=alakshman name=Avinash Lakshman/



[3/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
build.xml
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8da13437
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8da13437
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8da13437

Branch: refs/heads/cassandra-2.1
Commit: 8da134377a04614a5343ccf3eb211e8c48dd90fa
Parents: a0923db 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:15:50 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:15:50 2014 +0200

--
 build.xml |   2 +-
 .../cassandra/cql3/statements/BatchStatement.java |   2 +-
 .../cql3/statements/ModificationStatement.java|   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 5 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/build.xml
--
diff --cc build.xml
index ae77274,dd59bd2..16ff03b
--- a/build.xml
+++ b/build.xml
@@@ -395,12 -384,10 +395,12 @@@
        <dependency groupId="org.apache.cassandra" artifactId="cassandra-thrift" version="${version}" />
        <dependency groupId="com.yammer.metrics" artifactId="metrics-core" version="2.2.0" />
        <dependency groupId="com.addthis.metrics" artifactId="reporter-config" version="2.1.0" />
  -     <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
        <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
  -     <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
  +     <dependency groupId="io.airlift" artifactId="airline" version="0.6" />
  +     <dependency groupId="io.netty" artifactId="netty-all" version="4.0.20.Final" />
  +     <dependency groupId="com.google.code.findbugs" artifactId="jsr305" version="2.0.2" />
  +     <dependency groupId="com.clearspring.analytics" artifactId="stream" version="2.5.2" />
-       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
        <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
      </dependencyManagement>
      <developer id="alakshman" name="Avinash Lakshman"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 90be914,8a9a8f0..49617ee
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@@ -124,35 -124,10 +124,35 @@@ public class BatchStatement implements 
                  if (timestampSet && statement.isTimestampSet())
                      throw new InvalidRequestException("Timestamp must be set either on BATCH or individual statements");
  
 -                statement.validate(state);
 +                if (type == Type.COUNTER && !statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include non-counter statement in a counter batch");
 +
 +                if (type == Type.LOGGED && statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include a counter statement in a logged batch");
 +
 +                if (statement.isCounter())
 +                    hasCounters = true;
 +                else
 +                    hasNonCounters = true;
 +            }
 +
 +            if (hasCounters && hasNonCounters)
 +                throw new InvalidRequestException("Counter and non-counter mutations cannot exist in the same batch");
  
 -            if (hasConditions && statement.requiresRead())
 -                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
 +            if (hasConditions)
 +            {
 +                String ksName = null;
 +                String cfName = null;
 +                for (ModificationStatement stmt : statements)
 +                {
 +                    if (ksName != null && (!stmt.keyspace().equals(ksName) || !stmt.columnFamily().equals(cfName)))
 +                        throw new InvalidRequestException("Batch with conditions cannot span multiple tables");
 +                    ksName = stmt.keyspace();
 +                    cfName = stmt.columnFamily();
 +
 +                    if (stmt.requiresRead())
-                         throw new 

[4/5] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.1.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94f1107e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94f1107e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94f1107e

Branch: refs/heads/trunk
Commit: 94f1107eca2f5965d6fc55ccf718a23a2f4fe2b7
Parents: 854aab7 8da1343
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:16:46 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:16:46 2014 +0200

--
 build.xml |   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 3 files changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94f1107e/build.xml
--
diff --cc build.xml
index d747bbc,16ff03b..e678d88
--- a/build.xml
+++ b/build.xml
@@@ -399,16 -400,12 +399,16 @@@
      <dependency groupId="io.netty" artifactId="netty-all" version="4.0.20.Final" />
      <dependency groupId="com.google.code.findbugs" artifactId="jsr305" version="2.0.2" />
      <dependency groupId="com.clearspring.analytics" artifactId="stream" version="2.5.2" />
-     <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+     <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
      <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
 +    <dependency groupId="net.ju-n.compile-command-annotations" artifactId="compile-command-annotations" version="1.2.0" />
    </dependencyManagement>
    <developer id="alakshman" name="Avinash Lakshman"/>
 -  <developer id="antelder" name="Anthony Elder"/>
 +  <developer id="aleksey" name="Aleksey Yeschenko"/>
 +  <developer id="amorton" name="Aaron Morton"/>
 +  <developer id="benedict" name="Benedict Elliott Smith"/>
    <developer id="brandonwilliams" name="Brandon Williams"/>
 +  <developer id="dbrosius" name="David Brosius"/>
    <developer id="eevans" name="Eric Evans"/>
    <developer id="gdusbabek" name="Gary Dusbabek"/>
    <developer id="goffinet" name="Chris Goffinet"/>



[3/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1.0

Conflicts:
build.xml
src/java/org/apache/cassandra/cql3/statements/BatchStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8da13437
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8da13437
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8da13437

Branch: refs/heads/trunk
Commit: 8da134377a04614a5343ccf3eb211e8c48dd90fa
Parents: a0923db 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:15:50 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:15:50 2014 +0200

--
 build.xml |   2 +-
 .../cassandra/cql3/statements/BatchStatement.java |   2 +-
 .../cql3/statements/ModificationStatement.java|   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 5 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/build.xml
--
diff --cc build.xml
index ae77274,dd59bd2..16ff03b
--- a/build.xml
+++ b/build.xml
@@@ -395,12 -384,10 +395,12 @@@
        <dependency groupId="org.apache.cassandra" artifactId="cassandra-thrift" version="${version}" />
        <dependency groupId="com.yammer.metrics" artifactId="metrics-core" version="2.2.0" />
        <dependency groupId="com.addthis.metrics" artifactId="reporter-config" version="2.1.0" />
  -     <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
        <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
  -     <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
  +     <dependency groupId="io.airlift" artifactId="airline" version="0.6" />
  +     <dependency groupId="io.netty" artifactId="netty-all" version="4.0.20.Final" />
  +     <dependency groupId="com.google.code.findbugs" artifactId="jsr305" version="2.0.2" />
  +     <dependency groupId="com.clearspring.analytics" artifactId="stream" version="2.5.2" />
-       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+       <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
        <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
      </dependencyManagement>
      <developer id="alakshman" name="Avinash Lakshman"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8da13437/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 90be914,8a9a8f0..49617ee
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@@ -124,35 -124,10 +124,35 @@@ public class BatchStatement implements 
                  if (timestampSet && statement.isTimestampSet())
                      throw new InvalidRequestException("Timestamp must be set either on BATCH or individual statements");
  
 -                statement.validate(state);
 +                if (type == Type.COUNTER && !statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include non-counter statement in a counter batch");
 +
 +                if (type == Type.LOGGED && statement.isCounter())
 +                    throw new InvalidRequestException("Cannot include a counter statement in a logged batch");
 +
 +                if (statement.isCounter())
 +                    hasCounters = true;
 +                else
 +                    hasNonCounters = true;
 +            }
 +
 +            if (hasCounters && hasNonCounters)
 +                throw new InvalidRequestException("Counter and non-counter mutations cannot exist in the same batch");
  
 -            if (hasConditions && statement.requiresRead())
 -                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
 +            if (hasConditions)
 +            {
 +                String ksName = null;
 +                String cfName = null;
 +                for (ModificationStatement stmt : statements)
 +                {
 +                    if (ksName != null && (!stmt.keyspace().equals(ksName) || !stmt.columnFamily().equals(cfName)))
 +                        throw new InvalidRequestException("Batch with conditions cannot span multiple tables");
 +                    ksName = stmt.keyspace();
 +                    cfName = stmt.columnFamily();
 +
 +                    if (stmt.requiresRead())
-                         throw new 

[2/4] git commit: Update java driver to 2.0.5 (for hadoop)

2014-08-22 Thread slebresne
Update java driver to 2.0.5 (for hadoop)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/200b8028
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/200b8028
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/200b8028

Branch: refs/heads/cassandra-2.1
Commit: 200b802884041c3d154b61e5f8379837cd929b2e
Parents: fd8f5b9
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:11:44 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:11:44 2014 +0200

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/200b8028/build.xml
--
diff --git a/build.xml b/build.xml
index 611345d..dd59bd2 100644
--- a/build.xml
+++ b/build.xml
@@ -387,7 +387,7 @@
       <dependency groupId="edu.stanford.ppl" artifactId="snaptree" version="0.1" />
       <dependency groupId="org.mindrot" artifactId="jbcrypt" version="0.3m" />
       <dependency groupId="io.netty" artifactId="netty" version="3.6.6.Final" />
-      <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.4" />
+      <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core" version="2.0.5" />
       <dependency groupId="net.sf.supercsv" artifactId="super-csv" version="2.1.0" />
     </dependencyManagement>
     <developer id="alakshman" name="Avinash Lakshman"/>



[5/5] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-22 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc630832
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc630832
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc630832

Branch: refs/heads/trunk
Commit: bc6308321729ac613b3db912782ff257b43c568e
Parents: aad152d 94f1107
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:17:01 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:17:01 2014 +0200

--
 build.xml |   2 +-
 tools/lib/cassandra-driver-core-2.0.4.jar | Bin 544025 -> 0 bytes
 tools/lib/cassandra-driver-core-2.0.5.jar | Bin 0 -> 544552 bytes
 3 files changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc630832/build.xml
--



[1/5] git commit: Improve error message from 7499

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk aad152d81 -> bc6308321


Improve error message from 7499


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd8f5b9f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd8f5b9f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd8f5b9f

Branch: refs/heads/trunk
Commit: fd8f5b9f7e88bebf180c6142f772ba2808bc8b01
Parents: e4d5eda
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 16:02:43 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 16:03:38 2014 +0200

--
 src/java/org/apache/cassandra/cql3/statements/BatchStatement.java  | 2 +-
 .../apache/cassandra/cql3/statements/ModificationStatement.java| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index cbe3016..8a9a8f0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -127,7 +127,7 @@ public class BatchStatement implements CQLStatement, MeasurableForPreparedCache
             statement.validate(state);
 
             if (hasConditions && statement.requiresRead())
-                throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+                throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
         }
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd8f5b9f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 99dd9d9..165dbc1 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -157,7 +157,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 throw new InvalidRequestException("Cannot provide custom timestamp for conditional update");
 
 if (requiresRead())
-throw new InvalidRequestException("Operations using list indexes are not allowed with IF conditions");
+throw new InvalidRequestException("Operations on lists requiring a read (setting by index and deletions by index or value) are not allowed with IF conditions");
 }
 
 if (isCounter())
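The rewritten error message above enumerates exactly the list operations that force a read before the write. A hypothetical Python sketch of the guard (the operation names are illustrative labels, not Cassandra identifiers):

```python
# Hypothetical sketch (Cassandra's real check is the Java code in the diffs
# above): classify which CQL list operations need a read-before-write, and
# reject them when the statement carries IF conditions.
READ_REQUIRED_OPS = {
    "set_by_index",      # UPDATE t SET l[2] = 'x' WHERE ...
    "delete_by_index",   # DELETE l[2] FROM t WHERE ...
    "discard_by_value",  # UPDATE t SET l = l - ['x'] WHERE ...
}

def validate(operation, has_conditions):
    """Mirrors the hasConditions && requiresRead() guard."""
    if has_conditions and operation in READ_REQUIRED_OPS:
        raise ValueError(
            "Operations on lists requiring a read (setting by index and "
            "deletions by index or value) are not allowed with IF conditions")

validate("append", has_conditions=True)  # appends need no read: allowed
try:
    validate("set_by_index", has_conditions=True)
except ValueError as exc:
    print("rejected:", exc)
```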



[jira] [Updated] (CASSANDRA-7523) add date and time types

2014-08-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7523:
---

Issue Type: New Feature  (was: Bug)

 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
 Fix For: 2.1.1, 3.0


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7523) add date and time types

2014-08-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7523:
---

Fix Version/s: (was: 2.0.10)
   3.0
   2.1.1

Moving to 2.1.1 / 3.0 as discussed.

 add date and time types
 ---

 Key: CASSANDRA-7523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7523
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Joshua McKenzie
Priority: Minor
 Fix For: 2.1.1, 3.0


 http://www.postgresql.org/docs/9.1/static/datatype-datetime.html
 (we already have timestamp; interval is out of scope for now, and see 
 CASSANDRA-6350 for discussion on timestamp-with-time-zone.  but date/time 
 should be pretty easy to add.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Add missing license header

2014-08-22 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 200b80288 - e28e7bf2b


Add missing license header


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e28e7bf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e28e7bf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e28e7bf2

Branch: refs/heads/cassandra-2.0
Commit: e28e7bf2b1a6bed1a4d38d86a1738cb8159c3f92
Parents: 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 17:07:25 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 17:07:25 2014 +0200

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e28e7bf2/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
index 3aa7df0..8949892 100644
--- 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
+++ 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.cassandra.hadoop.cql3;
 
 import com.datastax.driver.core.Cluster;



Git Push Summary

2014-08-22 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.0.10-tentative [created] e28e7bf2b


Git Push Summary

2014-08-22 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.0.10-tentative [deleted] cd37d07ba


[jira] [Resolved] (CASSANDRA-7817) when entire row is deleted, the records in the row seem to counted toward TombstoneOverwhelmingException

2014-08-22 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7817.
---

Resolution: Not a Problem

 when entire row is deleted, the records in the row seem to counted toward 
 TombstoneOverwhelmingException
 

 Key: CASSANDRA-7817
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7817
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.0.9
Reporter: Digant Modha
Priority: Minor

 I saw this behavior in a development cluster, but was able to reproduce it in a 
 single-node setup.  In the development cluster I had more than 52,000 records and 
 used the default values for the tombstone thresholds.
 For testing purposes, I used lower numbers for the thresholds:
 tombstone_warn_threshold: 100
 tombstone_failure_threshold: 1000
 Here are the steps:
 table:
 CREATE TABLE cstestcf_conflate_data (
   key ascii,
   datehr int,
   validfrom timestamp,
   asof timestamp,
   copied boolean,
   datacenter ascii,
   storename ascii,
   value blob,
   version ascii,
   PRIMARY KEY ((key, datehr), validfrom, asof)
 ) WITH CLUSTERING ORDER BY (validfrom DESC, asof DESC) ;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
  count
 ---
470
 (1 rows)
 cqlsh:cstestks> delete from cstestcf_conflate_data WHERE KEY='BK_2' and 
 datehr = 2014082119;
 cqlsh:cstestks> select count(*) from cstestcf_conflate_data WHERE KEY='BK_2' 
 and datehr = 2014082119;
 Request did not complete within rpc_timeout.
 Exception in system.log:
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1363)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
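 The failure mode in the report can be pictured with a toy model. This is not
 Cassandra code; the thresholds come from the report, and the cell layout is
 made up for illustration: after the partition-level DELETE, every cell the
 subsequent read touches is shadowed, and the scan aborts once the count
 passes tombstone_failure_threshold.

```python
# Toy model (thresholds from the report; nothing here is Cassandra code):
# a slice read counts every tombstone it touches, and once the count
# passes tombstone_failure_threshold the read aborts.
TOMBSTONE_WARN_THRESHOLD = 100
TOMBSTONE_FAILURE_THRESHOLD = 1000

class TombstoneOverwhelmingException(Exception):
    pass

def read_partition(cells):
    """cells: list of (value, deleted) pairs."""
    live, tombstones = [], 0
    for value, deleted in cells:
        if deleted:
            tombstones += 1
            if tombstones > TOMBSTONE_FAILURE_THRESHOLD:
                raise TombstoneOverwhelmingException(
                    "scanned %d tombstones" % tombstones)
        else:
            live.append(value)
    if tombstones > TOMBSTONE_WARN_THRESHOLD:
        print("warning: read %d tombstones" % tombstones)
    return live

# Every cell shadowed, as after the whole-partition delete in the report:
try:
    read_partition([(i, True) for i in range(1500)])
except TombstoneOverwhelmingException as exc:
    print("read failed:", exc)
```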



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-7079) allow filtering within wide row

2014-08-22 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-7079.
-

   Resolution: Duplicate
Reproduced In: 2.0.9, 2.0.8, 2.0.7  (was: 2.0.7, 2.0.8, 2.0.9)

 allow filtering within wide row
 ---

 Key: CASSANDRA-7079
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7079
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Reporter: Ashot Golovenko
Assignee: Sylvain Lebresne

 Let's say I have a table with wide rows.
 CREATE TABLE relation (
 u1 bigint,
 u2 bigint,
 f boolean,
 PRIMARY KEY (u1, u2));
 Usually I need to retrieve the whole row: 
 select * from relation where u1 = ?;
 But sometimes I just need the relations within u1 with f = true.
 Currently I can't perform the following without creating an index, which would 
 degrade write performance:
 select * from relation where u1 = ? and f=true allow filtering;
 So for now I filter rows on the client side, which means more network traffic 
 and an unknown amount of extra server resources. Filtering rows on the server 
 side in this case does not look hard to implement.
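 The workaround described above can be sketched as follows. The sample rows are
 made up; this is the client-side equivalent of the rejected
 `SELECT * FROM relation WHERE u1 = ? AND f = true ALLOW FILTERING`:

```python
# Sketch of the reporter's workaround (sample data is invented): fetch the
# whole wide row for u1, then filter f == True on the client, paying the
# extra network transfer the ticket complains about.
rows = [
    {"u1": 1, "u2": 10, "f": True},
    {"u1": 1, "u2": 11, "f": False},
    {"u1": 1, "u2": 12, "f": True},
]

def relations_with_flag(all_rows, u1, flag=True):
    # client-side stand-in for:
    #   SELECT * FROM relation WHERE u1 = ? AND f = true ALLOW FILTERING
    return [r for r in all_rows if r["u1"] == u1 and r["f"] == flag]

print(relations_with_flag(rows, 1))  # only the u2 = 10 and u2 = 12 rows survive
```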



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7473) Dtest: Windows-specific failure: sc_with_row_cache_test (super_column_cache_test.TestSCCache)

2014-08-22 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106941#comment-14106941
 ] 

Philip Thompson commented on CASSANDRA-7473:


I'm not sure this is the issue. There are other tests which use cassandra-cli 
that run fine on Windows.

 Dtest: Windows-specific failure: sc_with_row_cache_test 
 (super_column_cache_test.TestSCCache)
 -

 Key: CASSANDRA-7473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7473
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Kishan Karunaratne
Priority: Minor
  Labels: Windows

 Windows-specific dtest failure:
 {code:title=failure message}
 ==
 FAIL: sc_with_row_cache_test (super_column_cache_test.TestSCCache)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 44, in 
 sc_with_row_cache_test
 assert_columns(cli, ['name'])
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 10, in 
 assert_columns
 assert not cli.has_errors(), cli.errors()
 AssertionError: 'org.apache.thrift.transport.TTransportException: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:185)\r\n\tat 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)\r\n\tat
  
 org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(TFramedTransportFactory.java:41)\r\n\tat
  org.apache.cassandra.cli.CliMain.connect(CliMain.java:65)\r\n\tat 
 org.apache.cassandra.cli.CliMain.main(CliMain.java:237)\r\nCaused by: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 java.net.DualStackPlainSocketImpl.connect0(Native Method)\r\n\tat 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)\r\n\tat
  java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)\r\n\tat 
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)\r\n\tat 
 java.net.Socket.connect(Socket.java:579)\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:180)\r\n\t... 4 
 more\r\nException connecting to 127.0.0.1/9160. Reason: Connection refused: 
 connect.\r\n'
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-7473) Dtest: Windows-specific failure: sc_with_row_cache_test (super_column_cache_test.TestSCCache)

2014-08-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-7473:
--

Assignee: Philip Thompson  (was: Kishan Karunaratne)

 Dtest: Windows-specific failure: sc_with_row_cache_test 
 (super_column_cache_test.TestSCCache)
 -

 Key: CASSANDRA-7473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7473
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Windows-specific dtest failure:
 {code:title=failure message}
 ==
 FAIL: sc_with_row_cache_test (super_column_cache_test.TestSCCache)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 44, in 
 sc_with_row_cache_test
 assert_columns(cli, ['name'])
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 10, in 
 assert_columns
 assert not cli.has_errors(), cli.errors()
 AssertionError: 'org.apache.thrift.transport.TTransportException: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:185)\r\n\tat 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)\r\n\tat
  
 org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(TFramedTransportFactory.java:41)\r\n\tat
  org.apache.cassandra.cli.CliMain.connect(CliMain.java:65)\r\n\tat 
 org.apache.cassandra.cli.CliMain.main(CliMain.java:237)\r\nCaused by: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 java.net.DualStackPlainSocketImpl.connect0(Native Method)\r\n\tat 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)\r\n\tat
  java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)\r\n\tat 
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)\r\n\tat 
 java.net.Socket.connect(Socket.java:579)\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:180)\r\n\t... 4 
 more\r\nException connecting to 127.0.0.1/9160. Reason: Connection refused: 
 connect.\r\n'
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (CASSANDRA-7473) Dtest: Windows-specific failure: sc_with_row_cache_test (super_column_cache_test.TestSCCache)

2014-08-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7473:
---

Comment: was deleted

(was: I'm not sure this is the issue. There are other tests which use 
cassandra-cli that run fine on windows.)

 Dtest: Windows-specific failure: sc_with_row_cache_test 
 (super_column_cache_test.TestSCCache)
 -

 Key: CASSANDRA-7473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7473
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Windows-specific dtest failure:
 {code:title=failure message}
 ==
 FAIL: sc_with_row_cache_test (super_column_cache_test.TestSCCache)
 --
 Traceback (most recent call last):
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 44, in 
 sc_with_row_cache_test
 assert_columns(cli, ['name'])
   File "C:\src\cassandra-dtest\super_column_cache_test.py", line 10, in 
 assert_columns
 assert not cli.has_errors(), cli.errors()
 AssertionError: 'org.apache.thrift.transport.TTransportException: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:185)\r\n\tat 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)\r\n\tat
  
 org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(TFramedTransportFactory.java:41)\r\n\tat
  org.apache.cassandra.cli.CliMain.connect(CliMain.java:65)\r\n\tat 
 org.apache.cassandra.cli.CliMain.main(CliMain.java:237)\r\nCaused by: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 java.net.DualStackPlainSocketImpl.connect0(Native Method)\r\n\tat 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)\r\n\tat
  java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)\r\n\tat 
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)\r\n\tat 
 java.net.Socket.connect(Socket.java:579)\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:180)\r\n\t... 4 
 more\r\nException connecting to 127.0.0.1/9160. Reason: Connection refused: 
 connect.\r\n'
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7816) Updated the 4.2.6. EVENT section in the binary protocol specification

2014-08-22 Thread Michael Penick (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107017#comment-14107017
 ] 

Michael Penick commented on CASSANDRA-7816:
---

Yes, I've seen multiple UP/DOWN messages sent for the same event (I haven't 
tried other events yet). At first, I thought it was a bug in my driver, but I was 
unable to determine the cause, so I checked the java-driver:

https://github.com/datastax/java-driver/blob/2.0/driver-core/src/main/java/com/datastax/driver/core/Host.java#L217-L219
https://github.com/datastax/java-driver/blob/2.1/driver-core/src/main/java/com/datastax/driver/core/Host.java#L219-L221

I figured it might be a side effect of C*'s distributed design and the gossip 
protocol so it didn't seem unreasonable. I haven't dug into the server-side 
code yet to officially confirm why that's happening. I think it makes sense to 
have a note in the protocol specification if this is determined to be part of 
normal operating behavior.
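The client-side defense the linked java-driver code takes can be sketched like this. This is a hedged illustration, not the driver's actual implementation: track the last known state per host and drop UP/DOWN events that report no transition.

```python
# Illustrative sketch (not the java-driver's real code): suppress duplicate
# UP/DOWN topology events by remembering each host's last known state.
class HostStateTracker:
    def __init__(self):
        self.state = {}  # host -> "UP" | "DOWN"

    def on_event(self, host, new_state):
        if self.state.get(host) == new_state:
            return False          # duplicate event: drop it
        self.state[host] = new_state
        return True               # genuine state transition

tracker = HostStateTracker()
print(tracker.on_event("10.0.0.1", "UP"))    # first UP: handled
print(tracker.on_event("10.0.0.1", "UP"))    # duplicate: suppressed
print(tracker.on_event("10.0.0.1", "DOWN"))  # real transition: handled
```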


 Updated the 4.2.6. EVENT section in the binary protocol specification
 ---

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Reporter: Michael Penick
Priority: Trivial
 Attachments: trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4206) AssertionError: originally calculated column size of 629444349 but now it is 588008950

2014-08-22 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107031#comment-14107031
 ] 

Richard Low commented on CASSANDRA-4206:


The root cause of this in 1.2 is CASSANDRA-7808.

 AssertionError: originally calculated column size of 629444349 but now it is 
 588008950
 --

 Key: CASSANDRA-4206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.9
 Environment: Debian Squeeze Linux, kernel 2.6.32, sun-java6-bin 
 6.26-0squeeze1
Reporter: Patrik Modesto

 I've 4 node cluster of Cassandra 1.0.9. There is a rfTest3 keyspace with RF=3 
 and one CF with two secondary indexes. I'm importing data into this CF using 
 Hadoop MapReduce job; each row has fewer than 10 columns. From JMX:
 MaxRowSize:  1597
 MeanRowSize: 369
 And there are some tens of millions of rows.
 It's write-heavy usage and there is high pressure on each node; there are 
 quite a few dropped mutations on each node. After ~12 hours of inserting I see 
 these assertion exceptions on 3 out of 4 nodes:
 {noformat}
 ERROR 06:25:40,124 Fatal exception in thread Thread[HintedHandoff:1,1,main]
 java.lang.RuntimeException: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of 629444349 but 
 now it is 588008950
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:388)
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:256)
at 
 org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:84)
at 
 org.apache.cassandra.db.HintedHandOffManager$3.runMayThrow(HintedHandOffManager.java:437)
at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.AssertionError: originally calculated column size of
 629444349 but now it is 588008950
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:384)
... 7 more
 Caused by: java.lang.AssertionError: originally calculated column size
 of 629444349 but now it is 588008950
at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124)
at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:161)
at 
 org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:380)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
... 3 more
 {noformat}
 Few lines regarding Hints from the output.log:
 {noformat}
  INFO 06:21:26,202 Compacting large row 
 system/HintsColumnFamily:7000 (1712834057 bytes) 
 incrementally
  INFO 06:22:52,610 Compacting large row 
 system/HintsColumnFamily:1000 (2616073981 bytes) 
 incrementally
  INFO 06:22:59,111 flushing high-traffic column family CFS(Keyspace='system', 
 ColumnFamily='HintsColumnFamily') (estimated 305147360 bytes)
  INFO 06:22:59,813 Enqueuing flush of 
 Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 
 7452 ops)
  INFO 06:22:59,814 Writing 
 Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 
 7452 ops)
 {noformat}
 I think the problem may be somehow connected to an IntegerType secondary 
 index. I had a different problem with CF with two secondary indexes, the 
 first UTF8Type, the second IntegerType. After a few hours of inserting data 
 in the afternoon and midnight repair+compact, the next day I couldn't find 
 any row using the IntegerType secondary index. The output was like this:
 {noformat}
 [default@rfTest3] get IndexTest where col1 = 
 '3230727:http://zaskolak.cz/download.php';
 ---
 RowKey: 3230727:8383582:http://zaskolak.cz/download.php
 = (column=col1, value=3230727:http://zaskolak.cz/download.php, 
 timestamp=1335348630332000)
 = (column=col2, value=8383582, timestamp=1335348630332000)
 ---
 RowKey: 

[jira] [Commented] (CASSANDRA-7816) Updated the 4.2.6. EVENT section in the binary protocol specification

2014-08-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107039#comment-14107039
 ] 

Sylvain Lebresne commented on CASSANDRA-7816:
-

bq. Yes, I've seen multiple UP/DOWN messages sent for the same event

I guess my question is, do you have a simple way to reproduce that? Or is that 
something you saw once or twice completely randomly during your tests?

The reason I added this kind of comment in the java driver is that I didn't 
want to dive into the gossip code to make extra sure that it was impossible 
to get a duplicate event even on a race, and I didn't want people to get 
surprised and blame the driver if that happened, but I don't think there is a 
good reason this could happen easily. So I wondered if you have easy steps to 
reproduce, because if that's the case we can have a quick look at fixing it. 
Note that I'm fine with specifying in the doc that there is no strong 
guarantee on this anyway; it's just that your ticket makes me wonder if there 
isn't something we can easily fix server side.

 Updated the 4.2.6. EVENT section in the binary protocol specification
 ---

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation & website
Reporter: Michael Penick
Priority: Trivial
 Attachments: trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7473) Dtest: Windows-specific failure: sc_with_row_cache_test (super_column_cache_test.TestSCCache)

2014-08-22 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107040#comment-14107040
 ] 

Aleksey Yeschenko commented on CASSANDRA-7473:
--

This particular test, and the rest of the tests that depend on CLI, should be 
rewritten to use Thrift directly (not CQL3).

Pretty sure we have a ticket for it, actually.

 Dtest: Windows-specific failure: sc_with_row_cache_test 
 (super_column_cache_test.TestSCCache)
 -

 Key: CASSANDRA-7473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7473
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Windows-specific dtest failure:
 {code:title=failure message}
 ==
 FAIL: sc_with_row_cache_test (super_column_cache_test.TestSCCache)
 --
 Traceback (most recent call last):
   File C:\src\cassandra-dtest\super_column_cache_test.py, line 44, in 
 sc_with_row_cache_test
 assert_columns(cli, ['name'])
   File C:\src\cassandra-dtest\super_column_cache_test.py, line 10, in 
 assert_columns
 assert not cli.has_errors(), cli.errors()
 AssertionError: 'org.apache.thrift.transport.TTransportException: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:185)\r\n\tat 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)\r\n\tat
  
 org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(TFramedTransportFactory.java:41)\r\n\tat
  org.apache.cassandra.cli.CliMain.connect(CliMain.java:65)\r\n\tat 
 org.apache.cassandra.cli.CliMain.main(CliMain.java:237)\r\nCaused by: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 java.net.DualStackPlainSocketImpl.connect0(Native Method)\r\n\tat 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)\r\n\tat
  java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)\r\n\tat 
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)\r\n\tat 
 java.net.Socket.connect(Socket.java:579)\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:180)\r\n\t... 4 
 more\r\nException connecting to 127.0.0.1/9160. Reason: Connection refused: 
 connect.\r\n'
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7642) Adaptive Consistency

2014-08-22 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107052#comment-14107052
 ] 

Tupshin Harper commented on CASSANDRA-7642:
---

 I don't like the min/max consistency terminology in the context of:

Transparent downgrading violates the CL contract, and that contract is
considered to be just about the most important element of Cassandra's
runtime behaviour. Fully transparent downgrading without any contract
is dangerous. However, would it be a problem if we specify explicitly
only two discrete CL levels - MIN_CL and MAX_CL?

I strongly believe that it is a problem even with only two explicit
levels specified.

As such, I propose two changes to the spec:

1) the terminology changes from min/max to terms representing "block 
until" for max and "actual contractual consistency level" for min.
2) Even more critically, ensure that the protocol and driver 
provide a communication mechanism back to the client indicating, for every 
operation, which of the two CL levels was fulfilled by the request.

 Adaptive Consistency
 

 Key: CASSANDRA-7642
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7642
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Rustam Aliyev
 Fix For: 3.0


 h4. Problem
 At minimum, the application requires consistency level X, which must be a 
 fault-tolerant CL. However, when there is no failure it would be advantageous 
 to use the stronger consistency Y (Y > X).
 h4. Suggestion
 The application defines minimum (X) and maximum (Y) consistency levels. C* can 
 apply adaptive consistency logic to use Y whenever possible and downgrade to 
 X when failures occur.
 The implementation should not negatively impact performance. Therefore, state 
 has to be maintained globally (not per request).
 h4. Example
 {{MIN_CL=LOCAL_QUORUM}}
 {{MAX_CL=EACH_QUORUM}}
 h4. Use Case
 Consider a case where a user wants to maximize their uptime and consistency. 
 They are designing a system using C* where transactions are read/written with 
 LOCAL_QUORUM and distributed across 2 DCs. Occasional inconsistencies between 
 DCs can be tolerated. R/W with LOCAL_QUORUM is satisfactory in most of the 
 cases.
 The application requires new transactions to be read back right after they 
 were generated. Writes and reads could be done through different DCs (no 
 stickiness). In some cases, when a user writes into DC1 and reads immediately 
 from DC2, replication delay may cause problems: the transaction won't show up 
 on the read in DC2, so the user will retry and create a duplicate transaction. 
 Occasional duplicates are fine and the goal is to minimize the number of dups.
 Therefore, we want to perform writes with stronger consistency (EACH_QUORUM) 
 whenever possible without compromising on availability. Using adaptive 
 consistency they should be able to define:
{{Read CL = LOCAL_QUORUM}}
{{Write CL = ADAPTIVE (MIN:LOCAL_QUORUM, MAX:EACH_QUORUM)}}
 Similar scenario can be described for {{Write CL = ADAPTIVE (MIN:QUORUM, 
 MAX:ALL)}} case.
 h4. Criticism
 # This functionality can/should be implemented by the user.
 bq. It will be hard for an average user to implement topology monitoring and 
 a state machine. Moreover, this is a pattern which repeats.
 # Transparent downgrading violates the CL contract, and that contract is 
 considered to be just about the most important element of Cassandra's runtime 
 behavior.
 bq. Fully transparent downgrading without any contract is dangerous. However, 
 would it be a problem if we specify explicitly only two discrete CL levels - 
 MIN_CL and MAX_CL?
 # If you have split-brain DCs (partitioned in CAP), you have to sacrifice 
 either consistency or availability, and auto-downgrading sacrifices 
 consistency in dangerous ways if the application isn't designed to handle it. 
 And if the application is designed to handle it, then it should be able to 
 handle it in normal circumstances, not just degraded/extraordinary ones.
 bq. Agreed. The application should be designed for MIN_CL. In that case, 
 MAX_CL will not cause much harm, only add flexibility.
 # It might be a better idea to downgrade loudly instead of silently, meaning 
 that the client code does an explicit retry with lower consistency on failure 
 and takes some other action to inform users or operators of the problem. It 
 is the silent part of the downgrading that could be dangerous.
 bq. There are certainly cases where the user should be informed when 
 consistency changes in order to perform a custom action. For this purpose we 
 could allow/require the user to register a callback function that is 
 triggered when the consistency level changes. Best practices could be 
 enforced by requiring the callback.
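The downgrade-retry-callback behavior discussed above can be sketched on the client side. This is a hypothetical illustration only, not Cassandra's or any driver's actual API: the {{CL}} enum, {{Session}} interface, and {{AdaptiveWriter}} class are stand-ins, and an unmet consistency level is signaled by a plain RuntimeException.

```java
import java.util.function.Consumer;

// Stand-in consistency levels (illustrative, not the driver's enum).
enum CL { LOCAL_QUORUM, EACH_QUORUM }

// Stand-in session: throws RuntimeException when the requested CL cannot be met.
interface Session {
    void write(String query, CL cl);
}

class AdaptiveWriter {
    private final CL min, max;
    private volatile CL current;          // global state, not per-request
    private final Consumer<CL> onChange;  // callback fired whenever the CL changes

    AdaptiveWriter(CL min, CL max, Consumer<CL> onChange) {
        this.min = min;
        this.max = max;
        this.current = max;               // start optimistic: MAX_CL
        this.onChange = onChange;
    }

    CL current() { return current; }

    void write(Session session, String query) {
        try {
            session.write(query, current);
        } catch (RuntimeException unavailable) {
            if (current == min) throw unavailable;  // already at the floor: fail
            current = min;                          // downgrade "loudly"
            onChange.accept(current);               // inform the application
            session.write(query, current);          // retry at MIN_CL
        }
    }

    // A background topology monitor would call this once all DCs look healthy.
    void restore() {
        if (current != max) {
            current = max;
            onChange.accept(current);
        }
    }
}
```

The callback is exactly the hook proposed in the last criticism reply: the application is told about every transition, so the downgrade is explicit rather than silent.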



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7473) Dtest: Windows-specific failure: sc_with_row_cache_test (super_column_cache_test.TestSCCache)

2014-08-22 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107098#comment-14107098
 ] 

Philip Thompson commented on CASSANDRA-7473:


I have corrected the specific error Josh was seeing. Regarding Kishan's 
point, the cli bug is a ccm issue. I agree with Aleksey that the correct 
resolution is to use Thrift directly. I do not know of an existing ticket for 
that, though, so I will make the changes to sc_with_row_cache_test as part of 
this ticket.

 Dtest: Windows-specific failure: sc_with_row_cache_test 
 (super_column_cache_test.TestSCCache)
 -

 Key: CASSANDRA-7473
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7473
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: win7x64 SP1, Cassandra 3.0 / trunk
Reporter: Joshua McKenzie
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows

 Windows-specific dtest failure:
 {code:title=failure message}
 ==
 FAIL: sc_with_row_cache_test (super_column_cache_test.TestSCCache)
 --
 Traceback (most recent call last):
   File C:\src\cassandra-dtest\super_column_cache_test.py, line 44, in 
 sc_with_row_cache_test
 assert_columns(cli, ['name'])
   File C:\src\cassandra-dtest\super_column_cache_test.py, line 10, in 
 assert_columns
 assert not cli.has_errors(), cli.errors()
 AssertionError: 'org.apache.thrift.transport.TTransportException: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:185)\r\n\tat 
 org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)\r\n\tat
  
 org.apache.cassandra.thrift.TFramedTransportFactory.openTransport(TFramedTransportFactory.java:41)\r\n\tat
  org.apache.cassandra.cli.CliMain.connect(CliMain.java:65)\r\n\tat 
 org.apache.cassandra.cli.CliMain.main(CliMain.java:237)\r\nCaused by: 
 java.net.ConnectException: Connection refused: connect\r\n\tat 
 java.net.DualStackPlainSocketImpl.connect0(Native Method)\r\n\tat 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)\r\n\tat
  
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)\r\n\tat
  java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)\r\n\tat 
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)\r\n\tat 
 java.net.Socket.connect(Socket.java:579)\r\n\tat 
 org.apache.thrift.transport.TSocket.open(TSocket.java:180)\r\n\t... 4 
 more\r\nException connecting to 127.0.0.1/9160. Reason: Connection refused: 
 connect.\r\n'
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5839) Save repair data to system table

2014-08-22 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107111#comment-14107111
 ] 

sankalp kohli commented on CASSANDRA-5839:
--

Reviewed your patch. Here is the feedback:
1) The keyspace name is added in two places, DatabaseDescriptor and Schema. Why 
can't we use one list and reference it in both places for cleanup purposes?
2) It would help if we added the exception to the table. This will make it 
easier to know why a repair failed, and whether it keeps failing for the same 
reason or for different ones.
3) If there is an exception in RepairSession, it won't be written to this table. 
Maybe we can catch it, write it to the table, and re-throw the exception.
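Point 3 above is essentially a catch, record, re-throw pattern. A minimal sketch, under stated assumptions: {{RepairHistoryTable}} and {{RepairRunner}} are illustrative stand-ins, not Cassandra's actual system-table or repair APIs.

```java
// Hypothetical stand-in for the proposed repair-history system table.
final class RepairHistoryTable {
    final java.util.List<String> rows = new java.util.ArrayList<>();

    void recordFailure(String sessionId, Throwable t) {
        rows.add(sessionId + ": " + t.getMessage());
    }
}

// Hypothetical repair driver: persist the failure reason, then propagate it.
final class RepairRunner {
    private final RepairHistoryTable table;

    RepairRunner(RepairHistoryTable table) { this.table = table; }

    void runSession(String sessionId, Runnable body) {
        try {
            body.run();
        } catch (RuntimeException e) {
            table.recordFailure(sessionId, e);  // record why the repair failed
            throw e;                            // re-throw so callers still see it
        }
    }
}
```

The caller's error handling is unchanged; the table simply gains a durable record of each failure.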

 Save repair data to system table
 

 Key: CASSANDRA-5839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5839
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Tools
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 3.0

 Attachments: 0001-5839.patch, 2.0.4-5839-draft.patch, 
 2.0.6-5839-v2.patch


 As noted in CASSANDRA-2405, it would be useful to store repair results, 
 particularly with sub-range repair available (CASSANDRA-5280).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107264#comment-14107264
 ] 

Yuki Morishita edited comment on CASSANDRA-7810 at 8/22/14 6:37 PM:


Confirmed.
When merging two SSTables, one with a composite cell and the other with a 
RangeTombstone, only the RangeTombstone is removed from the merged SSTable 
(with gc_grace=0).
I think something goes wrong when comparing those two cell names while 
reducing.


was (Author: yukim):
Confirmed.
When merging two SSTables, one with a composite cell and the other with a 
RangeTombstone, only the RangeTombstone is removed from the merged SSTable.
I think something goes wrong when comparing those two cell names while 
reducing.

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py


 # single node environment
 CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 use test;
 create table foo (a int, b int, primary key(a,b));
 alter table foo with gc_grace_seconds = 0;
 insert into foo (a,b) values (1,2);
 select * from foo;
 -- one row returned. so far, so good.
 delete from foo where a=1 and b=2;
 select * from foo;
 -- 0 rows. still rainbows and kittens.
 bin/nodetool flush;
 bin/nodetool compact;
 select * from foo;
  a | b
 ---+---
  1 | 2
 (1 rows)
 gahhh.
 looks like the tombstones were considered obsolete and thrown away before 
 being applied to the compaction?  gc_grace just means the interval after 
 which they won't be available to remote nodes' repair - they should still 
 apply locally regardless (and they do, correctly, in 2.0.9)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107264#comment-14107264
 ] 

Yuki Morishita commented on CASSANDRA-7810:
---

Confirmed.
When merging two SSTables, one with a composite cell and the other with a 
RangeTombstone, only the RangeTombstone is removed from the merged SSTable.
I think something goes wrong when comparing those two cell names while 
reducing.

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py


 # single node environment
 CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 use test;
 create table foo (a int, b int, primary key(a,b));
 alter table foo with gc_grace_seconds = 0;
 insert into foo (a,b) values (1,2);
 select * from foo;
 -- one row returned. so far, so good.
 delete from foo where a=1 and b=2;
 select * from foo;
 -- 0 rows. still rainbows and kittens.
 bin/nodetool flush;
 bin/nodetool compact;
 select * from foo;
  a | b
 ---+---
  1 | 2
 (1 rows)
 gahhh.
 looks like the tombstones were considered obsolete and thrown away before 
 being applied to the compaction?  gc_grace just means the interval after 
 which they won't be available to remote nodes' repair - they should still 
 apply locally regardless (and they do, correctly, in 2.0.9)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Save source exception in CorruptBlockException

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 e28e7bf2b -> a5617d689


Save source exception in CorruptBlockException

ninja patch by Pavel Yaskevich; ninja reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5617d68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5617d68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5617d68

Branch: refs/heads/cassandra-2.0
Commit: a5617d689c8feee40f9412c373d02c9f1770d359
Parents: e28e7bf
Author: Pavel Yaskevich xe...@apache.org
Authored: Fri Aug 22 21:52:11 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:52:11 2014 +0300

--
 .../io/compress/CompressedRandomAccessReader.java |  2 +-
 .../cassandra/io/compress/CorruptBlockException.java  | 14 --
 2 files changed, 13 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 131a4d6..64495b8 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -119,7 +119,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 }
 catch (IOException e)
 {
-throw new CorruptBlockException(getPath(), chunk);
+throw new CorruptBlockException(getPath(), chunk, e);
 }
 
 if (metadata.parameters.getCrcCheckChance() > FBUtilities.threadLocalRandom().nextDouble())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java 
b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
index 60b4d1f..bcce6b9 100644
--- a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
+++ b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
@@ -23,11 +23,21 @@ public class CorruptBlockException extends IOException
 {
 public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk)
 {
-this(filePath, chunk.offset, chunk.length);
+this(filePath, chunk, null);
+}
+
+public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk, Throwable cause)
+{
+this(filePath, chunk.offset, chunk.length, cause);
 }
 
 public CorruptBlockException(String filePath, long offset, int length)
 {
-super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length));
+this(filePath, offset, length, null);
+}
+
+public CorruptBlockException(String filePath, long offset, int length, 
Throwable cause)
+{
+super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length), cause);
 }
 }



[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35cfa61c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35cfa61c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35cfa61c

Branch: refs/heads/cassandra-2.1.0
Commit: 35cfa61c53c4caac680630e8a7e118b0d4692e17
Parents: 8da1343 a5617d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:55:56 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:55:56 2014 +0300

--

--




[5/5] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e3a4fba4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e3a4fba4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e3a4fba4

Branch: refs/heads/cassandra-2.1
Commit: e3a4fba4fc5888eeac3ff4bc0bf79809b7234b39
Parents: 6434ad8 35cfa61
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:56:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:56:25 2014 +0300

--

--




[1/3] git commit: Add missing license header

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 8da134377 -> 35cfa61c5


Add missing license header


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e28e7bf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e28e7bf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e28e7bf2

Branch: refs/heads/cassandra-2.1.0
Commit: e28e7bf2b1a6bed1a4d38d86a1738cb8159c3f92
Parents: 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 17:07:25 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 17:07:25 2014 +0200

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e28e7bf2/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
index 3aa7df0..8949892 100644
--- 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
+++ 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.cassandra.hadoop.cql3;
 
 import com.datastax.driver.core.Cluster;



[4/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35cfa61c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35cfa61c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35cfa61c

Branch: refs/heads/cassandra-2.1
Commit: 35cfa61c53c4caac680630e8a7e118b0d4692e17
Parents: 8da1343 a5617d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:55:56 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:55:56 2014 +0300

--

--




[2/3] git commit: Save source exception in CorruptBlockException

2014-08-22 Thread aleksey
Save source exception in CorruptBlockException

ninja patch by Pavel Yaskevich; ninja reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5617d68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5617d68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5617d68

Branch: refs/heads/cassandra-2.1.0
Commit: a5617d689c8feee40f9412c373d02c9f1770d359
Parents: e28e7bf
Author: Pavel Yaskevich xe...@apache.org
Authored: Fri Aug 22 21:52:11 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:52:11 2014 +0300

--
 .../io/compress/CompressedRandomAccessReader.java |  2 +-
 .../cassandra/io/compress/CorruptBlockException.java  | 14 --
 2 files changed, 13 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 131a4d6..64495b8 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -119,7 +119,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 }
 catch (IOException e)
 {
-throw new CorruptBlockException(getPath(), chunk);
+throw new CorruptBlockException(getPath(), chunk, e);
 }
 
 if (metadata.parameters.getCrcCheckChance() > FBUtilities.threadLocalRandom().nextDouble())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java 
b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
index 60b4d1f..bcce6b9 100644
--- a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
+++ b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
@@ -23,11 +23,21 @@ public class CorruptBlockException extends IOException
 {
 public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk)
 {
-this(filePath, chunk.offset, chunk.length);
+this(filePath, chunk, null);
+}
+
+public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk, Throwable cause)
+{
+this(filePath, chunk.offset, chunk.length, cause);
 }
 
 public CorruptBlockException(String filePath, long offset, int length)
 {
-super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length));
+this(filePath, offset, length, null);
+}
+
+public CorruptBlockException(String filePath, long offset, int length, 
Throwable cause)
+{
+super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length), cause);
 }
 }



[2/5] git commit: Save source exception in CorruptBlockException

2014-08-22 Thread aleksey
Save source exception in CorruptBlockException

ninja patch by Pavel Yaskevich; ninja reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5617d68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5617d68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5617d68

Branch: refs/heads/cassandra-2.1
Commit: a5617d689c8feee40f9412c373d02c9f1770d359
Parents: e28e7bf
Author: Pavel Yaskevich xe...@apache.org
Authored: Fri Aug 22 21:52:11 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:52:11 2014 +0300

--
 .../io/compress/CompressedRandomAccessReader.java |  2 +-
 .../cassandra/io/compress/CorruptBlockException.java  | 14 --
 2 files changed, 13 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 131a4d6..64495b8 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -119,7 +119,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 }
 catch (IOException e)
 {
-throw new CorruptBlockException(getPath(), chunk);
+throw new CorruptBlockException(getPath(), chunk, e);
 }
 
 if (metadata.parameters.getCrcCheckChance() > FBUtilities.threadLocalRandom().nextDouble())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java 
b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
index 60b4d1f..bcce6b9 100644
--- a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
+++ b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
@@ -23,11 +23,21 @@ public class CorruptBlockException extends IOException
 {
 public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk)
 {
-this(filePath, chunk.offset, chunk.length);
+this(filePath, chunk, null);
+}
+
+public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk, Throwable cause)
+{
+this(filePath, chunk.offset, chunk.length, cause);
 }
 
 public CorruptBlockException(String filePath, long offset, int length)
 {
-super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length));
+this(filePath, offset, length, null);
+}
+
+public CorruptBlockException(String filePath, long offset, int length, 
Throwable cause)
+{
+super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length), cause);
 }
 }



[1/5] git commit: Add missing license header

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 94f1107ec -> e3a4fba4f


Add missing license header


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e28e7bf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e28e7bf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e28e7bf2

Branch: refs/heads/cassandra-2.1
Commit: e28e7bf2b1a6bed1a4d38d86a1738cb8159c3f92
Parents: 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 17:07:25 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 17:07:25 2014 +0200

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e28e7bf2/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
index 3aa7df0..8949892 100644
--- 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
+++ 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.cassandra.hadoop.cql3;
 
 import com.datastax.driver.core.Cluster;



[3/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6434ad8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6434ad8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6434ad8b

Branch: refs/heads/cassandra-2.1
Commit: 6434ad8bc901127b67d4b7efd95a36175326aa96
Parents: 94f1107 a5617d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:55:31 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:55:31 2014 +0300

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 .../io/compress/CompressedRandomAccessReader.java  |  2 +-
 .../io/compress/CorruptBlockException.java | 14 --
 3 files changed, 30 insertions(+), 3 deletions(-)
--




[3/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6434ad8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6434ad8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6434ad8b

Branch: refs/heads/trunk
Commit: 6434ad8bc901127b67d4b7efd95a36175326aa96
Parents: 94f1107 a5617d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:55:31 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:55:31 2014 +0300

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 .../io/compress/CompressedRandomAccessReader.java  |  2 +-
 .../io/compress/CorruptBlockException.java | 14 --
 3 files changed, 30 insertions(+), 3 deletions(-)
--




[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1.0

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35cfa61c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35cfa61c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35cfa61c

Branch: refs/heads/trunk
Commit: 35cfa61c53c4caac680630e8a7e118b0d4692e17
Parents: 8da1343 a5617d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:55:56 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:55:56 2014 +0300

--

--




[5/6] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e3a4fba4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e3a4fba4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e3a4fba4

Branch: refs/heads/trunk
Commit: e3a4fba4fc5888eeac3ff4bc0bf79809b7234b39
Parents: 6434ad8 35cfa61
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:56:25 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:56:25 2014 +0300

--

--




[2/6] git commit: Save source exception in CorruptBlockException

2014-08-22 Thread aleksey
Save source exception in CorruptBlockException

ninja patch by Pavel Yaskevich; ninja reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5617d68
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5617d68
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5617d68

Branch: refs/heads/trunk
Commit: a5617d689c8feee40f9412c373d02c9f1770d359
Parents: e28e7bf
Author: Pavel Yaskevich xe...@apache.org
Authored: Fri Aug 22 21:52:11 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:52:11 2014 +0300

--
 .../io/compress/CompressedRandomAccessReader.java |  2 +-
 .../cassandra/io/compress/CorruptBlockException.java  | 14 --
 2 files changed, 13 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 131a4d6..64495b8 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -119,7 +119,7 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 }
 catch (IOException e)
 {
-throw new CorruptBlockException(getPath(), chunk);
+throw new CorruptBlockException(getPath(), chunk, e);
 }
 
 if (metadata.parameters.getCrcCheckChance() > FBUtilities.threadLocalRandom().nextDouble())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5617d68/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java 
b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
index 60b4d1f..bcce6b9 100644
--- a/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
+++ b/src/java/org/apache/cassandra/io/compress/CorruptBlockException.java
@@ -23,11 +23,21 @@ public class CorruptBlockException extends IOException
 {
 public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk)
 {
-this(filePath, chunk.offset, chunk.length);
+this(filePath, chunk, null);
+}
+
+public CorruptBlockException(String filePath, CompressionMetadata.Chunk 
chunk, Throwable cause)
+{
+this(filePath, chunk.offset, chunk.length, cause);
 }
 
 public CorruptBlockException(String filePath, long offset, int length)
 {
-super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length));
+this(filePath, offset, length, null);
+}
+
+public CorruptBlockException(String filePath, long offset, int length, 
Throwable cause)
+{
+super(String.format("(%s): corruption detected, chunk at %d of length %d.", filePath, offset, length), cause);
 }
 }
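
The patch above threads the underlying IOException into CorruptBlockException as its cause, so the root failure survives in stack traces. A minimal, self-contained sketch of the same cause-chaining pattern — the class and paths here are stand-ins, not Cassandra's actual reader code:

```java
import java.io.IOException;

// Illustrative sketch of exception cause chaining as done in the patch above.
public class CorruptBlockDemo {
    static class CorruptBlockException extends IOException {
        CorruptBlockException(String filePath, long offset, int length, Throwable cause) {
            // Same message shape as the constructor in the diff; the cause is preserved.
            super(String.format("(%s): corruption detected, chunk at %d of length %d.",
                                filePath, offset, length), cause);
        }
    }

    static void readChunk() throws CorruptBlockException {
        try {
            throw new IOException("checksum mismatch");  // stand-in for a failed chunk read
        } catch (IOException e) {
            // Without passing `e`, the "checksum mismatch" detail would be lost.
            throw new CorruptBlockException("/data/sstable-Data.db", 4096, 65536, e);
        }
    }

    public static void main(String[] args) {
        try {
            readChunk();
        } catch (CorruptBlockException e) {
            System.out.println(e.getMessage());
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```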



[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1' into trunk

Conflicts:

src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3648549a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3648549a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3648549a

Branch: refs/heads/trunk
Commit: 3648549a96d367f11b15e743bfc16972c07c2d9e
Parents: bc63083 e3a4fba
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Aug 22 21:57:47 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Aug 22 21:57:47 2014 +0300

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 .../io/compress/CorruptBlockException.java | 14 --
 2 files changed, 29 insertions(+), 2 deletions(-)
--




[1/6] git commit: Add missing license header

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk bc6308321 -> 3648549a9


Add missing license header


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e28e7bf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e28e7bf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e28e7bf2

Branch: refs/heads/trunk
Commit: e28e7bf2b1a6bed1a4d38d86a1738cb8159c3f92
Parents: 200b802
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Aug 22 17:07:25 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Fri Aug 22 17:07:25 2014 +0200

--
 .../LimitedLocalNodeFirstLocalBalancingPolicy.java | 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e28e7bf2/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
index 3aa7df0..8949892 100644
--- 
a/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
+++ 
b/src/java/org/apache/cassandra/hadoop/cql3/LimitedLocalNodeFirstLocalBalancingPolicy.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.cassandra.hadoop.cql3;
 
 import com.datastax.driver.core.Cluster;



[jira] [Commented] (CASSANDRA-7809) UDF cleanups (#7395 follow-up)

2014-08-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107424#comment-14107424
 ] 

Tyler Hobbs commented on CASSANDRA-7809:


This is pretty close to being good to go.

Some nitpicks:
* FunctionCall.java: "Cannot assign result of function..." should use 
receiver.name() in the message
* Functions.java: "none of its type signature matches" should be "none of its 
type signatures match"
* NativeFunction.java: unused import of java.util.List
* AbstractFunction.java: unused import of java.util.Arrays
* CreateFunctionStatement.java: unused imports
* DropFunctionStatement.java: unused import of Schema
* UFTest.java: unused import, should put stopForcingPreparedValues() in a 
finally block

It also looks like we don't have any unit or dtest coverage for type casts 
(besides the few you added in UFTest).  Can you add unit tests here or open a 
ticket for that?  We also don't document typecasting in the CQL3 docs.

 UDF cleanups (#7395 follow-up)
 --

 Key: CASSANDRA-7809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7809
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 3.0


 The current code for UDF is largely not reusing the pre-existing 
 mechanics/code for native/hardcoded functions. I don't see a good reason for 
 that, but I do see downsides: it's more code to maintain and makes it much 
 easier to have inconsistent behavior between hard-coded and user-defined 
 functions. More concretely, {{UDFRegistery/UDFFunctionOverloads}} 
 fundamentally does the same thing as {{Functions}}; we should just merge 
 both. I'm also not sure there is a need for both {{UFMetadata}} and 
 {{UDFunction}}, since {{UFMetadata}} really only stores info on a given 
 function (contrary to what the javadoc pretends).  I suggest we consolidate 
 all this to clean up the code, but also as a way to fix 2 problems that the 
 UDF code has but that the existing code for native functions doesn't:
 * if there are multiple overloads of a function, the UDF code picks the first 
 version whose argument types are compatible with the concrete arguments 
 provided. This is broken for bind markers: we don't know the type of markers, 
 so the first function match may not at all be what the user wants. The only 
 sensible choice is to detect that type of ambiguity and reject the query, 
 asking the user to explicitly type-cast their bind marker (which is what the 
 code for hard-coded functions does).
 * the UDF code builds a function signature using the CQL type names of the 
 arguments and uses that to distinguish multiple overloads in the schema. This 
 means in particular that {{f(v text)}} and {{f(v varchar)}} are considered 
 distinct, which is wrong since CQL considers {{varchar}} a simple alias of 
 {{text}}. And in fact, the function resolution does consider them aliases, 
 leading to seemingly broken behavior.
 There are a few other small problems that I'm proposing to fix while doing 
 this cleanup:
 * Function creation only uses the function name when checking whether the 
 function exists, which is not enough since we allow multiple overloadings. 
 You can bypass the check by using OR REPLACE, but that's obviously broken.
 * {{IF NOT EXISTS}} for function creation is broken.
 * The code allows replacing a function (with {{OR REPLACE}}) with a new 
 function that has an incompatible return type. Imo that's dodgy and we should 
 refuse it (users can still drop and re-create the function if they really 
 want).
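
The first problem above — picking the first compatible overload when a bind marker leaves an argument type unknown — can be sketched as a small resolution model. This is illustrative only, not Cassandra's actual resolution code; all class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical model: a bind marker carries no type, so if more than one
// overload could accept it, the call must be rejected rather than silently
// picking the first compatible overload.
public class OverloadResolutionSketch {
    static final class Overload {
        final List<String> argTypes;
        Overload(List<String> argTypes) { this.argTypes = argTypes; }
    }

    // A null entry in argTypes models a bind marker ('?') of unknown type.
    static Overload resolve(List<Overload> candidates, List<String> argTypes) {
        List<Overload> matches = new ArrayList<>();
        for (Overload o : candidates) {
            if (o.argTypes.size() != argTypes.size())
                continue;
            boolean compatible = true;
            for (int i = 0; i < argTypes.size(); i++)
                if (argTypes.get(i) != null && !argTypes.get(i).equals(o.argTypes.get(i)))
                    compatible = false;
            if (compatible)
                matches.add(o);
        }
        if (matches.size() > 1)  // ambiguous: ask the user to type-cast the marker
            throw new IllegalArgumentException("ambiguous call; type-cast the bind marker");
        if (matches.isEmpty())
            throw new IllegalArgumentException("no matching overload");
        return matches.get(0);
    }

    public static void main(String[] args) {
        List<Overload> fs = List.of(new Overload(List.of("text")),
                                    new Overload(List.of("int")));
        resolve(fs, List.of("int"));  // concrete literal: unambiguous
        try {
            resolve(fs, Collections.singletonList(null));  // bind marker: ambiguous
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```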



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7810) tombstones gc'd before being locally applied

2014-08-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107538#comment-14107538
 ] 

Yuki Morishita commented on CASSANDRA-7810:
---

So this has been broken since we removed PreCompactedRow (CASSANDRA-6142).
In LazilyCompactedRow, a gcable RangeTombstone is just thrown away without 
removing the cells it covers.

 tombstones gc'd before being locally applied
 

 Key: CASSANDRA-7810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.0.rc6
Reporter: Jonathan Halliday
Assignee: Marcus Eriksson
 Fix For: 2.1.0

 Attachments: range_tombstone_test.py


 # single node environment
 CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1 };
 use test;
 create table foo (a int, b int, primary key(a,b));
 alter table foo with gc_grace_seconds = 0;
 insert into foo (a,b) values (1,2);
 select * from foo;
 -- one row returned. so far, so good.
 delete from foo where a=1 and b=2;
 select * from foo;
 -- 0 rows. still rainbows and kittens.
 bin/nodetool flush;
 bin/nodetool compact;
 select * from foo;
  a | b
 ---+---
  1 | 2
 (1 rows)
 gahhh.
 looks like the tombstones were considered obsolete and thrown away before 
 being applied to the compaction?  gc_grace just means the interval after 
 which they won't be available to remote nodes repair - they should still 
 apply locally regardless (and do correctly in 2.0.9)
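
The invariant the report describes can be modeled as two separate steps: apply the tombstone to the cells it covers, and only then decide whether the tombstone itself may be purged. This is an illustrative sketch, not Cassandra's compaction code; all names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model: a range tombstone past gc_grace may be purged from the
// compacted output, but only after the cells it covers have been removed.
// Dropping the gcable tombstone before applying it is the bug reported above.
public class TombstonePurgeSketch {
    static final class Cell {
        final int clustering; final long timestamp;
        Cell(int clustering, long timestamp) { this.clustering = clustering; this.timestamp = timestamp; }
    }
    static final class RangeTombstone {
        final int min, max; final long markedForDeleteAt, localDeletionTime;
        RangeTombstone(int min, int max, long markedForDeleteAt, long localDeletionTime) {
            this.min = min; this.max = max;
            this.markedForDeleteAt = markedForDeleteAt; this.localDeletionTime = localDeletionTime;
        }
    }

    /** Step 1: always drop the cells the tombstone shadows, gcable or not. */
    static List<Cell> applyTombstone(List<Cell> cells, RangeTombstone rt) {
        List<Cell> out = new ArrayList<>();
        for (Cell c : cells)
            if (c.clustering < rt.min || c.clustering > rt.max || c.timestamp > rt.markedForDeleteAt)
                out.add(c);  // outside the range, or newer than the deletion: survives
        return out;
    }

    /** Step 2: only then decide whether the tombstone itself is still written out. */
    static boolean writeTombstone(RangeTombstone rt, long gcBefore) {
        return rt.localDeletionTime >= gcBefore;
    }

    public static void main(String[] args) {
        RangeTombstone rt = new RangeTombstone(0, 5, 20L, 0L);
        List<Cell> after = applyTombstone(List.of(new Cell(2, 10L)), rt);
        System.out.println(after.size() + " cells survive; tombstone kept: "
                           + writeTombstone(rt, Long.MAX_VALUE));
    }
}
```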



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7818) Improve compaction logging

2014-08-22 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107547#comment-14107547
 ] 

Yuki Morishita commented on CASSANDRA-7818:
---

FYI each CompactionTask is already assigned a task ID to keep track of 
system.compaction_in_progress.

 Improve compaction logging
 --

 Key: CASSANDRA-7818
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7818
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Priority: Minor
  Labels: compaction, lhf
 Fix For: 2.1.1


 We should log more information about compactions to be able to debug issues 
 more efficiently
 * give each CompactionTask an id that we log (so that you can relate the 
 start-compaction-messages to the finished-compaction ones)
 * log what level the sstables are taken from



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7606) Add IF [NOT] EXISTS to CREATE/DROP trigger

2014-08-22 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-7606:
--

Attachment: CASSANDRA-7606-V2.txt

 Add IF [NOT] EXISTS to CREATE/DROP trigger
 --

 Key: CASSANDRA-7606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7606
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql, docs
 Fix For: 2.1.1

 Attachments: CASSANDRA-7606-V2.txt, CASSANDRA-7606.txt


 All CREATE/DROP statements support IF [NOT] EXISTS - except CREATE/DROP 
 trigger.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/2] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dac984f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dac984f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dac984f1

Branch: refs/heads/cassandra-2.1
Commit: dac984f14644f1b344e58cf7fe21892142ab95f1
Parents: e3a4fba 508db1e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:04:37 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:04:37 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--




git commit: Fix failing unit tests (bad CASSANDRA-7744 merge)

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1.0 35cfa61c5 -> 508db1e9d


Fix failing unit tests (bad CASSANDRA-7744 merge)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/508db1e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/508db1e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/508db1e9

Branch: refs/heads/cassandra-2.1.0
Commit: 508db1e9ddfac1abb49cf29a5e6618fba4aab275
Parents: 35cfa61
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:04:09 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:04:09 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/508db1e9/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 51f9b99..70cd648 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -2020,9 +2020,12 @@ public final class CFMetaData
 
 public CFMetaData rebuild()
 {
+if (isDense == null)
+setDense(isDense(comparator.asAbstractType(), allColumns()));
+
 List<ColumnDefinition> pkCols = nullInitializedList(keyValidator.componentsCount());
 List<ColumnDefinition> ckCols = nullInitializedList(comparator.clusteringPrefixSize());
-// We keep things sorted to get consistent/predicatable order in select queries
+// We keep things sorted to get consistent/predictable order in select queries
 SortedSet<ColumnDefinition> regCols = new TreeSet<>(regularColumnComparator);
 SortedSet<ColumnDefinition> statCols = new TreeSet<>(regularColumnComparator);
 ColumnDefinition compactCol = null;
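
The guard added to rebuild() above is a null-checked lazy derivation: a nullable Boolean flag is computed on demand when the loaded schema did not carry it, before the rest of the rebuild proceeds. A minimal stand-alone sketch of the pattern — names are illustrative, not CFMetaData's real fields:

```java
// Illustrative null-guarded lazy derivation, mirroring the shape of the
// `if (isDense == null) setDense(...)` guard in the diff above.
public class LazyDenseFlag {
    private Boolean isDense;            // null means "not yet computed"
    private final int clusteringColumns;

    LazyDenseFlag(Boolean isDense, int clusteringColumns) {
        this.isDense = isDense;
        this.clusteringColumns = clusteringColumns;
    }

    boolean computeDense() {
        return clusteringColumns > 0;   // stand-in for the real isDense() logic
    }

    void rebuild() {
        if (isDense == null)            // only derive the flag when it is missing
            isDense = computeDense();
    }

    Boolean dense() { return isDense; }

    public static void main(String[] args) {
        LazyDenseFlag f = new LazyDenseFlag(null, 2);
        f.rebuild();
        System.out.println("derived: " + f.dense());
    }
}
```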



[1/2] git commit: Fix failing unit tests (bad CASSANDRA-7744 merge)

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e3a4fba4f -> dac984f14


Fix failing unit tests (bad CASSANDRA-7744 merge)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/508db1e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/508db1e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/508db1e9

Branch: refs/heads/cassandra-2.1
Commit: 508db1e9ddfac1abb49cf29a5e6618fba4aab275
Parents: 35cfa61
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:04:09 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:04:09 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/508db1e9/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 51f9b99..70cd648 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -2020,9 +2020,12 @@ public final class CFMetaData
 
 public CFMetaData rebuild()
 {
+if (isDense == null)
+setDense(isDense(comparator.asAbstractType(), allColumns()));
+
 List<ColumnDefinition> pkCols = nullInitializedList(keyValidator.componentsCount());
 List<ColumnDefinition> ckCols = nullInitializedList(comparator.clusteringPrefixSize());
-// We keep things sorted to get consistent/predicatable order in select queries
+// We keep things sorted to get consistent/predictable order in select queries
 SortedSet<ColumnDefinition> regCols = new TreeSet<>(regularColumnComparator);
 SortedSet<ColumnDefinition> statCols = new TreeSet<>(regularColumnComparator);
 ColumnDefinition compactCol = null;



[1/3] git commit: Fix failing unit tests (bad CASSANDRA-7744 merge)

2014-08-22 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3648549a9 -> b743eabd7


Fix failing unit tests (bad CASSANDRA-7744 merge)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/508db1e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/508db1e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/508db1e9

Branch: refs/heads/trunk
Commit: 508db1e9ddfac1abb49cf29a5e6618fba4aab275
Parents: 35cfa61
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:04:09 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:04:09 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/508db1e9/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 51f9b99..70cd648 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -2020,9 +2020,12 @@ public final class CFMetaData
 
 public CFMetaData rebuild()
 {
+if (isDense == null)
+setDense(isDense(comparator.asAbstractType(), allColumns()));
+
 List<ColumnDefinition> pkCols = nullInitializedList(keyValidator.componentsCount());
 List<ColumnDefinition> ckCols = nullInitializedList(comparator.clusteringPrefixSize());
-// We keep things sorted to get consistent/predicatable order in select queries
+// We keep things sorted to get consistent/predictable order in select queries
 SortedSet<ColumnDefinition> regCols = new TreeSet<>(regularColumnComparator);
 SortedSet<ColumnDefinition> statCols = new TreeSet<>(regularColumnComparator);
 ColumnDefinition compactCol = null;



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b743eabd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b743eabd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b743eabd

Branch: refs/heads/trunk
Commit: b743eabd7a12603b8e6a56cd818355560e5c0cf4
Parents: 3648549 dac984f
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:05:47 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:05:47 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b743eabd/src/java/org/apache/cassandra/config/CFMetaData.java
--



[2/3] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1

2014-08-22 Thread aleksey
Merge branch 'cassandra-2.1.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dac984f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dac984f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dac984f1

Branch: refs/heads/trunk
Commit: dac984f14644f1b344e58cf7fe21892142ab95f1
Parents: e3a4fba 508db1e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Sat Aug 23 01:04:37 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Sat Aug 23 01:04:37 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--




[jira] [Commented] (CASSANDRA-7820) Remove fat client gossip mode

2014-08-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107632#comment-14107632
 ] 

Jonathan Ellis commented on CASSANDRA-7820:
---

/cc [~tjake] for comment because Jason thought he used fat clients at BMC.

 Remove fat client gossip mode
 -

 Key: CASSANDRA-7820
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7820
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Brandon Williams
 Fix For: 3.0


 Now that we support push notifications server -> client, there's no reason to 
 have clients participating in gossip directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7820) Remove fat client gossip mode

2014-08-22 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-7820:
-

 Summary: Remove fat client gossip mode
 Key: CASSANDRA-7820
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7820
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Brandon Williams
 Fix For: 3.0


Now that we support push notifications server -> client, there's no reason to 
have clients participating in gossip directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7256) Error when dropping keyspace.

2014-08-22 Thread Matt Stump (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107640#comment-14107640
 ] 

Matt Stump commented on CASSANDRA-7256:
---

Reproduced at customer site with Cassandra 2.0.7/DSE 4.0.3.

 Error when dropping keyspace.  
 ---

 Key: CASSANDRA-7256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ubuntu 3 nodes (had 3 more in 2nd datacenter but removed 
 it)
Reporter: Steven Lowenthal
Assignee: Aleksey Yeschenko

 created a 3 node datacenter  called existing.
 ran cassandra-stress:
 cassandra-stress -R NetworkTopologyStrategy -O existing:2 -d existing0 -n 
 200 -k
 Added a 2nd datacenter called new with 3 nodes started it with 
 auto_bootstrap: false
 alter keyspace Keyspace1 with replication = 
 {'class':'NetworkTopologyStrategy','existing':2,'new':2};
 I then discovered that cassandra-stress --operation=read failed with 
 LOCAL_QUORUM if a node was down in the local datacenter - this occurred in 
 both, but should not have, so I decided to try again.
 I shut down the new datacenter and removed all 3 nodes.  I then tried to drop 
 the Keyspace1 keyspace.  cqlsh disconnected, and the log shows the error 
 below.
 ERROR [MigrationStage:1] 2014-05-16 23:57:03,085 CassandraDaemon.java (line 
 198) Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IllegalStateException: One row required, 0 found
 at org.apache.cassandra.cql3.UntypedResultSet.one(UntypedResultSet.java:53)
 at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:263)
 at org.apache.cassandra.db.DefsTables.mergeKeyspaces(DefsTables.java:227)
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:182)
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:303)
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: warnings

2014-08-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk b743eabd7 -> 43e113b39


warnings


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43e113b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43e113b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43e113b3

Branch: refs/heads/trunk
Commit: 43e113b39049b71db7706394aefe815e81ac5944
Parents: b743eab
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Aug 22 19:00:27 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Aug 22 19:00:27 2014 -0400

--
 src/java/org/apache/cassandra/db/RangeTombstone.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/43e113b3/src/java/org/apache/cassandra/db/RangeTombstone.java
--
diff --git a/src/java/org/apache/cassandra/db/RangeTombstone.java 
b/src/java/org/apache/cassandra/db/RangeTombstone.java
index 8aff8bb..f986095 100644
--- a/src/java/org/apache/cassandra/db/RangeTombstone.java
+++ b/src/java/org/apache/cassandra/db/RangeTombstone.java
@@ -69,16 +69,16 @@ public class RangeTombstone extends Interval<Composite, DeletionTime> implements
 {
 digest.update(min.toByteBuffer().duplicate());
 digest.update(max.toByteBuffer().duplicate());
-DataOutputBuffer buffer = new DataOutputBuffer();
-try
+
+try (DataOutputBuffer buffer = new DataOutputBuffer())
 {
 buffer.writeLong(data.markedForDeleteAt);
+digest.update(buffer.getData(), 0, buffer.getLength());
 }
 catch (IOException e)
 {
 throw new RuntimeException(e);
 }
-digest.update(buffer.getData(), 0, buffer.getLength());
 }
 
 /**
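
The fix above converts a manually managed DataOutputBuffer into a try-with-resources block and moves the digest update inside the try, so it only runs once the write has succeeded. A sketch of the same shape using JDK stand-ins — MD5 and DataOutputStream here are illustrative choices, not Cassandra's actual classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative try-with-resources refactor: the buffer is closed on every
// path, and the digest update happens inside the try, after a successful write.
public class DigestBufferDemo {
    static byte[] digestLong(long value) {
        try {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (DataOutputStream buffer = new DataOutputStream(bytes)) {
                buffer.writeLong(value);              // serialize the timestamp
                buffer.flush();
                digest.update(bytes.toByteArray());   // only reached if the write succeeded
            }
            return digest.digest();
        } catch (IOException | NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("digest is " + digestLong(42L).length + " bytes");
    }
}
```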



git commit: warnings

2014-08-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 43e113b39 -> 209d1dbd9


warnings


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/209d1dbd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/209d1dbd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/209d1dbd

Branch: refs/heads/trunk
Commit: 209d1dbd980178762019691071c07fb99e098b84
Parents: 43e113b
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Aug 22 19:04:06 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Aug 22 19:04:06 2014 -0400

--
 src/java/org/apache/cassandra/config/Config.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/209d1dbd/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index e2df89f..bb07449 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -263,8 +263,10 @@ public class Config
 
 public static List<String> parseHintedHandoffEnabledDCs(final String dcNames) throws IOException
 {
-final CsvListReader csvListReader = new CsvListReader(new 
StringReader(dcNames), STANDARD_SURROUNDING_SPACES_NEED_QUOTES);
-return csvListReader.read();
+try (final CsvListReader csvListReader = new CsvListReader(new 
StringReader(dcNames), STANDARD_SURROUNDING_SPACES_NEED_QUOTES))
+{
+   return csvListReader.read();
+}
 }
 
 public static enum CommitLogSync



git commit: remove dead local

2014-08-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 209d1dbd9 -> 0ce9abd78


remove dead local


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0ce9abd7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0ce9abd7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0ce9abd7

Branch: refs/heads/trunk
Commit: 0ce9abd78c8712ad4fba28734214b18f617a2aac
Parents: 209d1db
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Aug 22 19:07:15 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Aug 22 19:07:15 2014 -0400

--
 src/java/org/apache/cassandra/config/YamlConfigurationLoader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0ce9abd7/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
--
diff --git a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java 
b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
index 174dd15..50991f2 100644
--- a/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
+++ b/src/java/org/apache/cassandra/config/YamlConfigurationLoader.java
@@ -79,6 +79,7 @@ public class YamlConfigurationLoader implements 
ConfigurationLoader
 return url;
 }
 
+@Override
 public Config loadConfig() throws ConfigurationException
 {
 return loadConfig(getStorageConfigURL());
@@ -86,7 +87,6 @@ public class YamlConfigurationLoader implements 
ConfigurationLoader
 
 public Config loadConfig(URL url) throws ConfigurationException
 {
-InputStream input = null;
 try
 {
 logger.info(Loading settings from {}, url);



[jira] [Updated] (CASSANDRA-7816) Updated the 4.2.6. EVENT section in the binary protocol specification

2014-08-22 Thread Michael Penick (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Penick updated CASSANDRA-7816:
--

Attachment: tcpdump_repeating_status_change.txt

 Updated the 4.2.6. EVENT section in the binary protocol specification
 ---

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation  website
Reporter: Michael Penick
Priority: Trivial
 Attachments: tcpdump_repeating_status_change.txt, trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7816) Updated the 4.2.6. EVENT section in the binary protocol specification

2014-08-22 Thread Michael Penick (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107681#comment-14107681
 ] 

Michael Penick commented on CASSANDRA-7816:
---

It seems to be UP events that mostly repeat, but I've also seen it happen 
with DOWN events. I've reproduced the issue several times with the following: 

Create a two node cluster, with a control connection listening on node1.

{code}
ccm create test -n 2:0 -s -i 127.0.0. -b -v 2.0.9
ccm stop node2
ccm start node2
ccm stop node2
ccm start node2
...
{code}

I've ruled out the driver's code using tcpdump. I've attached a tcpdump file 
showing the case where an UP event has repeated several times. Here's the 
command I used:

{code}
sudo tcpdump -s 0 -i lo0 -X src host 127.0.0.1 and dst host 127.0.0.1 and tcp port 9042 and dst port <the port the driver's connected on>

This will happen after a couple rounds of starting and stopping the node. 
Sorry, this is specific to my driver, but it shows that several unique events 
are sent as a result of "ccm start node2". For some reason C* sends a "node 
DOWN" status change before sending the UP event when the node is started. The 
number of DOWN/UP events varies from run to run.

{code}
[INFO]: ControlConnection: Node '127.0.0.2:9042' is down
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x1033004e0)
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x10360)
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x1003023e0)
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x10020a150)
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x103300230)
[INFO]: ControlConnection: Node '127.0.0.2:9042' is up (0x103300230)
{code}
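
One way a client can tolerate the duplicated events shown in the log above is to keep the last known status per endpoint and suppress transitions that change nothing. This is an illustrative sketch, not any driver's actual API; all names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative client-side guard: ignore an UP or DOWN status-change event
// that does not actually change the node's last known state.
public class StatusChangeDeduplicator {
    private final Map<String, String> lastStatus = new HashMap<>();

    /** Returns true if the event changes known state and should be acted on. */
    public synchronized boolean onStatusChange(String endpoint, String status) {
        String previous = lastStatus.put(endpoint, status);
        return !status.equals(previous);
    }

    public static void main(String[] args) {
        StatusChangeDeduplicator dedup = new StatusChangeDeduplicator();
        System.out.println(dedup.onStatusChange("127.0.0.2:9042", "UP"));   // first UP: act
        System.out.println(dedup.onStatusChange("127.0.0.2:9042", "UP"));   // repeat: ignore
        System.out.println(dedup.onStatusChange("127.0.0.2:9042", "DOWN")); // change: act
    }
}
```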


 Updated the 4.2.6. EVENT section in the binary protocol specification
 ---

 Key: CASSANDRA-7816
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
 Project: Cassandra
  Issue Type: Improvement
  Components: Documentation  website
Reporter: Michael Penick
Priority: Trivial
 Attachments: tcpdump_repeating_status_change.txt, trunk-7816.txt


 Added MOVED_NODE as a possible type of topology change and also specified 
 that it is possible to receive the same event multiple times.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: don't declare throwing exceptions that aren't

2014-08-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0ce9abd78 -> 0a0ee442f


don't declare throwing exceptions that aren't


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0a0ee442
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0a0ee442
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0a0ee442

Branch: refs/heads/trunk
Commit: 0a0ee442fdb13ec8a04c2990635dc389d3ba98a2
Parents: 0ce9abd
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Aug 22 19:24:04 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Aug 22 19:24:04 2014 -0400

--
 src/java/org/apache/cassandra/config/UFMetaData.java   | 1 -
 src/java/org/apache/cassandra/cql3/Tuples.java | 2 +-
 src/java/org/apache/cassandra/cql3/functions/Functions.java| 2 +-
 .../cassandra/cql3/statements/ModificationStatement.java   | 2 +-
 .../org/apache/cassandra/cql3/statements/SelectStatement.java  | 2 +-
 src/java/org/apache/cassandra/cql3/udf/UDFunction.java | 2 +-
 src/java/org/apache/cassandra/db/DefsTables.java   | 2 +-
 src/java/org/apache/cassandra/db/Memtable.java | 3 +--
 .../org/apache/cassandra/db/compaction/CompactionManager.java  | 2 +-
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java | 2 +-
 src/java/org/apache/cassandra/io/sstable/SSTableReader.java| 2 +-
 src/java/org/apache/cassandra/locator/CloudstackSnitch.java| 2 +-
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 6 ++
 src/java/org/apache/cassandra/service/MigrationManager.java| 4 ++--
 src/java/org/apache/cassandra/tools/SSTableLevelResetter.java  | 3 +--
 src/java/org/apache/cassandra/tools/StandaloneSplitter.java| 2 +-
 src/java/org/apache/cassandra/tools/StandaloneUpgrader.java| 2 +-
 17 files changed, 18 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a0ee442/src/java/org/apache/cassandra/config/UFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/UFMetaData.java 
b/src/java/org/apache/cassandra/config/UFMetaData.java
index 18484f3..f11d7ac 100644
--- a/src/java/org/apache/cassandra/config/UFMetaData.java
+++ b/src/java/org/apache/cassandra/config/UFMetaData.java
@@ -208,7 +208,6 @@ public final class UFMetaData
 }
 
 public static Mutation createOrReplaceFunction(long timestamp, UFMetaData f)
-throws ConfigurationException, SyntaxException
 {
 Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, partKey.decompose(f.namespace, f.functionName));
 ColumnFamily cf = mutation.addOrGet(SystemKeyspace.SCHEMA_FUNCTIONS_CF);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a0ee442/src/java/org/apache/cassandra/cql3/Tuples.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Tuples.java 
b/src/java/org/apache/cassandra/cql3/Tuples.java
index f0d7a13..cc04ebc 100644
--- a/src/java/org/apache/cassandra/cql3/Tuples.java
+++ b/src/java/org/apache/cassandra/cql3/Tuples.java
@@ -285,7 +285,7 @@ public class Tuples
 super(bindIndex);
 }
 
-private static ColumnSpecification makeReceiver(List<? extends ColumnSpecification> receivers) throws InvalidRequestException
+private static ColumnSpecification makeReceiver(List<? extends ColumnSpecification> receivers)
 {
 List<AbstractType<?>> types = new ArrayList<>(receivers.size());
 StringBuilder inName = new StringBuilder("(");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a0ee442/src/java/org/apache/cassandra/cql3/functions/Functions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/Functions.java 
b/src/java/org/apache/cassandra/cql3/functions/Functions.java
index 03dd13d..977d242 100644
--- a/src/java/org/apache/cassandra/cql3/functions/Functions.java
+++ b/src/java/org/apache/cassandra/cql3/functions/Functions.java
@@ -137,7 +137,7 @@ public abstract class Functions
 }
 }
 
-private static boolean isValidType(String keyspace, Function fun, List<? extends AssignementTestable> providedArgs, ColumnSpecification receiver) throws InvalidRequestException
+private static boolean isValidType(String keyspace, Function fun, List<? extends AssignementTestable> providedArgs, ColumnSpecification receiver)
 {
 if (!receiver.type.isValueCompatibleWith(fun.returnType()))
 return false;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a0ee442/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java

git commit: imports cleanup

2014-08-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0a0ee442f -> 3ca9576c9


imports cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ca9576c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ca9576c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ca9576c

Branch: refs/heads/trunk
Commit: 3ca9576c91386e920a0a3356fc9fa10e2f9c580b
Parents: 0a0ee44
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Aug 22 19:38:05 2014 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Aug 22 19:38:05 2014 -0400

--
 src/java/org/apache/cassandra/config/UFMetaData.java | 2 --
 .../org/apache/cassandra/cql3/statements/CQL3CasRequest.java | 1 -
 src/java/org/apache/cassandra/db/AtomicBTreeColumns.java | 1 -
 src/java/org/apache/cassandra/db/BufferDecoratedKey.java | 2 --
 src/java/org/apache/cassandra/db/DecoratedKey.java   | 3 ---
 src/java/org/apache/cassandra/db/DeletionInfo.java   | 1 -
 src/java/org/apache/cassandra/db/Memtable.java   | 1 -
 src/java/org/apache/cassandra/db/NativeDecoratedKey.java | 1 -
 src/java/org/apache/cassandra/db/RangeTombstoneList.java | 1 -
 .../org/apache/cassandra/db/compaction/LeveledManifest.java  | 2 --
 .../db/composites/AbstractCompoundCellNameType.java  | 2 --
 src/java/org/apache/cassandra/db/composites/SimpleCType.java | 2 --
 src/java/org/apache/cassandra/db/filter/ExtendedFilter.java  | 3 ---
 .../org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java| 8 
 .../org/apache/cassandra/io/sstable/ColumnNameHelper.java| 1 -
 .../org/apache/cassandra/locator/SimpleSeedProvider.java | 6 --
 src/java/org/apache/cassandra/net/ResponseVerbHandler.java   | 1 -
 src/java/org/apache/cassandra/service/MigrationManager.java  | 1 -
 .../org/apache/cassandra/streaming/StreamTransferTask.java   | 2 --
 src/java/org/apache/cassandra/thrift/CassandraServer.java| 1 -
 .../apache/cassandra/tools/BulkLoadConnectionFactory.java| 2 --
 src/java/org/apache/cassandra/tools/BulkLoader.java  | 1 -
 src/java/org/apache/cassandra/tools/StandaloneSplitter.java  | 1 -
 src/java/org/apache/cassandra/tools/StandaloneUpgrader.java  | 1 -
 src/java/org/apache/cassandra/utils/memory/MemoryUtil.java   | 1 -
 25 files changed, 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ca9576c/src/java/org/apache/cassandra/config/UFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/UFMetaData.java 
b/src/java/org/apache/cassandra/config/UFMetaData.java
index f11d7ac..4d5f6d3 100644
--- a/src/java/org/apache/cassandra/config/UFMetaData.java
+++ b/src/java/org/apache/cassandra/config/UFMetaData.java
@@ -50,9 +50,7 @@ import org.apache.cassandra.db.composites.Composite;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.db.marshal.UTF8Type;
-import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.exceptions.SyntaxException;
 
 /**
  * Defined (and loaded) user functions.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ca9576c/src/java/org/apache/cassandra/cql3/statements/CQL3CasRequest.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/CQL3CasRequest.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasRequest.java
index b04c624..06f80e0 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasRequest.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasRequest.java
@@ -25,7 +25,6 @@ import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.Composite;
 import org.apache.cassandra.db.filter.*;
-import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.service.CASRequest;
 import org.apache.cassandra.utils.Pair;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ca9576c/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java 
b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 2ce4fda..0572c4a 100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@ -42,7 +42,6 @@ import org.apache.cassandra.utils.btree.UpdateFunction;
 import 
