[jira] [Comment Edited] (CASSANDRA-7688) Add data sizing to a system table

2015-02-05 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308699#comment-14308699
 ] 

mck edited comment on CASSANDRA-7688 at 2/6/15 7:46 AM:


{quote}Can you please elaborate on what the idea is behind storing this info in 
a system table?{quote}
I'm still curious about this question, as it wasn't about the removal of thrift 
(that's obvious, although it wasn't obvious that all metadata is only exposed 
via CQL, e.g. ControlConnection.refreshSchema(..)) but about the reasoning for 
backgrounding the computation and the frequency chosen.

{code}ScheduledExecutors.optionalTasks.schedule(runnable, 5, 
TimeUnit.MINUTES);{code}
Why 5 minutes? What's the trade-off here? 
 How do we (everyone) know the computation is expensive enough to warrant 
backgrounding it?
 And that 5 minutes will give us the best throughput (across c* and its 
hadoop/spark jobs)?

a) what about putting metrics around the code in SizeEstimatesRecorder.run() so 
we can get an idea for future adjustments?
(going a step further would be to get updateSizeEstimates() to diff the old 
rows against the new rows and keep a metric on change frequency).

b) what about making the frequency configurable?
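To make (a) and (b) concrete, here is a minimal self-contained sketch; the property name and the printed "metric" are hypothetical illustrations, not anything Cassandra actually exposes:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SizeEstimatesScheduleSketch {
    public static void main(String[] args) throws Exception {
        // (b) hypothetical knob: read the interval from a system property,
        // falling back to the current hard-coded default of 5 minutes
        long intervalMinutes = Long.getLong("size_estimates_interval_minutes", 5L);
        System.out.println("interval=" + intervalMinutes);

        ScheduledExecutorService optionalTasks = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1);
        // (a) wrap the recorder so each run reports its duration
        Runnable timedRecorder = () -> {
            long start = System.nanoTime();
            // ... the actual SizeEstimatesRecorder.run() work would happen here ...
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("run took " + micros + "us"); // a real metric sink would go here
            done.countDown();
        };
        // zero delay only so this sketch finishes immediately; the real call
        // would pass intervalMinutes and TimeUnit.MINUTES
        optionalTasks.schedule(timedRecorder, 0, TimeUnit.MILLISECONDS);
        done.await();
        optionalTasks.shutdown();
    }
}
```

A real version would feed the duration into a Metrics timer rather than stdout, which would also answer the "is it expensive enough to background" question empirically.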


was (Author: michaelsembwever):
{quote}Can you please elaborate on what the idea is behind storing this info in 
a system table?{quote}
I'm still curious about this question, as it wasn't about the removal of thrift 
(that's obvious) but about the reasoning for backgrounding the computation.

{code}ScheduledExecutors.optionalTasks.schedule(runnable, 5, 
TimeUnit.MINUTES);{code}
Why 5 minutes? What's the trade-off here? 
 How do we (everyone) know the computation is expensive enough to warrant 
backgrounding it?
 And that 5 minutes will give us the best throughput (across c* and its 
hadoop/spark jobs)?

a) what about putting metrics around the code in SizeEstimatesRecorder.run() so 
we can get an idea for future adjustments?
(going a step further would be to get updateSizeEstimates() to diff the old 
rows against the new rows and keep a metric on change frequency).

b) what about making the frequency configurable?

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.3

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-02-05 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308699#comment-14308699
 ] 

mck commented on CASSANDRA-7688:


{quote}Can you please elaborate on what the idea is behind storing this info in 
a system table?{quote}
I'm still curious about this question, as it wasn't about the removal of thrift 
(that's obvious) but about the reasoning for backgrounding the computation.

{code}ScheduledExecutors.optionalTasks.schedule(runnable, 5, 
TimeUnit.MINUTES);{code}
Why 5 minutes? What's the trade-off here? 
 How do we (everyone) know the computation is expensive enough to warrant 
backgrounding it?
 And that 5 minutes will give us the best throughput (across c* and its 
hadoop/spark jobs)?

a) what about putting metrics around the code in SizeEstimatesRecorder.run() so 
we can get an idea for future adjustments?
(going a step further would be to get updateSizeEstimates() to diff the old 
rows against the new rows and keep a metric on change frequency).

b) what about making the frequency configurable?

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.3

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.





[jira] [Commented] (CASSANDRA-7272) Add Major Compaction to LCS

2015-02-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308704#comment-14308704
 ] 

Marcus Eriksson commented on CASSANDRA-7272:


The data in that big sstable generated from a major compaction would never be 
compacted. It would contain old data and would most likely block us from 
dropping tombstones in the regular minor compactions. This does not help 100%, 
but it is at least better.
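The gating logic Marcus describes can be sketched in isolation; canDrop and its arguments are illustrative stand-ins, not Cassandra's actual purge check:

```java
public class TombstoneDropSketch {
    // Illustrative gate, not Cassandra's actual purge check: a tombstone
    // may be dropped during compaction only if gc_grace has passed AND no
    // sstable outside the compaction might hold older, shadowed data.
    static boolean canDrop(long tombstoneTimestamp, long gcGraceExpiry, long now,
                           long[] overlappingMinTimestamps) {
        if (now < gcGraceExpiry)
            return false;
        for (long minTs : overlappingMinTimestamps)
            if (minTs < tombstoneTimestamp)
                return false; // older data elsewhere could be resurrected
        return true;
    }

    public static void main(String[] args) {
        // A huge major-compacted sstable holding very old data (minTs=1)
        // blocks dropping a newer tombstone (ts=100) in minor compactions:
        System.out.println("canDrop=" + canDrop(100, 0, 1_000, new long[]{1}));
        // Without that old overlapping data the tombstone could be purged:
        System.out.println("canDrop=" + canDrop(100, 0, 1_000, new long[]{500}));
    }
}
```

This is why one giant never-again-compacted sstable is a problem: it permanently fails the "no older overlapping data" test for every later minor compaction.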

 Add Major Compaction to LCS 
 --

 Key: CASSANDRA-7272
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7272
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: Marcus Eriksson
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 LCS has a number of minor issues (maybe major depending on your perspective).
 LCS is primarily used for wide rows so for instance when you repair data in 
 LCS you end up with a copy of an entire repaired row in L0.  Over time if you 
 repair you end up with multiple copies of a row in L0 - L5.  This can make 
 predicting disk usage confusing.  
 Another issue is cleaning up tombstoned data.  If a tombstone lives in level 
 1 and data for the cell lives in level 5 the data will not be reclaimed from 
 disk until the tombstone reaches level 5.
 I propose we add a major compaction for LCS that forces consolidation of 
 data to level 5 to address these.





[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-02-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308741#comment-14308741
 ] 

Marcus Eriksson commented on CASSANDRA-8535:


It looks to me like this would break leveled compaction (we use switchWriter to 
create a new sstable once we have written sstable_size_in_mb into an sstable).

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: 8535_v1.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}





[jira] [Updated] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8739:
---
Attachment: 0001-8739.patch

What I meant was that when we pick compaction candidates, we make sure that we 
don't cause overlap in the next level by checking the boundaries of the 
currently compacting sstables.

The issue is smaller than I first thought though, since 
DataTracker.getCompacting() actually contains the original SSTR instances which 
don't have their start positions moved.

The only time it fails is if L1 is empty; attaching a patch to properly get the 
compacting L0 sstables.

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as before: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.





[jira] [Created] (CASSANDRA-8744) Ensure SSTableReader.first/last are honoured universally

2015-02-05 Thread Benedict (JIRA)
Benedict created CASSANDRA-8744:
---

 Summary: Ensure SSTableReader.first/last are honoured universally
 Key: CASSANDRA-8744
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8744
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3


Split out from CASSANDRA-8683; we don't honour the first/last properties of an 
sstablereader, and we tend to assume that we do. This can cause problems in LCS 
validation compactions, for instance, where a scanner is assumed to only cover 
the defined range but may return data on either side of that range. In general 
it is simply wasteful not to honour these ranges.





[jira] [Comment Edited] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307172#comment-14307172
 ] 

Benedict edited comment on CASSANDRA-8739 at 2/5/15 12:58 PM:
--

Oh, right. Yes, now you've spelled it out, I can see you were clearly stating 
it before and I was just missing it. (Thanks for bearing with me!)


was (Author: benedict):
Oh, right. Yes, now you've spelled it out, I can see you were clearly stating 
it before and I was just missing it.

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as before: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.





[jira] [Commented] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-05 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307158#comment-14307158
 ] 

Robert Stupp commented on CASSANDRA-8308:
-

I've added {{o.a.c.db.MmapFileTest}} [(source 
code)|https://github.com/snazy/cassandra/blob/8308-post-fix/test/unit/org/apache/cassandra/db/MmapFileTest.java]
 to my branch https://github.com/snazy/cassandra/tree/8308-post-fix - and it 
fails because after {{sun.misc.Cleaner#clean}} the number of mapped buffers 
does not decrease. Could this mean that the file is still in use in Windows?

 Windows: Commitlog access violations on unit tests
 --

 Key: CASSANDRA-8308
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt


 We have four unit tests failing on trunk on Windows, all with 
 FileSystemException's related to the SchemaLoader:
 {noformat}
 [junit] Test 
 org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
 [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
 [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
 {noformat}
 Example error:
 {noformat}
 [junit] Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
 cannot access the file because it is being used by another process.
 [junit]
 [junit] at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 [junit] at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 [junit] at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 [junit] at java.nio.file.Files.delete(Files.java:1079)
 [junit] at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
 {noformat}





[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307159#comment-14307159
 ] 

Benedict commented on CASSANDRA-8739:
-

My question is more about how this then causes problems, if it fails to 
markCompacting the candidates it selects.

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as before: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.





[jira] [Comment Edited] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307164#comment-14307164
 ] 

Marcus Eriksson edited comment on CASSANDRA-8739 at 2/5/15 12:52 PM:
-

we pick a bunch of files for compaction (those are not currently compacting), 
then we make sure that by compacting those into L1, we don't cause any overlap 
in L1. We do that by comparing first/last keys in the candidates with the 
first/last keys of the sstables that are currently compacting.
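That boundary check can be sketched as a plain interval-intersection test; Span and the long keys are illustrative stand-ins for the DecoratedKey first/last fields of SSTableReader:

```java
import java.util.Arrays;
import java.util.List;

public class OverlapCheckSketch {
    // Illustrative stand-in for an sstable's key span; the real check
    // compares SSTableReader.first/last DecoratedKeys.
    static class Span {
        final long first, last;
        Span(long first, long last) { this.first = first; this.last = last; }
    }

    // A candidate set would cause overlap in L1 if its span intersects
    // the span of any currently-compacting sstable whose output lands
    // in the same level.
    static boolean causesOverlap(Span candidate, List<Span> compacting) {
        for (Span s : compacting)
            if (candidate.first <= s.last && s.first <= candidate.last)
                return true;  // intervals intersect
        return false;
    }

    public static void main(String[] args) {
        List<Span> compacting = Arrays.asList(new Span(10, 20), new Span(40, 50));
        System.out.println("overlaps=" + causesOverlap(new Span(15, 30), compacting));
        System.out.println("overlaps=" + causesOverlap(new Span(25, 35), compacting));
    }
}
```

The ticket's bug fits this picture: if the spans fed into the check come from readers whose start positions were already moved, the intersection test runs against the wrong `first` values and can wrongly report "no overlap".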


was (Author: krummas):
we pick a bunch of files for compaction (those are not currently compacting), 
then we make sure that by compacting those into L1, we don't cause any overlap 
in L1. We do that by checking first/last keys in the sstables

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as before: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.





[jira] [Issue Comment Deleted] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-05 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8308:

Comment: was deleted

(was: I've added {{o.a.c.db.MmapFileTest}} [(source 
code)|https://github.com/snazy/cassandra/blob/8308-post-fix/test/unit/org/apache/cassandra/db/MmapFileTest.java]
 to my branch https://github.com/snazy/cassandra/tree/8308-post-fix - and it 
fails because after {{sun.misc.Cleaner#clean}} the number of mapped buffers 
does not decrease. Could this mean that the file is still in use in Windows?)

 Windows: Commitlog access violations on unit tests
 --

 Key: CASSANDRA-8308
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt


 We have four unit tests failing on trunk on Windows, all with 
 FileSystemException's related to the SchemaLoader:
 {noformat}
 [junit] Test 
 org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
 [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
 [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
 {noformat}
 Example error:
 {noformat}
 [junit] Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
 cannot access the file because it is being used by another process.
 [junit]
 [junit] at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 [junit] at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 [junit] at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 [junit] at java.nio.file.Files.delete(Files.java:1079)
 [junit] at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
 {noformat}





[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307172#comment-14307172
 ] 

Benedict commented on CASSANDRA-8739:
-

Oh, right. Yes, now you've spelled it out, I can see you were clearly stating 
it before and I was just missing it.

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as before: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.





cassandra git commit: Write partition size estimates into a system table

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 90144989f -> e60089db0


Write partition size estimates into a system table

patch by Aleksey Yeschenko; reviewed by Piotr Kołaczkowski for
CASSANDRA-7688


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e60089db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e60089db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e60089db

Branch: refs/heads/cassandra-2.1
Commit: e60089db08c7675dd507aa668bff862d437382d0
Parents: 9014498
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:06:56 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:06:56 2015 +0300

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/config/CFMetaData.java |  10 ++
 .../org/apache/cassandra/config/KSMetaData.java |   3 +-
 .../cassandra/db/SizeEstimatesRecorder.java | 121 +++
 .../org/apache/cassandra/db/SystemKeyspace.java |  42 +++
 .../cassandra/service/CassandraDaemon.java  |  12 +-
 6 files changed, 182 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 192939b..959a2de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Write partition size estimates into a system table (CASSANDRA-7688)
  * Upgrade libthrift to 0.9.2 (CASSANDRA-8685)
  * Don't use the shared ref in sstableloader (CASSANDRA-8704)
  * Purge internal prepared statements if related tables or

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index e75abb7..d55d1c0 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -290,6 +290,16 @@ public final class CFMetaData
  + "PRIMARY KEY (id)"
  + ") WITH COMMENT='show all compaction history' AND DEFAULT_TIME_TO_LIVE=604800");
 
+public static final CFMetaData SizeEstimatesCf = compile("CREATE TABLE " + SystemKeyspace.SIZE_ESTIMATES_CF + " ("
+  + "keyspace_name text,"
+  + "table_name text,"
+  + "range_start text,"
+  + "range_end text,"
+  + "mean_partition_size bigint,"
+  + "partitions_count bigint,"
+  + "PRIMARY KEY ((keyspace_name), table_name, range_start, range_end)"
+  + ") WITH COMMENT='per-table primary range size estimates'");
+
 
 public static class SpeculativeRetry
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index 8c99191..22c59ca 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -104,7 +104,8 @@ public final class KSMetaData
 CFMetaData.CompactionLogCf,
 CFMetaData.CompactionHistoryCf,
 CFMetaData.PaxosCf,
-CFMetaData.SSTableActivityCF);
+CFMetaData.SSTableActivityCF,
+CFMetaData.SizeEstimatesCf);
 return new KSMetaData(Keyspace.SYSTEM_KS, LocalStrategy.class, 
Collections.<String, String>emptyMap(), true, cfDefs);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
new file mode 100644

cassandra git commit: Follow-up merge fix

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e70959dea -> 49f1a629d


Follow-up merge fix


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/49f1a629
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/49f1a629
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/49f1a629

Branch: refs/heads/trunk
Commit: 49f1a629d0c79c5aa7bda1b8aadbde343e52ecb6
Parents: e70959d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:34:21 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:34:21 2015 +0300

--
 src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/49f1a629/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index dea5467..1b30db3 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -75,7 +75,10 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 List<SSTableReader> sstables = null;
 Refs<SSTableReader> refs = null;
 while (refs == null)
-refs = 
Refs.tryRef(table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView()));
+{
+sstables = 
table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView());
+refs = Refs.tryRef(sstables);
+}
 
 long partitionsCount, meanPartitionSize;
 try



cassandra git commit: Switch references to Refs

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e60089db0 -> 2d5d30114


Switch references to Refs


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d5d3011
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d5d3011
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d5d3011

Branch: refs/heads/cassandra-2.1
Commit: 2d5d30114107bfb6bb7b6f9571264eef6ad4985f
Parents: e60089d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:41:47 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:41:47 2015 +0300

--
 .../org/apache/cassandra/db/SizeEstimatesRecorder.java  | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d5d3011/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index b739ba5..b7e5715 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.service.MigrationListener;
 import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.Pair;
+import org.apache.cassandra.utils.concurrent.Refs;
 
 /**
  * A very simplistic/crude partition count/size estimator.
@@ -71,8 +72,13 @@ public class SizeEstimatesRecorder extends MigrationListener 
implements Runnable
 for (Range<Token> range : localRanges)
 {
 // filter sstables that have partitions in this range.
-List<SSTableReader> sstables = 
table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView());
-SSTableReader.acquireReferences(sstables);
+List<SSTableReader> sstables = null;
+Refs<SSTableReader> refs = null;
+while (refs == null)
+{
+sstables = 
table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView());
+refs = Refs.tryRef(sstables);
+}
 
             long partitionsCount, meanPartitionSize;
             try
@@ -83,7 +89,7 @@ public class SizeEstimatesRecorder extends MigrationListener implements Runnable
             }
             finally
             {
-                SSTableReader.releaseReferences(sstables);
+                refs.release();
             }
 
             estimates.put(range, Pair.create(partitionsCount, meanPartitionSize));
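The acquire-or-retry loop in the diff above can be sketched generically: the live view may change between reading it and taking references, so the snapshot is re-read until references on every element are acquired. The names below are illustrative stand-ins, not Cassandra's actual classes.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class RetryRef {
    // re-read the view until tryRef succeeds on the whole snapshot
    static <T> List<T> refSnapshot(Supplier<List<T>> view, Predicate<List<T>> tryRef) {
        while (true) {
            List<T> snapshot = view.get();   // may be stale by the time we try to ref it
            if (tryRef.test(snapshot))       // succeeds only if every element could be ref'd
                return snapshot;             // caller must release the refs when done
        }
    }

    public static void main(String[] args) {
        // simulate one failed attempt (view changed under us) followed by success
        int[] attempts = {0};
        List<String> result = refSnapshot(() -> Arrays.asList("sstable-1", "sstable-2"),
                                          s -> ++attempts[0] > 1);
        System.out.println(result.size() + " sstables ref'd after " + attempts[0] + " attempts");
    }
}
```

This is the same shape as the patched loop: a failed `Refs.tryRef` returns null, and the candidate list is recomputed from the (possibly changed) view before retrying.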



[1/2] cassandra git commit: Switch references to Refs

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 49f1a629d - 4231b4e2a


Switch references to Refs


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d5d3011
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d5d3011
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d5d3011

Branch: refs/heads/trunk
Commit: 2d5d30114107bfb6bb7b6f9571264eef6ad4985f
Parents: e60089d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:41:47 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:41:47 2015 +0300

--
 .../org/apache/cassandra/db/SizeEstimatesRecorder.java  | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d5d3011/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index b739ba5..b7e5715 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.service.MigrationListener;
 import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.Pair;
+import org.apache.cassandra.utils.concurrent.Refs;
 
 /**
  * A very simplistic/crude partition count/size estimator.
@@ -71,8 +72,13 @@ public class SizeEstimatesRecorder extends MigrationListener implements Runnable
         for (Range<Token> range : localRanges)
         {
             // filter sstables that have partitions in this range.
-            List<SSTableReader> sstables = table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView());
-            SSTableReader.acquireReferences(sstables);
+            List<SSTableReader> sstables = null;
+            Refs<SSTableReader> refs = null;
+            while (refs == null)
+            {
+                sstables = table.viewFilter(range.toRowBounds()).apply(table.getDataTracker().getView());
+                refs = Refs.tryRef(sstables);
+            }
 
             long partitionsCount, meanPartitionSize;
             try
@@ -83,7 +89,7 @@ public class SizeEstimatesRecorder extends MigrationListener implements Runnable
             }
             finally
             {
-                SSTableReader.releaseReferences(sstables);
+                refs.release();
             }
 
             estimates.put(range, Pair.create(partitionsCount, meanPartitionSize));



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-05 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4231b4e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4231b4e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4231b4e2

Branch: refs/heads/trunk
Commit: 4231b4e2a1be2d4f9107090949896f1ebc58e014
Parents: 49f1a62 2d5d301
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:42:06 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:42:06 2015 +0300

--

--




[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-02-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307164#comment-14307164
 ] 

Marcus Eriksson commented on CASSANDRA-8739:


we pick a bunch of files for compaction (ones that are not currently compacting), 
then we make sure that by compacting those into L1, we don't cause any overlap 
in L1. We do that by checking the first/last keys in the sstables.
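A minimal sketch of the first/last-key overlap check described above (illustrative names, not Cassandra's actual LCS code): each sstable is reduced to its inclusive [first, last] bounds, and a candidate set is rejected if any candidate range intersects a range already in L1.

```java
import java.util.Arrays;
import java.util.List;

public class OverlapCheck {
    // an sstable reduced to its inclusive [first, last] token bounds
    static final class KeyRange {
        final long first, last;
        KeyRange(long first, long last) { this.first = first; this.last = last; }
        boolean intersects(KeyRange o) { return first <= o.last && o.first <= last; }
    }

    // reject a candidate set if compacting it into L1 would overlap existing L1 ranges
    static boolean causesOverlap(List<KeyRange> candidates, List<KeyRange> l1) {
        for (KeyRange c : candidates)
            for (KeyRange existing : l1)
                if (c.intersects(existing))
                    return true;
        return false;
    }

    public static void main(String[] args) {
        List<KeyRange> l1 = Arrays.asList(new KeyRange(0, 100), new KeyRange(200, 300));
        System.out.println(causesOverlap(Arrays.asList(new KeyRange(150, 180)), l1)); // fits in the gap
        System.out.println(causesOverlap(Arrays.asList(new KeyRange(90, 210)), l1));  // spans both ranges
    }
}
```

The bug in this ticket is that the bounds fed to such a check came from sstables whose start positions had already been moved, so the check could pass while the on-disk data still overlapped.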

 Don't check for overlap with sstables that have had their start positions 
 moved in LCS
 --

 Key: CASSANDRA-8739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-8739.patch


 When picking compaction candidates in LCS, we check that we won't cause any 
 overlap in the higher level. The problem is that we compare files that have 
 had their start positions moved, meaning we can cause overlap. We need to also 
 include the tmplink files when checking this.
 Note that in 2.1 overlap is not as big a problem as earlier: if adding an 
 sstable would cause overlap, we send it back to L0 instead, meaning we do a 
 bit more compaction but we never actually have overlap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307176#comment-14307176
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

Ok, +1 then.

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.3

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from the a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8683:

Summary: Ensure early reopening has no overlap with replaced files  (was: 
Ensure early reopening has no overlap with replaced files, and that 
SSTableReader.first/last are honoured universally)

 Ensure early reopening has no overlap with replaced files
 -

 Key: CASSANDRA-8683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3

 Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch


 Incremental repair holds a set of the sstables it started the repair on (we 
 need to know which sstables were actually validated to be able to anticompact 
 them). This includes any tmplink files that existed when the compaction 
 started (if we didn't include those, we would miss data, since we move the 
 start point of the existing non-tmplink files).
 With CASSANDRA-6916 we swap out those instances with new ones 
 (SSTR.cloneWithNewStart / SSTW.openEarly), meaning that the underlying file 
 can get deleted even though we hold a reference.
 This causes the unit test error: 
 http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
 (note that it only fails on trunk; in 2.1 we don't hold references to 
 the repairing files for non-incremental repairs, but the bug should exist in 
 2.1 as well)
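The invariant at stake can be sketched with a minimal reference counter (illustrative only, not Cassandra's SSTableReader): the underlying file may only be deleted when the count reaches zero, so replacing an instance while someone still holds a reference to the old one breaks the contract, as described above.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCounted {
    private final AtomicInteger refs = new AtomicInteger(1); // creator holds the first ref
    private volatile boolean released = false;

    // take an extra reference, unless the resource is already fully released
    boolean tryRef() {
        int c;
        do {
            c = refs.get();
            if (c <= 0) return false; // cannot revive a released resource
        } while (!refs.compareAndSet(c, c + 1));
        return true;
    }

    void release() {
        if (refs.decrementAndGet() == 0)
            released = true; // here the underlying file would be deleted
    }

    boolean isReleased() { return released; }

    public static void main(String[] args) {
        RefCounted r = new RefCounted();
        r.tryRef();                         // a second holder, e.g. the repair session
        r.release();                        // creator releases
        System.out.println(r.isReleased()); // false: repair still holds a ref
        r.release();
        System.out.println(r.isReleased()); // true: last ref gone
    }
}
```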



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-05 Thread aleksey
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/config/CFMetaData.java
src/java/org/apache/cassandra/config/KSMetaData.java
src/java/org/apache/cassandra/db/SystemKeyspace.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e70959de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e70959de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e70959de

Branch: refs/heads/trunk
Commit: e70959dea475928c8d87eea68b9fafb7a5ea0b62
Parents: 0fa19b7 e60089d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:30:47 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:30:47 2015 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/db/SizeEstimatesRecorder.java | 124 +++
 .../org/apache/cassandra/db/SystemKeyspace.java |  56 -
 .../cassandra/service/CassandraDaemon.java  |  12 +-
 4 files changed, 186 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e70959de/CHANGES.txt
--
diff --cc CHANGES.txt
index 61c57a3,959a2de..0aba61a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,63 -1,5 +1,64 @@@
 +3.0
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
  + * Upgrade Metrics library and remove deprecated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Add role based access control (CASSANDRA-7653)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.3
+  * Write 

[1/2] cassandra git commit: Write partition size estimates into a system table

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0fa19b7ce - e70959dea


Write partition size estimates into a system table

patch by Aleksey Yeschenko; reviewed by Piotr Kołaczkowski for
CASSANDRA-7688


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e60089db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e60089db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e60089db

Branch: refs/heads/trunk
Commit: e60089db08c7675dd507aa668bff862d437382d0
Parents: 9014498
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 5 16:06:56 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 5 16:06:56 2015 +0300

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/config/CFMetaData.java |  10 ++
 .../org/apache/cassandra/config/KSMetaData.java |   3 +-
 .../cassandra/db/SizeEstimatesRecorder.java | 121 +++
 .../org/apache/cassandra/db/SystemKeyspace.java |  42 +++
 .../cassandra/service/CassandraDaemon.java  |  12 +-
 6 files changed, 182 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 192939b..959a2de 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Write partition size estimates into a system table (CASSANDRA-7688)
  * Upgrade libthrift to 0.9.2 (CASSANDRA-8685)
  * Don't use the shared ref in sstableloader (CASSANDRA-8704)
  * Purge internal prepared statements if related tables or

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index e75abb7..d55d1c0 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -290,6 +290,16 @@ public final class CFMetaData
                                                              + "PRIMARY KEY (id)"
                                                              + ") WITH COMMENT='show all compaction history' AND DEFAULT_TIME_TO_LIVE=604800");
 
+    public static final CFMetaData SizeEstimatesCf = compile("CREATE TABLE " + SystemKeyspace.SIZE_ESTIMATES_CF + " ("
+                                                             + "keyspace_name text,"
+                                                             + "table_name text,"
+                                                             + "range_start text,"
+                                                             + "range_end text,"
+                                                             + "mean_partition_size bigint,"
+                                                             + "partitions_count bigint,"
+                                                             + "PRIMARY KEY ((keyspace_name), table_name, range_start, range_end)"
+                                                             + ") WITH COMMENT='per-table primary range size estimates'");
+
 
 public static class SpeculativeRetry
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index 8c99191..22c59ca 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -104,7 +104,8 @@ public final class KSMetaData
 CFMetaData.CompactionLogCf,
 CFMetaData.CompactionHistoryCf,
 CFMetaData.PaxosCf,
-CFMetaData.SSTableActivityCF);
+CFMetaData.SSTableActivityCF,
+CFMetaData.SizeEstimatesCf);
         return new KSMetaData(Keyspace.SYSTEM_KS, LocalStrategy.class, Collections.<String, String>emptyMap(), true, cfDefs);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e60089db/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
new file mode 100644
index 

[jira] [Created] (CASSANDRA-8745) Ambiguous WriteTimeoutException during atomic batch execution

2015-02-05 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-8745:
-

 Summary: Ambiguous WriteTimeoutException during atomic batch 
execution
 Key: CASSANDRA-8745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.x
Reporter: Stefan Podkowinski


StorageProxy handles atomic batches in mutateAtomically() the following way:
* syncWriteToBatchlog() - WriteTimeoutException
* syncWriteBatchedMutations() - WriteTimeoutException
* asyncRemoveFromBatchlog()

All WriteTimeoutExceptions from the sync writes are caught and passed to the 
caller. Unfortunately, the caller cannot tell whether the timeout occurred 
while creating/sending the batchlog or while executing the individual batch 
statements.

# Timeout during batchlog creation: the client must retry the operation, or the 
batch might be lost
# Timeout during mutations: the client should not retry, as a new batchlog is 
created on every StorageProxy.mutateAtomically() call while previous batchlogs 
are not deleted. This can have performance implications for large batches on 
stressed-out clusters

There should be a way to tell whether the batchlog was successfully created, so 
we can let the client move on and assume the batch will be executed from the 
batchlog at some point in the future.
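One way to sketch the requested distinction (illustrative only — this is not Cassandra's actual exception hierarchy) is to tag the timeout with the phase it occurred in, so a client can decide whether a retry is safe:

```java
public class BatchTimeouts {
    enum BatchPhase { BATCHLOG_WRITE, MUTATION_WRITE }

    static class PhasedWriteTimeoutException extends RuntimeException {
        final BatchPhase phase;
        PhasedWriteTimeoutException(BatchPhase phase) {
            super("write timeout during " + phase);
            this.phase = phase;
        }
    }

    // retry only when the batchlog itself may be lost; a mutation-phase timeout
    // will eventually be replayed from the batchlog, so resubmitting just
    // creates another batchlog entry
    static boolean shouldClientRetry(PhasedWriteTimeoutException e) {
        return e.phase == BatchPhase.BATCHLOG_WRITE;
    }

    public static void main(String[] args) {
        PhasedWriteTimeoutException e =
            new PhasedWriteTimeoutException(BatchPhase.MUTATION_WRITE);
        System.out.println("retry? " + shouldClientRetry(e));
    }
}
```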

See also CASSANDRA-8672 for similar error handling issue




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8689) Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307404#comment-14307404
 ] 

Benedict commented on CASSANDRA-8689:
-

Can we get a bisect run on this?

 Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]
 ---

 Key: CASSANDRA-8689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8689
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu
Assignee: Benedict
 Fix For: 2.1.3


 After upgrading a 6-node Cassandra cluster from 2.1.0 to 2.1.2, we started 
 getting the following assertion error.
 {noformat}
 ERROR [IndexSummaryManager:1] 2015-01-26 20:55:40,451 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[IndexSummaryManager:1,1,main]
 java.lang.AssertionError: null
 at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getOffHeapSize(IndexSummary.java:192)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getIndexSummaryOffHeapSize(SSTableReader.java:1070)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:292)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:238)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:139)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_45]
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
 [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {noformat}
 The Cassandra service is still running despite the issue. The node has 8G 
 total memory with 2G allocated to the heap. We are basically running read 
 queries to retrieve data out of Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Disable early open compaction

2015-02-05 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 82a8c2372 - 98905809c


Disable early open compaction


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/98905809
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/98905809
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/98905809

Branch: refs/heads/cassandra-2.1
Commit: 98905809c87cc8fef01621bdcfd069d6edc75324
Parents: 82a8c23
Author: T Jake Luciani j...@apache.org
Authored: Thu Feb 5 10:51:22 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Thu Feb 5 10:54:57 2015 -0500

--
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/98905809/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 1dd1688..1ee74e9 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1463,7 +1463,8 @@ public class DatabaseDescriptor
 
 public static int getSSTablePreempiveOpenIntervalInMB()
 {
-return conf.sstable_preemptive_open_interval_in_mb;
+//return conf.sstable_preemptive_open_interval_in_mb;
+return -1;
 }
 
 public static boolean getTrickleFsync()



cassandra git commit: docs and bump version

2015-02-05 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 98905809c - caba0a592


docs and bump version


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/caba0a59
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/caba0a59
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/caba0a59

Branch: refs/heads/cassandra-2.1
Commit: caba0a5920400f912a27dc8e73e887d09c450908
Parents: 9890580
Author: T Jake Luciani j...@apache.org
Authored: Thu Feb 5 11:11:14 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Thu Feb 5 11:11:14 2015 -0500

--
 NEWS.txt  | 2 ++
 build.xml | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/caba0a59/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e344acc..602770c 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -21,6 +21,8 @@ Upgrading
 - Prepending a list to a list collection was erroneously resulting in
   the prepended list being reversed upon insertion.  If you were depending
   on this buggy behavior, note that it has been corrected.
+- Incremental replacement of compacted SSTables has been disabled for this
+  release.
 
 2.1.2
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/caba0a59/build.xml
--
diff --git a/build.xml b/build.xml
index 0aa8c01..eaef534 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="2.1.2"/>
+    <property name="base.version" value="2.1.3"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>



[jira] [Commented] (CASSANDRA-8650) Creation and maintenance of roles should not require superuser status

2015-02-05 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307128#comment-14307128
 ] 

Sam Tunnicliffe commented on CASSANDRA-8650:


I've pushed a [branch|https://github.com/beobal/cassandra/tree/8650-v3] based 
on the v3 patch with further commits addressing your comments

{quote}
RoleResource#toString() needs an @Override annotation
{quote}

Added. There was also some inconsistency in DataResource, where some IResource 
methods had the @Override annotation and others didn't. I've cleaned both classes 
up so that interface methods are not annotated, but those overridden from Object 
are. 

{quote}
does it make any sense to have IAuthorizer#revokeAll(String role) and 
IAuthorizer#revokeAll(IResource resource), now that roles are resources 
themselves?
{quote}

It does, though for the sake of clarity they could use some improvements:

revokeAll(IResource) removes all the permissions granted *on* a resource.
revokeAll(String) revokes all permissions granted *to* a role. 

The former is run whenever an IResource is removed - so originally by an 
AuthMigrationListener following a DROP TABLE or DROP KEYSPACE statement, but 
now also run for the RoleResource during a DROP ROLE.
The latter is only called when DROP ROLE is executed, to tidy up any 
permissions granted to that role. In 
[89a426bf|https://github.com/beobal/cassandra/commit/89a426bfa5a778bfa424c1b5148f8abc289db341]
 I've renamed the methods and changed the arguments to make this distinction 
clearer. 
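The on/to split described above can be sketched with an in-memory stand-in (names and signatures are illustrative, not the actual IAuthorizer API): one method drops everything granted *on* a resource, the other everything granted *to* a role.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Permissions {
    // (role -> (resource -> granted permission names))
    private final Map<String, Map<String, Set<String>>> byRole = new HashMap<>();

    void grant(String role, String resource, String perm) {
        byRole.computeIfAbsent(role, r -> new HashMap<>())
              .computeIfAbsent(resource, x -> new HashSet<>())
              .add(perm);
    }

    // run when a resource (table, keyspace, or now a role) is dropped:
    // removes permissions granted *on* it, for every grantee
    void revokeAllOn(String resource) {
        for (Map<String, Set<String>> grants : byRole.values())
            grants.remove(resource);
    }

    // run on DROP ROLE: tidies up everything granted *to* that role
    void revokeAllFrom(String role) {
        byRole.remove(role);
    }

    int countFor(String role) {
        return byRole.getOrDefault(role, Collections.emptyMap())
                     .values().stream().mapToInt(Set::size).sum();
    }
}
```

A DROP ROLE would then call both: `revokeAllFrom` for the role's own grants, and `revokeAllOn` for permissions other roles hold on it.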
Note that one additional complication is that we use the short form of the role 
name in the role_permissions table, so I've added a method to get that from a 
RoleResource, analogous to DataResource#table() etc.

Doing this makes me wonder about IRoleManager and whether we ought to update it 
to deal with RoleResources rather than simple Strings. Any objections to that?

{quote}
the superuser check in GrantRoleStatement#checkAccess feels redundant to me. 
Just having AUTHORIZE there should be enough. Am I missing something? Same 
question w/ RevokeRoleStatement
{quote}

My reasoning for this was as an additional safety valve to prevent a 
non-superuser from attaining superuser-granting powers unintentionally. For 
instance, running the following as a non-superuser with CREATE & AUTHORIZE on 
the root-level roles resource:

{code}
CREATE ROLE r1 NOSUPERUSER;
CREATE ROLE r2 NOSUPERUSER;
GRANT AUTHORIZE ON ROLE r2 TO r1;
{code}

Then, a superuser may run:
{code}
ALTER ROLE r2 SUPERUSER;
{code}

So r1 now has the ability to grant su powers to any role via r2.

So looking at this now, I've talked myself out of it. A superuser can basically 
do whatever they like; if that means being able to confer su-granting powers to 
non-superusers, then we should let them do it (remembering that "with great 
power comes great responsibility" etc. though). tl;dr I removed the su check 
from the two statements.

{quote}
for clarity, would be nice to rename GrantStatement to GrantPermissionStatement 
(to match GrantRoleStatement)
likewise with RevokeStatement. Neither of these two things have been introduced 
in the patch, but renaming them here kinda makes sense
{quote}

Done, though I made them plural since, unlike with roles, you may well be granting 
multiple permissions at a time - and both classes use a Set<Permission>.


 Creation and maintenance of roles should not require superuser status
 -

 Key: CASSANDRA-8650
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8650
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
  Labels: cql, security
 Fix For: 3.0

 Attachments: 8650-v2.txt, 8650-v3.txt, 8650.txt


 Currently, only roles with superuser status are permitted to 
 create/drop/grant/revoke roles, which violates the principle of least 
 privilege. In addition, in order to run {{ALTER ROLE}} statements a user must 
 log in directly as that role or else be a superuser. This requirement 
 increases the (ab)use of superuser privileges, especially where roles are 
 created without {{LOGIN}} privileges to model groups of permissions granted 
 to individual db users. In this scenario, a superuser is always required if 
 such roles are to be granted and modified.
 We should add more granular permissions to allow administration of roles 
 without requiring superuser status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8741) Running a drain before a decommission apparently the wrong thing to do

2015-02-05 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-8741:
---
Tester: Alan Boudreault

 Running a drain before a decommission apparently the wrong thing to do
 --

 Key: CASSANDRA-8741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8741
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.04; Cassandra 2.0.11.82 (Datastax Enterprise 
 4.5.3)
Reporter: Casey Marshall
Priority: Trivial
  Labels: lhf

 This might simply be a documentation issue. It appears that running nodetool 
 drain before running nodetool decommission is a very wrong thing to do.
 The idea was that I was going to safely shut off writes and flush everything 
 to disk before beginning the decommission. What happens is that the decommission 
 call appears to fail very early on after starting; afterwards, the node 
 in question is stuck in state LEAVING, while all other nodes in the ring see 
 that node as NORMAL, but down. No streams are ever sent from the node being 
 decommissioned to other nodes.
 The drain command does indeed shut down the BatchlogTasks executor 
 (org/apache/cassandra/service/StorageService.java, line 3445 in git tag 
 cassandra-2.0.11) but the decommission process tries using that executor 
 when calling the startBatchlogReplay function 
 (org/apache/cassandra/db/BatchlogManager.java, line 123) called through 
 org.apache.cassandra.service.StorageService.unbootstrap (see the stack trace 
 pasted below).
 This also failed in a similar way on Cassandra 1.2.13-ish (DSE 3.2.4).
 So, either something is wrong with the drain/decommission commands, or it's 
 very wrong to run a drain before a decommission. What's worse, there seems to 
 be no way to recover this node once it is in this state; you need to shut it 
 down and run removenode.
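 The failure mode is reproducible in isolation: submitting to a 
 ScheduledExecutorService after shutdown() throws RejectedExecutionException, 
 which is what decommission hits once drain has terminated the batchlog 
 executor. A minimal sketch (not Cassandra code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DrainThenSubmit {
    public static void main(String[] args) {
        ScheduledExecutorService batchlogTasks = Executors.newScheduledThreadPool(1);
        batchlogTasks.shutdown(); // what drain does to the batchlog executor
        try {
            // what decommission's startBatchlogReplay then attempts
            batchlogTasks.schedule(() -> {}, 0, TimeUnit.SECONDS);
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```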
 My terminal output:
 ubuntu@x:~$ nodetool drain
 ubuntu@x:~$ tail /var/log/^C
 ubuntu@x:~$ nodetool decommission
 Exception in thread main java.util.concurrent.RejectedExecutionException: 
 Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3008fa33 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@1d6242e8[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 52]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:629)
 at 
 org.apache.cassandra.db.BatchlogManager.startBatchlogReplay(BatchlogManager.java:123)
 at 
 org.apache.cassandra.service.StorageService.unbootstrap(StorageService.java:2966)
 at 
 org.apache.cassandra.service.StorageService.decommission(StorageService.java:2934)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 

[1/3] cassandra git commit: nit: SSTableReader.Operator.GE match GT representation

2015-02-05 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b15113411 -> 82a8c2372
  refs/heads/trunk f161318fd -> 91e64231e


nit: SSTableReader.Operator.GE match GT representation


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82a8c237
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82a8c237
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82a8c237

Branch: refs/heads/cassandra-2.1
Commit: 82a8c2372388bb0289a6a57f6c01f9e04172f7c9
Parents: b151134
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 15:54:19 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 15:54:19 2015 +

--
 src/java/org/apache/cassandra/io/sstable/SSTableReader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82a8c237/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index c51b586..f34939a 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -1707,7 +1707,7 @@ public class SSTableReader extends SSTable implements 
RefCounted
 
 final static class GreaterThanOrEqualTo extends Operator
 {
-public int apply(int comparison) { return comparison >= 0 ? 0 : -comparison; }
+public int apply(int comparison) { return comparison >= 0 ? 0 : 1; }
 }
 
 final static class GreaterThan extends Operator
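For readers skimming the archived diff (where the `>=` operators were mangled by extraction), a standalone sketch of the before/after semantics of GreaterThanOrEqualTo.apply — the method names here are hypothetical; only the two return expressions come from the patch:

```java
public class OperatorSemantics
{
    // pre-patch GreaterThanOrEqualTo.apply: a "miss" returned the raw
    // comparison magnitude
    static int geOld(int comparison) { return comparison >= 0 ? 0 : -comparison; }

    // post-patch: a miss is reported uniformly as 1, matching GreaterThan's
    // representation
    static int geNew(int comparison) { return comparison >= 0 ? 0 : 1; }

    public static void main(String[] args)
    {
        System.out.println(geOld(2) + " " + geNew(2));   // 0 0: a hit is 0 either way
        System.out.println(geOld(-3) + " " + geNew(-3)); // 3 1: misses now uniform
    }
}
```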



[2/3] cassandra git commit: nit: SSTableReader.Operator.GE match GT representation

2015-02-05 Thread benedict
nit: SSTableReader.Operator.GE match GT representation


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82a8c237
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82a8c237
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82a8c237

Branch: refs/heads/trunk
Commit: 82a8c2372388bb0289a6a57f6c01f9e04172f7c9
Parents: b151134
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 15:54:19 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 15:54:19 2015 +

--
 src/java/org/apache/cassandra/io/sstable/SSTableReader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82a8c237/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index c51b586..f34939a 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -1707,7 +1707,7 @@ public class SSTableReader extends SSTable implements 
RefCounted
 
 final static class GreaterThanOrEqualTo extends Operator
 {
-public int apply(int comparison) { return comparison >= 0 ? 0 : -comparison; }
+public int apply(int comparison) { return comparison >= 0 ? 0 : 1; }
 }
 
 final static class GreaterThan extends Operator



[jira] [Created] (CASSANDRA-8747) Make SSTableWriter.openEarly behaviour more obvious

2015-02-05 Thread Benedict (JIRA)
Benedict created CASSANDRA-8747:
---

 Summary: Make SSTableWriter.openEarly behaviour more obvious
 Key: CASSANDRA-8747
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8747
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-05 Thread benedict
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91e64231
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91e64231
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91e64231

Branch: refs/heads/trunk
Commit: 91e64231e5d4460cd5b2df04ce024c6a1800a192
Parents: f161318 82a8c23
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 15:54:23 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 15:54:23 2015 +

--
 src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91e64231/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index de65ca7,000..de2bbc6
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@@ -1,1908 -1,0 +1,1908 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * License); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an AS IS BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format;
 +
 +import java.io.*;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +import java.util.concurrent.*;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.concurrent.atomic.AtomicLong;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.base.Predicate;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Ordering;
 +import com.google.common.primitives.Longs;
 +import com.google.common.util.concurrent.RateLimiter;
 +
 +import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
 +import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;
 +import com.clearspring.analytics.stream.cardinality.ICardinality;
 +import org.apache.cassandra.cache.CachingOptions;
 +import org.apache.cassandra.cache.InstrumentingCache;
 +import org.apache.cassandra.cache.KeyCacheKey;
 +import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 +import org.apache.cassandra.concurrent.ScheduledExecutors;
 +import org.apache.cassandra.config.*;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.db.commitlog.ReplayPosition;
 +import org.apache.cassandra.db.composites.CellName;
 +import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.db.index.SecondaryIndex;
 +import org.apache.cassandra.dht.*;
 +import org.apache.cassandra.io.compress.CompressedRandomAccessReader;
 +import org.apache.cassandra.io.compress.CompressedThrottledReader;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.sstable.*;
 +import org.apache.cassandra.io.sstable.metadata.*;
 +import org.apache.cassandra.io.util.*;
 +import org.apache.cassandra.metrics.RestorableMeter;
 +import org.apache.cassandra.metrics.StorageMetrics;
 +import org.apache.cassandra.service.ActiveRepairService;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.*;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +import org.apache.cassandra.utils.concurrent.Ref;
 +import org.apache.cassandra.utils.concurrent.RefCounted;
 +
 +import static 
org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR;
 +
 +/**
 + * SSTableReaders are open()ed by Keyspace.onStart; after that they are 
created by SSTableWriter.renameAndOpen.
 + * Do not re-call open() on existing SSTable files; use the references kept 
by ColumnFamilyStore post-start instead.
 + */
 +public abstract class SSTableReader extends SSTable implements RefCounted
 +{
 +

Git Push Summary

2015-02-05 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.3-tentative [created] 98905809c


[jira] [Comment Edited] (CASSANDRA-7019) Improve tombstone compactions

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307963#comment-14307963
 ] 

Björn Hegerfors edited comment on CASSANDRA-7019 at 2/5/15 8:59 PM:


I posted a related ticket some time ago, CASSANDRA-8359. In particular, the 
side note at the end is essentially this ticket exactly, for DTCS. A solution 
to this ticket may or may not solve the main issue in that ticket, but that's a 
matter for that ticket.

Since DTCS SSTables are (supposed to be) separated into time windows, we have 
the concept of an _oldest_ SSTable in a way that we don't with STCS. To me it 
seems pretty clear that a multi-SSTable tombstone compaction on _n_ SSTables 
should always target the _n_ oldest ones. The oldest one alone is practically 
guaranteed to overlap with any other SSTable, in terms of tokens. So picking 
the right SSTables for multi-tombstone compaction should be as easy as sorting 
by age (min timestamp), taking the oldest one, and include the newer ones in 
succession, checking at which point the tombstone ratio is the highest. Or 
something close to that, anyway. Then we might as well write them back as a 
single SSTable, I don't see why not.
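The selection procedure described above can be sketched roughly as follows — the Stats class, its field names, and the use of a simple average for the tombstone ratio are my assumptions for illustration, not DTCS code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TombstoneCandidates
{
    // minimal stand-in for the per-sstable stats we need (names are mine)
    static class Stats
    {
        final long minTimestamp;
        final double tombstoneRatio;

        Stats(long minTimestamp, double tombstoneRatio)
        {
            this.minTimestamp = minTimestamp;
            this.tombstoneRatio = tombstoneRatio;
        }
    }

    // sort by age (min timestamp), start from the oldest sstable, grow the
    // prefix with newer ones in succession, and keep the prefix whose
    // average tombstone ratio is highest
    static List<Stats> pickOldestPrefix(List<Stats> sstables)
    {
        List<Stats> sorted = new ArrayList<>(sstables);
        sorted.sort(Comparator.comparingLong(s -> s.minTimestamp));

        double bestRatio = -1;
        int bestLen = 0;
        double sum = 0;
        for (int i = 0; i < sorted.size(); i++)
        {
            sum += sorted.get(i).tombstoneRatio;
            double avg = sum / (i + 1);
            if (avg > bestRatio)
            {
                bestRatio = avg;
                bestLen = i + 1;
            }
        }
        return sorted.subList(0, bestLen);
    }
}
```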

EDIT: moved the following to CASSANDRA-7272, where it belongs.

-As for the STCS case, I don't understand why major compaction for STCS isn't 
already optimal. I do see why one might want to compact some but not all 
SSTables in a multi-tombstone compaction (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I 
wrong?-

-The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:-
-1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.-
-2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.-


was (Author: bj0rn):
I posted a related ticket some time ago, CASSANDRA-8359. In particular, the 
side note at the end is essentially this ticket exactly, for DTCS. A solution 
to this ticket may or may not solve the main issue in that ticket, but that's a 
matter for that ticket.

Since DTCS SSTables are (supposed to be) separated into time windows, we have 
the concept of an _oldest_ SSTable in a way that we don't with STCS. To me it 
seems pretty clear that a multi-SSTable tombstone compaction on _n_ SSTables 
should always target the _n_ oldest ones. The oldest one alone is practically 
guaranteed to overlap with any other SSTable, in terms of tokens. So picking 
the right SSTables for multi-tombstone compaction should be as easy as sorting 
by age (min timestamp), taking the oldest one, and include the newer ones in 
succession, checking at which point the tombstone ratio is the highest. Or 
something close to that, anyway. Then we might as well write them back as a 
single SSTable, I don't see why not.

EDIT: moved all of the below to CASSANDRA-7272, where it belongs.

-As for the STCS case, I don't understand why major compaction for STCS isn't 
already optimal. I do see why one might want to compact some but not all 
SSTables in a multi-tombstone compaction (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I 
wrong?-

-The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:-
-1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.-
-2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.-

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Branimir Lambov
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction 

[jira] [Commented] (CASSANDRA-7272) Add Major Compaction to LCS

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307969#comment-14307969
 ] 

Björn Hegerfors commented on CASSANDRA-7272:


I don't understand why major compaction for STCS isn't already optimal. I do 
see why one might want to compact some but not all SSTables in a 
multi-tombstone compaction (CASSANDRA-7019) (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I wrong?

The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:
1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.
2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.

 Add Major Compaction to LCS 
 --

 Key: CASSANDRA-7272
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7272
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: Marcus Eriksson
Priority: Minor
  Labels: compaction
 Fix For: 3.0


 LCS has a number of minor issues (maybe major depending on your perspective).
 LCS is primarily used for wide rows so for instance when you repair data in 
 LCS you end up with a copy of an entire repaired row in L0.  Over time if you 
 repair you end up with multiple copies of a row in L0 - L5.  This can make 
 predicting disk usage confusing.  
 Another issue is cleaning up tombstoned data.  If a tombstone lives in level 
 1 and data for the cell lives in level 5 the data will not be reclaimed from 
 disk until the tombstone reaches level 5.
 I propose we add a major compaction for LCS that forces consolidation of 
 data to level 5 to address these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7019) Improve tombstone compactions

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307963#comment-14307963
 ] 

Björn Hegerfors edited comment on CASSANDRA-7019 at 2/5/15 8:58 PM:


I posted a related ticket some time ago, CASSANDRA-8359. In particular, the 
side note at the end is essentially this ticket exactly, for DTCS. A solution 
to this ticket may or may not solve the main issue in that ticket, but that's a 
matter for that ticket.

Since DTCS SSTables are (supposed to be) separated into time windows, we have 
the concept of an _oldest_ SSTable in a way that we don't with STCS. To me it 
seems pretty clear that a multi-SSTable tombstone compaction on _n_ SSTables 
should always target the _n_ oldest ones. The oldest one alone is practically 
guaranteed to overlap with any other SSTable, in terms of tokens. So picking 
the right SSTables for multi-tombstone compaction should be as easy as sorting 
by age (min timestamp), taking the oldest one, and include the newer ones in 
succession, checking at which point the tombstone ratio is the highest. Or 
something close to that, anyway. Then we might as well write them back as a 
single SSTable, I don't see why not.

EDIT: moved all of the below to CASSANDRA-7272, where it belongs.

-As for the STCS case, I don't understand why major compaction for STCS isn't 
already optimal. I do see why one might want to compact some but not all 
SSTables in a multi-tombstone compaction (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I 
wrong?-

-The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:-
-1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.-
-2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.-


was (Author: bj0rn):
I posted a related ticket some time ago, CASSANDRA-8359. In particular, the 
side note at the end is essentially this ticket exactly, for DTCS. A solution 
to this ticket may or may not solve the main issue in that ticket, but that's a 
matter for that ticket.

Since DTCS SSTables are (supposed to be) separated into time windows, we have 
the concept of an _oldest_ SSTable in a way that we don't with STCS. To me it 
seems pretty clear that a multi-SSTable tombstone compaction on _n_ SSTables 
should always target the _n_ oldest ones. The oldest one alone is practically 
guaranteed to overlap with any other SSTable, in terms of tokens. So picking 
the right SSTables for multi-tombstone compaction should be as easy as sorting 
by age (min timestamp), taking the oldest one, and include the newer ones in 
succession, checking at which point the tombstone ratio is the highest. Or 
something close to that, anyway. Then we might as well write them back as a 
single SSTable, I don't see why not.

As for the STCS case, I don't understand why major compaction for STCS isn't 
already optimal. I do see why one might want to compact some but not all 
SSTables in a multi-tombstone compaction (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I wrong?

The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:
1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.
2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Branimir Lambov
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this 

[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307963#comment-14307963
 ] 

Björn Hegerfors commented on CASSANDRA-7019:


I posted a related ticket some time ago, CASSANDRA-8359. In particular, the 
side note at the end is essentially this ticket exactly, for DTCS. A solution 
to this ticket may or may not solve the main issue in that ticket, but that's a 
matter for that ticket.

Since DTCS SSTables are (supposed to be) separated into time windows, we have 
the concept of an _oldest_ SSTable in a way that we don't with STCS. To me it 
seems pretty clear that a multi-SSTable tombstone compaction on _n_ SSTables 
should always target the _n_ oldest ones. The oldest one alone is practically 
guaranteed to overlap with any other SSTable, in terms of tokens. So picking 
the right SSTables for multi-tombstone compaction should be as easy as sorting 
by age (min timestamp), taking the oldest one, and include the newer ones in 
succession, checking at which point the tombstone ratio is the highest. Or 
something close to that, anyway. Then we might as well write them back as a 
single SSTable, I don't see why not.

As for the STCS case, I don't understand why major compaction for STCS isn't 
already optimal. I do see why one might want to compact some but not all 
SSTables in a multi-tombstone compaction (though DTCS should be a better fit 
for anyone wanting this). But if every single SSTable is being rewritten to 
disk, why not write them into one file? As far as I understand, the ultimate 
goal of STCS is to be one SSTable. STCS only gets there, the natural way, once 
in a blue moon. But that's the most optimal state that it can be in. Am I wrong?

The only explanation I can see for splitting the result of compacting all 
SSTables into fragments, is if those fragments are:
1. Partitioned smartly. For example into separate token ranges (à la LCS), 
timestamp ranges (à la DTCS) or clustering column ranges (which would be 
interesting). Or a combination of these.
2. The structure upheld by the resulting fragments is not subsequently 
demolished by the running compaction strategy going on with its usual business.

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Branimir Lambov
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8751) C* should always listen to both ssl/non-ssl ports

2015-02-05 Thread Minh Do (JIRA)
Minh Do created CASSANDRA-8751:
--

 Summary: C* should always listen to both ssl/non-ssl ports
 Key: CASSANDRA-8751
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8751
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Minh Do
Assignee: Minh Do
Priority: Critical


Since there is always one thread dedicated to each server socket listener, and it 
does not use much resource, we should always have both listeners up no 
matter what users set for internode_encryption.

The reason behind this is that we need to switch back and forth between 
different internode_encryption modes and we need C* servers to keep running in 
transient state or during mode switching.  Currently this is not possible.

For example, we have a internode_encryption=dc cluster in a multi-region AWS 
environment and want to set internode_encryption=all by rolling restart C* 
nodes.  However, the node with internode_encryption=all does not listen on the 
non-ssl port.  As a result, we have a split-brain cluster here.
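A toy sketch of the proposal — binding both listeners unconditionally — using plain ServerSockets; in real code the second would come from an SSLServerSocketFactory, and none of this mirrors Cassandra's actual MessagingService wiring:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class DualListeners
{
    // bind both the non-ssl and ssl server sockets regardless of the
    // configured internode_encryption mode (ssl handshake wiring omitted)
    static ServerSocket[] bindBoth(int plainPort, int sslPort) throws IOException
    {
        ServerSocket plain = new ServerSocket(plainPort);
        ServerSocket ssl = new ServerSocket(sslPort);
        return new ServerSocket[] { plain, ssl };
    }

    public static void main(String[] args) throws IOException
    {
        ServerSocket[] both = bindBoth(0, 0); // port 0 picks any free port
        System.out.println(both[0].getLocalPort() != both[1].getLocalPort()); // prints true
        for (ServerSocket s : both)
            s.close();
    }
}
```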



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8689) Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308004#comment-14308004
 ] 

Benedict commented on CASSANDRA-8689:
-

Patch available 
[here|https://github.com/belliottsmith/cassandra/tree/8689-racemarkcompact]

This converts the Set of live sstables in DataTracker.View to a Map, so that we 
can easily perform identity checks as well as equality checks. When marking 
compacting, we now indicate if we expect the sstables to be present (by default 
we do), and we then check that not only are they all present in the live set, 
but that the exact instance present is the one we made our decision to compact 
against.
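The Set-to-Map conversion enables exactly this kind of identity check; a toy analogue (class and method names hypothetical, not the DataTracker.View API):

```java
import java.util.HashMap;
import java.util.Map;

public class LiveSet<T>
{
    // map each element to itself so that a candidate can be resolved to the
    // canonical live instance and compared by identity, not just equality
    private final Map<T, T> live = new HashMap<>();

    public void add(T t)
    {
        live.put(t, t);
    }

    // true only if an equal element is present AND it is this exact instance
    // (the one the compaction decision was made against)
    public boolean containsSameInstance(T candidate)
    {
        return live.get(candidate) == candidate;
    }
}
```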

 Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]
 ---

 Key: CASSANDRA-8689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8689
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu
Assignee: Benedict
 Fix For: 2.1.3


 After upgrading a 6 nodes cassandra from 2.1.0 to 2.1.2, start getting the 
 following assertion error.
 {noformat}
 ERROR [IndexSummaryManager:1] 2015-01-26 20:55:40,451 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[IndexSummaryManager:1,1,main]
 java.lang.AssertionError: null
 at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getOffHeapSize(IndexSummary.java:192)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getIndexSummaryOffHeapSize(SSTableReader.java:1070)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:292)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:238)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:139)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_45]
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
 [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {noformat}
 cassandra service is still running despite the issue. Node has total 8G 
 memory with 2G allocated to heap. We are basically running read queries to 
 retrieve data out of cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8750) Ensure SSTableReader.last corresponds exactly the file end

2015-02-05 Thread Benedict (JIRA)
Benedict created CASSANDRA-8750:
---

 Summary: Ensure SSTableReader.last corresponds exactly the file end
 Key: CASSANDRA-8750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8750
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4


Following on from CASSANDRA-8744, CASSANDRA-8749 and CASSANDRA-8747, this patch 
attempts to make the whole opening early of compaction results more robust and 
with more clearly understood behaviour. The improvements of CASSANDRA-8747 
permit us to easily align the last key with a summary boundary, and an index 
and data file end position. This patch modifies SegmentedFile to permit the 
provision of an explicit length, which is then provided to any readers, which 
enforce it, ensuring no code may accidentally see an end inconsistent with the 
one advertised. 
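A toy analogue of the enforcement idea: a reader that refuses to expose bytes past an explicitly advertised length, regardless of the underlying file's actual size (names are hypothetical; the real change lives in SegmentedFile and its readers):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class LengthBoundedReader extends InputStream
{
    private final InputStream in;
    private long remaining;

    public LengthBoundedReader(InputStream in, long advertisedLength)
    {
        this.in = in;
        this.remaining = advertisedLength;
    }

    @Override
    public int read() throws IOException
    {
        if (remaining <= 0)
            return -1; // advertised end reached, even if more bytes exist
        int b = in.read();
        if (b != -1)
            remaining--;
        return b;
    }

    public static void main(String[] args) throws IOException
    {
        // underlying "file" has 10 bytes, but we advertise only 4
        LengthBoundedReader r = new LengthBoundedReader(new ByteArrayInputStream(new byte[10]), 4);
        int n = 0;
        while (r.read() != -1)
            n++;
        System.out.println(n); // prints 4
    }
}
```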



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8750) Ensure SSTableReader.last corresponds exactly with the file end

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8750:

Summary: Ensure SSTableReader.last corresponds exactly with the file end  
(was: Ensure SSTableReader.last corresponds exactly the file end)

 Ensure SSTableReader.last corresponds exactly with the file end
 ---

 Key: CASSANDRA-8750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8750
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4


 Following on from CASSANDRA-8744, CASSANDRA-8749 and CASSANDRA-8747, this 
 patch attempts to make the whole opening early of compaction results more 
 robust and with more clearly understood behaviour. The improvements of 
 CASSANDRA-8747 permit us to easily align the last key with a summary 
 boundary, and an index and data file end position. This patch modifies 
 SegmentedFile to permit the provision of an explicit length, which is then 
 provided to any readers, which enforce it, ensuring no code may accidentally 
 see an end inconsistent with the one advertised. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8406) Add option to set max_sstable_age in seconds in DTCS

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308037#comment-14308037
 ] 

Björn Hegerfors commented on CASSANDRA-8406:


[~krummas] well, we did try max_sstable_age_days=1 in a new table just the 
other day. This table has a default_time_to_live of 1 day, and the hope was 
that SSTables older than that would just disappear. That's not the case, and it 
may be timely for me to address this ticket now: CASSANDRA-8359. With that 
fixed, I think that having max_sstable_age equal to the default_time_to_live of 
the table is the obviously right setting, giving very efficient behavior. Why 
compact something that's about to go away? If it works as well as I hope, the 
next logical step might be to set max_sstable_age automatically when 
default_time_to_live is set. And if someone wants to use a default_time_to_live 
of less than 1 day, I don't see why they shouldn't be able to use a 
max_sstable_age of less than 1 day. So I do see a case for max_sstable_age 
being less than a day. And for something like this, having the setting in 
seconds seems most appropriate. But the floating point solution is probably a 
viable compromise.

I suppose the main argument for having the setting in seconds is: why is 
default_time_to_live in seconds when max_sstable_age is not?
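For illustration, the age cutoff discussed above reduces to a simple timestamp comparison once both settings are in seconds. This is a hedged sketch with hypothetical names, not Cassandra's actual DTCS code:

```java
// Sketch: deciding whether an sstable is past max_sstable_age, with the
// cutoff expressed in seconds so it can line up with default_time_to_live.
public class MaxAgeCheck
{
    /**
     * @param maxSSTableAgeSeconds the configured cutoff, in seconds
     * @param maxTimestampMicros   the newest cell timestamp in the sstable (microseconds)
     * @param nowMicros            current time in microseconds
     * @return true if even the newest data in the sstable is older than the
     *         cutoff, i.e. DTCS should stop considering it for compaction
     */
    public static boolean pastMaxAge(long maxSSTableAgeSeconds, long maxTimestampMicros, long nowMicros)
    {
        long cutoffMicros = nowMicros - maxSSTableAgeSeconds * 1_000_000L;
        return maxTimestampMicros < cutoffMicros;
    }

    public static void main(String[] args)
    {
        long now = 10_000_000_000_000L;          // arbitrary "now" in microseconds
        long day = 86_400L;                      // one day, in seconds
        // an sstable whose newest cell is 2 days old, with a 1-day cutoff:
        System.out.println(pastMaxAge(day, now - 2 * day * 1_000_000L, now)); // true
        // an sstable written 1 hour ago:
        System.out.println(pastMaxAge(day, now - 3_600L * 1_000_000L, now));  // false
    }
}
```

With a seconds-based setting, values like a 1-day TTL table's cutoff and sub-day cutoffs both fall out of the same arithmetic, with no unit conversion edge cases.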

 Add option to set max_sstable_age in seconds in DTCS
 

 Key: CASSANDRA-8406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8406
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.0.13

 Attachments: 0001-8406.patch, 0001-patch.patch


 Using days as the unit for max_sstable_age in DTCS might be too much, add 
 option to set it in seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8744) Ensure SSTableReader.first/last are honoured universally

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8744:

Fix Version/s: (was: 2.1.3)
   2.1.4

 Ensure SSTableReader.first/last are honoured universally
 

 Key: CASSANDRA-8744
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8744
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.4


 Split out from CASSANDRA-8683; we don't honour the first/last properties of 
 an sstablereader, and we tend to assume that we do. This can cause problems 
 in LCS validation compactions, for instance, where a scanner is assumed to 
 only cover the defined range, but may return data either side of that range. 
 In general it is only wasteful to not honour these ranges anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8275) Some queries with multicolumn relation do not behave properly when secondary index is used

2015-02-05 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8275:
--
Attachment: CASSANDRA-8275-trunk-V2.txt
CASSANDRA-8275-2.1-V2.txt
CASSANDRA-8275-2.0-V2.txt

The previous patch for 2.0 was the only one affected by the regression problem. 
I added some explanations to all the other patches.
I have run the dtests and did not see any other errors related to the patches.

 Some queries with multicolumn relation do not behave properly when secondary 
 index is used
 --

 Key: CASSANDRA-8275
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8275
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
 Fix For: 3.0, 2.1.3, 2.0.13

 Attachments: CASSANDRA-8275-2.0-V2.txt, CASSANDRA-8275-2.0.txt, 
 CASSANDRA-8275-2.1-V2.txt, CASSANDRA-8275-2.1.txt, 
 CASSANDRA-8275-trunk-V2.txt, CASSANDRA-8275-trunk.txt


 In the case where we perform a select using a multicolumn relation over 
 multiple columns that use a secondary index, the error message returned is 
 wrong.
 The following unit test can be used to reproduce the problem:
 {code}
 @Test
 public void testMultipleClusteringWithIndex() throws Throwable
 {
     createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a, b, c, d))");
     createIndex("CREATE INDEX ON %s (b)");

     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 0, 0, 0);
     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 0, 1, 0);
     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 0, 1, 1);
     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 1, 0, 0);
     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 1, 1, 0);
     execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?)", 0, 1, 1, 1);

     assertRows(execute("SELECT * FROM %s WHERE (b) = (?)", 1),
                row(0, 1, 0, 0),
                row(0, 1, 1, 0),
                row(0, 1, 1, 1));

     assertRows(execute("SELECT * FROM %s WHERE (b, c) = (?, ?) ALLOW FILTERING", 1, 1),
                row(0, 1, 1, 0),
                row(0, 1, 1, 1));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8744) Ensure SSTableReader.first/last are honoured universally

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307796#comment-14307796
 ] 

Benedict commented on CASSANDRA-8744:
-

Patch available 
[here|https://github.com/belliottsmith/cassandra/commits/8744-honourfirstlast]

 Ensure SSTableReader.first/last are honoured universally
 

 Key: CASSANDRA-8744
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8744
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3


 Split out from CASSANDRA-8683; we don't honour the first/last properties of 
 an sstablereader, and we tend to assume that we do. This can cause problems 
 in LCS validation compactions, for instance, where a scanner is assumed to 
 only cover the defined range, but may return data either side of that range. 
 In general it is only wasteful to not honour these ranges anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307797#comment-14307797
 ] 

Benedict commented on CASSANDRA-8683:
-

Patch available [here|https://github.com/belliottsmith/cassandra/commits/8683]

 Ensure early reopening has no overlap with replaced files
 -

 Key: CASSANDRA-8683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3

 Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch


 Incremental repair holds a set of the sstables it started the repair on (we 
 need to know which sstables were actually validated to be able to anticompact 
 them). This includes any tmplink files that existed when the compaction 
 started (if we didn't include those, we would miss data, since we move the 
 start point of the existing non-tmplink files).
 With CASSANDRA-6916 we swap out those instances with new ones 
 (SSTR.cloneWithNewStart / SSTW.openEarly), meaning that the underlying file 
 can get deleted even though we hold a reference.
 This causes the unit test error: 
 http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
 (note that it only fails on trunk though, in 2.1 we don't hold references to 
 the repairing files for non-incremental repairs, but the bug should exist in 
 2.1 as well)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8748) Backport memory leak fix from CASSANDRA-8707 to 2.0

2015-02-05 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-8748:
---

 Summary: Backport memory leak fix from CASSANDRA-8707 to 2.0
 Key: CASSANDRA-8748
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8748
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremy Hanna
Assignee: Benedict


There are multiple elements in CASSANDRA-8707 but the memory leak is common to 
Cassandra 2.0.  This ticket is to fix the memory leak specifically for 2.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8712) Out-of-sync secondary index

2015-02-05 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306835#comment-14306835
 ] 

mlowicki commented on CASSANDRA-8712:
-

[~slebresne] I don't have repro steps yet. What I've found on our production, 
though, is that the index always (17340/17340 cases) returns a superset of what 
we get from the table directly without the index. After reading 
www.datastax.com/dev/blog/improving-secondary-index-write-performance-in-1-2 I 
would suspect there is a problem with removing stale items from the index. 
What do you think? Should {{rebuild_index}} help with such an issue, or does it 
just re-add missing items without removing old ones?

 Out-of-sync secondary index
 ---

 Key: CASSANDRA-8712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8712
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.2
Reporter: mlowicki
 Fix For: 2.1.3


 I have such a table with an index:
 {code}
 CREATE TABLE entity (
 user_id text,
 data_type_id int,
 version bigint,
 id text,
 cache_guid text,
 client_defined_unique_tag text,
 ctime timestamp,
 deleted boolean,
 folder boolean,
 mtime timestamp,
 name text,
 originator_client_item_id text,
 parent_id text,
 position blob,
 server_defined_unique_tag text,
 specifics blob,
 PRIMARY KEY (user_id, data_type_id, version, id)
 ) WITH CLUSTERING ORDER BY (data_type_id ASC, version ASC, id ASC)
 AND bloom_filter_fp_chance = 0.01
  AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 CREATE INDEX index_entity_parent_id ON entity (parent_id);
 {code}
 It turned out that the index became out of sync:
 {code}
 >>> Entity.objects.filter(user_id='255824802', parent_id=parent_id).consistency(6).count()
 16
 >>> counter = 0
 >>> for e in Entity.objects.filter(user_id='255824802'):
 ...     if e.parent_id and e.parent_id == parent_id:
 ...         counter += 1
 ...
 >>> counter
 {code}
 After a couple of hours it was fine (at night), but then, when the user 
 probably started to interact with the DB, we got the same problem. As a 
 temporary solution we'll try to rebuild indexes from time to time, as 
 suggested in 
 http://dev.nuclearrooster.com/2013/01/20/using-nodetool-to-rebuild-secondary-indexes-in-cassandra/
 We launched a simple script to check for this anomaly; before rebuilding the 
 index, 10378 out of 4024856 folders had the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: remove extraneous Range.normalize() in SSTableScanner

2015-02-05 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4231b4e2a -> f161318fd


remove extraneous Range.normalize() in SSTableScanner


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1511341
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1511341
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1511341

Branch: refs/heads/trunk
Commit: b15113411a6efa725d3f420a7e0f6bc796aa9780
Parents: 2d5d301
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 14:32:54 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 14:32:54 2015 +

--
 src/java/org/apache/cassandra/io/sstable/SSTableScanner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1511341/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
index dc065af..676f87d 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
@@ -59,7 +59,7 @@ public class SSTableScanner implements ISSTableScanner
 public static ISSTableScanner getScanner(SSTableReader sstable, Collection<Range<Token>> tokenRanges, RateLimiter limiter)
 {
 // We want to avoid allocating a SSTableScanner if the range don't 
overlap the sstable (#5249)
-        List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(Range.normalize(tokenRanges));
+        List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(tokenRanges);
 if (positions.isEmpty())
 return new EmptySSTableScanner(sstable.getFilename());
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-02-05 Thread benedict
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f161318f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f161318f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f161318f

Branch: refs/heads/trunk
Commit: f161318fdc64a3a2ead6d25a650c51eb010cd4a5
Parents: 4231b4e b151134
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 14:33:02 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 14:33:02 2015 +

--
 .../apache/cassandra/io/sstable/format/big/BigTableScanner.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f161318f/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
index 85bc37d,000..1e187ff
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java
@@@ -1,350 -1,0 +1,350 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * License); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an AS IS BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format.big;
 +
 +import java.io.IOException;
 +import java.util.*;
 +
 +import com.google.common.collect.AbstractIterator;
 +import com.google.common.util.concurrent.RateLimiter;
 +
 +import org.apache.cassandra.db.DataRange;
 +import org.apache.cassandra.db.DecoratedKey;
 +import org.apache.cassandra.db.RowIndexEntry;
 +import org.apache.cassandra.db.RowPosition;
 +import org.apache.cassandra.db.columniterator.IColumnIteratorFactory;
 +import org.apache.cassandra.db.columniterator.LazyColumnIterator;
 +import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 +import org.apache.cassandra.dht.AbstractBounds;
 +import org.apache.cassandra.dht.Bounds;
 +import org.apache.cassandra.dht.Range;
 +import org.apache.cassandra.dht.Token;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +import org.apache.cassandra.io.sstable.ISSTableScanner;
 +import org.apache.cassandra.io.sstable.SSTableIdentityIterator;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.io.util.RandomAccessReader;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +import org.apache.cassandra.utils.Pair;
 +
 +public class BigTableScanner implements ISSTableScanner
 +{
 +protected final RandomAccessReader dfile;
 +protected final RandomAccessReader ifile;
 +public final SSTableReader sstable;
 +
 +private final Iterator<AbstractBounds<RowPosition>> rangeIterator;
 +private AbstractBounds<RowPosition> currentRange;
 +
 +private final DataRange dataRange;
 +private final RowIndexEntry.IndexSerializer rowIndexEntrySerializer;
 +
 +protected Iterator<OnDiskAtomIterator> iterator;
 +
 +public static ISSTableScanner getScanner(SSTableReader sstable, DataRange 
dataRange, RateLimiter limiter)
 +{
 +return new BigTableScanner(sstable, dataRange, limiter);
 +}
 +public static ISSTableScanner getScanner(SSTableReader sstable, Collection<Range<Token>> tokenRanges, RateLimiter limiter)
 +{
 +// We want to avoid allocating a SSTableScanner if the range don't 
overlap the sstable (#5249)
- List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(Range.normalize(tokenRanges));
++List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(tokenRanges);
 +if (positions.isEmpty())
 +return new EmptySSTableScanner(sstable.getFilename());
 +
 +return new BigTableScanner(sstable, tokenRanges, limiter);
 +}
 +
 +/**
 + * @param sstable SSTable to scan; must not be null
 + * @param dataRange a single range to scan; must not be null
 + * @param limiter background i/o RateLimiter; may be 

[jira] [Updated] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-02-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8743:

Assignee: Joshua McKenzie

 Repair on NFS in version 2.1.2
 --

 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Assignee: Joshua McKenzie
Priority: Minor

 Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
 the ring:
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
 Failed creating a merkle tree for [repair 
 #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 on ATTRIBUTES/LINKS, 
 (11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
  /10.1.234.63 (see log for details)
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 
 - Exception in thread Thread[ValidationExecutor:2,1,main]
 org.apache.cassandra.io.FSWriteError: 
 java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
  ~[na:1.7.0_71]
 at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 10 common frames omitted
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:383 
 - Stopping gossiper
 WARN  [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:291 
 - Stopping gossip by operator request
 INFO  [ValidationExecutor:2] 2015-01-22 11:48:14,829 Gossiper.java:1318 - 
 Announcing shutdown



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8741) Running a drain before a decommission apparently the wrong thing to do

2015-02-05 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8741:

Labels: lhf  (was: )

 Running a drain before a decommission apparently the wrong thing to do
 --

 Key: CASSANDRA-8741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8741
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
 Environment: Ubuntu 14.04; Cassandra 2.0.11.82 (Datastax Enterprise 
 4.5.3)
Reporter: Casey Marshall
  Labels: lhf

 This might simply be a documentation issue. It appears that running nodetool 
 drain is a very wrong thing to do before running a nodetool decommission.
 The idea was that I was going to safely shut off writes and flush everything 
 to disk before beginning the decommission. What happens is the decommission 
 call appears to fail very early on after starting, and afterwards, the node 
 in question is stuck in state LEAVING, but all other nodes in the ring see 
 that node as NORMAL, but down. No streams are ever sent from the node being 
 decommissioned to other nodes.
 The drain command does indeed shut down the BatchlogTasks executor 
 (org/apache/cassandra/service/StorageService.java, line 3445 in git tag 
 cassandra-2.0.11) but the decommission process tries using that executor 
 when calling the startBatchlogReplay function 
 (org/apache/cassandra/db/BatchlogManager.java, line 123) called through 
 org.apache.cassandra.service.StorageService.unbootstrap (see the stack trace 
 pasted below).
 This also failed in a similar way on Cassandra 1.2.13-ish (DSE 3.2.4).
 So, either something is wrong with the drain/decommission commands, or it's 
 very wrong to run a drain before a decommission. What's worse, there seems to 
 be no way to recover this node once it is in this state; you need to shut it 
 down and run removenode.
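 The failure mode described above can be demonstrated in isolation: once a 
 ScheduledThreadPoolExecutor has been shut down (as drain does to the batchlog 
 executor), any subsequent schedule() call is rejected. A minimal sketch:

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstration of the drain-then-decommission failure: scheduling a task
// on an already-shut-down executor throws RejectedExecutionException.
public class RejectAfterShutdown
{
    public static void main(String[] args)
    {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
        executor.shutdown();   // what "nodetool drain" effectively does to the batchlog executor
        try
        {
            // what decommission's startBatchlogReplay then attempts
            executor.schedule(() -> {}, 1, TimeUnit.SECONDS);
            System.out.println("scheduled");
        }
        catch (RejectedExecutionException e)
        {
            System.out.println("rejected"); // matches the exception in the stack trace below
        }
    }
}
```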
 My terminal output:
 ubuntu@x:~$ nodetool drain
 ubuntu@x:~$ tail /var/log/^C
 ubuntu@x:~$ nodetool decommission
 Exception in thread main java.util.concurrent.RejectedExecutionException: 
 Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3008fa33 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@1d6242e8[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 52]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:629)
 at 
 org.apache.cassandra.db.BatchlogManager.startBatchlogReplay(BatchlogManager.java:123)
 at 
 org.apache.cassandra.service.StorageService.unbootstrap(StorageService.java:2966)
 at 
 org.apache.cassandra.service.StorageService.decommission(StorageService.java:2934)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 

[jira] [Commented] (CASSANDRA-7019) Improve tombstone compactions

2015-02-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307369#comment-14307369
 ] 

Marcus Eriksson commented on CASSANDRA-7019:


What we want is to be able to drop more tombstones by doing a specific 
tombstone removal compaction.

To be able to drop as many tombstones as possible, we want to include as many 
overlapping sstables as we can in this compaction. 

Currently we do this with a single sstable: we find one sstable, estimate how 
many droppable tombstones it has, and if more than X% (20, iirc) of all keys in 
the sstable are droppable tombstones, we trigger a single-sstable compaction of 
it. This is often quite ineffective, as the tombstones can cover data in other 
sstables.

Start by reading up on SizeTieredCompactionStrategy#worthDroppingTombstones()

So, we need to
# Find a good candidate sstable
# Include all sstables that overlap that sstable and contain older data (a 
tombstone can only cover older data in other sstables)
# Start a compaction
# Figure out a good way to write out the data to disk (for STCS, for example, 
all sstables might overlap each other, which would cause a major compaction; for 
LCS we need to distribute the result in the leveled hierarchy somehow). This is 
the trickiest part of the ticket. One way I've thought about is to track which 
sstable the data is coming from, map each input sstable to an output 
sstable, and write all non-tombstone data to those. The result would be the 
same number of input sstables, minus tombstones (and any covered data)

 Improve tombstone compactions
 -

 Key: CASSANDRA-7019
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7019
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Branimir Lambov
  Labels: compaction
 Fix For: 3.0


 When there are no other compactions to do, we trigger a single-sstable 
 compaction if there is more than X% droppable tombstones in the sstable.
 In this ticket we should try to include overlapping sstables in those 
 compactions to be able to actually drop the tombstones. Might only be doable 
 with LCS (with STCS we would probably end up including all sstables)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: remove extraneous Range.normalize() in SSTableScanner

2015-02-05 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 2d5d30114 -> b15113411


remove extraneous Range.normalize() in SSTableScanner


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1511341
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1511341
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1511341

Branch: refs/heads/cassandra-2.1
Commit: b15113411a6efa725d3f420a7e0f6bc796aa9780
Parents: 2d5d301
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Feb 5 14:32:54 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Thu Feb 5 14:32:54 2015 +

--
 src/java/org/apache/cassandra/io/sstable/SSTableScanner.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1511341/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
index dc065af..676f87d 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableScanner.java
@@ -59,7 +59,7 @@ public class SSTableScanner implements ISSTableScanner
 public static ISSTableScanner getScanner(SSTableReader sstable, Collection<Range<Token>> tokenRanges, RateLimiter limiter)
 {
 // We want to avoid allocating a SSTableScanner if the range don't 
overlap the sstable (#5249)
-        List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(Range.normalize(tokenRanges));
+        List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(tokenRanges);
 if (positions.isEmpty())
 return new EmptySSTableScanner(sstable.getFilename());
 



[jira] [Commented] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-02-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307318#comment-14307318
 ] 

Sylvain Lebresne commented on CASSANDRA-8743:
-

We'll look, but as using NFS with Cassandra is probably not a very good idea, 
I'm gonna reduce the priority a bit.

 Repair on NFS in version 2.1.2
 --

 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Priority: Minor

 Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
 the ring:
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
 Failed creating a merkle tree for [repair 
 #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 on ATTRIBUTES/LINKS, 
 (11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
  /10.1.234.63 (see log for details)
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 
 - Exception in thread Thread[ValidationExecutor:2,1,main]
 org.apache.cassandra.io.FSWriteError: 
 java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.DirectoryNotEmptyException: 
 /exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
 at 
 sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
  ~[na:1.7.0_71]
 at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 10 common frames omitted
 ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:383 
 - Stopping gossiper
 WARN  [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:291 
 - Stopping gossip by operator request
 INFO  [ValidationExecutor:2] 2015-01-22 11:48:14,829 Gossiper.java:1318 - 
 Announcing shutdown



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8746) SSTableReader.cloneWithNewStart can drop too much page cache for compressed files

2015-02-05 Thread Benedict (JIRA)
Benedict created CASSANDRA-8746:
---

 Summary: SSTableReader.cloneWithNewStart can drop too much page 
cache for compressed files
 Key: CASSANDRA-8746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8746
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-02-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8743:

Priority: Minor  (was: Critical)

 Repair on NFS in version 2.1.2
 --

 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Priority: Minor




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8707) Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted

2015-02-05 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307337#comment-14307337
 ] 

T Jake Luciani commented on CASSANDRA-8707:
---

I want to give this code time to bake and give users some relief in the 
meantime with the other ~100 fixes in 2.1.3. 

I suggest we push this to 2.1.4, along with CASSANDRA-8683 and CASSANDRA-8744, 
and that we disable incremental compaction for 2.1.3, which should avoid most 
of these bugs.


 Move SegmentedFile, IndexSummary and BloomFilter to utilising RefCounted
 

 Key: CASSANDRA-8707
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8707
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3


 There are still a few bugs with resource management, especially around 
 SSTableReader cleanup, esp. when intermixing with compaction. This migration 
 should help. We can simultaneously simplify the logic in SSTableReader to 
 not track the replacement chain, only to take a new reference to each of the 
 underlying resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8741) Running a drain before a decommission apparently the wrong thing to do

2015-02-05 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8741:

Priority: Trivial  (was: Major)

 Running a drain before a decommission apparently the wrong thing to do
 --

 Key: CASSANDRA-8741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8741
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.04; Cassandra 2.0.11.82 (Datastax Enterprise 
 4.5.3)
Reporter: Casey Marshall
Priority: Trivial
  Labels: lhf

 This might simply be a documentation issue. It appears that running nodetool 
 drain is a very wrong thing to do before running a nodetool decommission.
 The idea was that I was going to safely shut off writes and flush everything 
 to disk before beginning the decommission. What happens is the decommission 
 call appears to fail very early on after starting, and afterwards, the node 
 in question is stuck in state LEAVING, but all other nodes in the ring see 
 that node as NORMAL, but down. No streams are ever sent from the node being 
 decommissioned to other nodes.
 The drain command does indeed shut down the BatchlogTasks executor 
 (org/apache/cassandra/service/StorageService.java, line 3445 in git tag 
 cassandra-2.0.11) but the decommission process tries using that executor 
 when calling the startBatchlogReplay function 
 (org/apache/cassandra/db/BatchlogManager.java, line 123) called through 
 org.apache.cassandra.service.StorageService.unbootstrap (see the stack trace 
 pasted below).
 This also failed in a similar way on Cassandra 1.2.13-ish (DSE 3.2.4).
 So, either something is wrong with the drain/decommission commands, or it's 
 very wrong to run a drain before a decommission. What's worse, there seems to 
 be no way to recover this node once it is in this state; you need to shut it 
 down and run removenode.
 My terminal output:
 ubuntu@x:~$ nodetool drain
 ubuntu@x:~$ tail /var/log/^C
 ubuntu@x:~$ nodetool decommission
 Exception in thread main java.util.concurrent.RejectedExecutionException: 
 Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3008fa33 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@1d6242e8[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 52]
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:629)
 at 
 org.apache.cassandra.db.BatchlogManager.startBatchlogReplay(BatchlogManager.java:123)
 at 
 org.apache.cassandra.service.StorageService.unbootstrap(StorageService.java:2966)
 at 
 org.apache.cassandra.service.StorageService.decommission(StorageService.java:2934)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 

[jira] [Updated] (CASSANDRA-8741) Running a drain before a decommission apparently the wrong thing to do

2015-02-05 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8741:

Component/s: (was: Documentation & website)
 Core

 Running a drain before a decommission apparently the wrong thing to do
 --

 Key: CASSANDRA-8741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8741
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.04; Cassandra 2.0.11.82 (Datastax Enterprise 
 4.5.3)
Reporter: Casey Marshall
Priority: Trivial
  Labels: lhf


[jira] [Resolved] (CASSANDRA-8745) Ambiguous WriteTimeoutException during atomic batch execution

2015-02-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8745.
-
Resolution: Not a Problem

There *is* a way to distinguish those two cases, and that is through the 
writeType argument that a WriteTimeoutException contains. That writeType will 
be BATCH for syncWriteBatchedMutations and BATCH_LOG for syncWriteToBatchlog.
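For illustration, a client-side sketch of acting on that writeType distinction. WriteType and WriteTimeoutException below are simplified stand-ins for the real driver/server classes, not the actual API; only the BATCH vs BATCH_LOG dispatch is the point:

```java
// Stand-ins for the real classes: only the writeType dispatch matters here.
enum WriteType { BATCH, BATCH_LOG, SIMPLE }

class WriteTimeoutException extends RuntimeException
{
    final WriteType writeType;
    WriteTimeoutException(WriteType writeType) { this.writeType = writeType; }
}

public class BatchTimeoutHandling
{
    /** Returns true if the client may safely retry the batch. */
    static boolean safeToRetry(WriteTimeoutException e)
    {
        switch (e.writeType)
        {
            case BATCH_LOG:
                // Timed out writing the batchlog: the batch may be lost
                // entirely, so retrying is the safe choice.
                return true;
            case BATCH:
                // Batchlog write succeeded; the batch will eventually be
                // replayed, so a retry would only duplicate the mutations.
                return false;
            default:
                return false;
        }
    }

    public static void main(String[] args)
    {
        System.out.println(safeToRetry(new WriteTimeoutException(WriteType.BATCH_LOG))); // true
        System.out.println(safeToRetry(new WriteTimeoutException(WriteType.BATCH)));     // false
    }
}
```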

 Ambiguous WriteTimeoutException during atomic batch execution
 -

 Key: CASSANDRA-8745
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8745
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.x
Reporter: Stefan Podkowinski

 StorageProxy will handle atomic batches in mutateAtomically() the following 
 way:
 * syncWriteToBatchlog() - WriteTimeoutException
 * syncWriteBatchedMutations() - WriteTimeoutException
 * asyncRemoveFromBatchlog()
 All WriteTimeoutExceptions for syncWrite will be caught and passed to the 
 caller. Unfortunately the caller will not be able to tell if the timeout 
 occurred while creating/sending the batchlog or executing the individual batch 
 statements.
 # Timeout during batchlog creation: client must retry operation or batch 
 might be lost
 # Timeout during mutations: client should not retry as a new batchlog will be 
 created on every StorageProxy.mutateAtomically() call while previous 
 batchlogs would not be deleted. This can have performance implications for 
 large batches on stressed out clusters
 There should be a way to tell if a batchlog was successfully created, so we 
 can let the client move on and assume batch execution based on batchlog at 
 some point in the future. 
 See also CASSANDRA-8672 for similar error handling issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8741) Running a drain before a decommission apparently the wrong thing to do

2015-02-05 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307366#comment-14307366
 ] 

Brandon Williams commented on CASSANDRA-8741:
-

bq. either something is wrong with the drain/decommission commands, or it's 
very wrong to run a drain before a decommission.

It's basically the latter.  Both put the node in an unusable state when done, 
requiring a restart to be usable again, but decom needs to do more work than 
drain.

bq. What's worse, there seems to be no way to recover this node once it is in 
this state; you need to shut it down and run removenode.

You can just restart and then decom.  That said, it should be simple to add a 
check to decom to see if we're in a normal state and throw a better error.
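The state check suggested above could look roughly like this. Mode is a stand-in for StorageService's operation mode, and the method name is hypothetical; the real check would sit at the top of StorageService.decommission():

```java
// Sketch of a pre-decommission state check (stand-in types, hypothetical
// method name). A drained node is not in NORMAL mode, so decommission
// would fail fast with a clear message instead of a RejectedExecutionException.
enum Mode { NORMAL, DRAINED, LEAVING, DECOMMISSIONED }

public class DecommissionCheck
{
    static void validateForDecommission(Mode mode)
    {
        if (mode != Mode.NORMAL)
            throw new IllegalStateException(
                "Node must be in NORMAL state to decommission (currently " + mode + ")");
    }

    public static void main(String[] args)
    {
        validateForDecommission(Mode.NORMAL); // passes silently
        try
        {
            validateForDecommission(Mode.DRAINED);
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```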

 Running a drain before a decommission apparently the wrong thing to do
 --

 Key: CASSANDRA-8741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8741
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
 Environment: Ubuntu 14.04; Cassandra 2.0.11.82 (Datastax Enterprise 
 4.5.3)
Reporter: Casey Marshall
  Labels: lhf


[jira] [Updated] (CASSANDRA-8730) Optimize UUIDType comparisons

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8730:

Fix Version/s: 2.1.4

 Optimize UUIDType comparisons
 -

 Key: CASSANDRA-8730
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8730
 Project: Cassandra
  Issue Type: Improvement
Reporter: J.B. Langston
Assignee: Benedict
 Fix For: 2.1.4


 Compaction is slow on tables using compound keys containing UUIDs due to 
 being CPU bound by key comparison.  [~benedict] said he sees some easy 
 optimizations that could be made for UUID comparison.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8730) Optimize UUIDType comparisons

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308110#comment-14308110
 ] 

Benedict commented on CASSANDRA-8730:
-

Patch available 
[here|https://github.com/belliottsmith/cassandra/tree/8730-uuidoptim]

 Optimize UUIDType comparisons
 -

 Key: CASSANDRA-8730
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8730
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston
Assignee: Benedict
 Fix For: 2.1.4





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8730) Optimize UUIDType comparisons

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8730:

Issue Type: Improvement  (was: Bug)

 Optimize UUIDType comparisons
 -

 Key: CASSANDRA-8730
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8730
 Project: Cassandra
  Issue Type: Improvement
Reporter: J.B. Langston
Assignee: Benedict
 Fix For: 2.1.4





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8750) Ensure SSTableReader.last corresponds exactly with the file end

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308083#comment-14308083
 ] 

Benedict commented on CASSANDRA-8750:
-

Patch available [here|https://github.com/belliottsmith/cassandra/tree/8750]

 Ensure SSTableReader.last corresponds exactly with the file end
 ---

 Key: CASSANDRA-8750
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8750
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.4


 Following on from CASSANDRA-8744, CASSANDRA-8749 and CASSANDRA-8747, this 
 patch attempts to make the whole opening early of compaction results more 
 robust and with more clearly understood behaviour. The improvements of 
 CASSANDRA-8747 permit us to easily align the last key with a summary 
 boundary, and an index and data file end position. This patch modifies 
 SegmentedFile to permit the provision of an explicit length, which is then 
 provided to any readers, which enforce it, ensuring no code may accidentally 
 see an end inconsistent with the one advertised. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8703) incremental repair vs. bitrot

2015-02-05 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308231#comment-14308231
 ] 

Jeff Jirsa commented on CASSANDRA-8703:
---

I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. It works for 
both compressed and uncompressed sstables, but requires walking the entire 
sstable and verifying each on-disk atom, so it is thorough but not very fast.

The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects.
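The faster path amounts to streaming the data component through Adler32 and comparing against the stored digest. A rough, self-contained sketch follows; the -Data.db/-Digest.sha1 naming and the digest file holding a plain decimal string are assumptions here, not confirmed details of the sstable layout:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.Adler32;

public class DigestCheck
{
    /** Streams a file through Adler32 and returns the checksum value. */
    static long adler32Of(Path file) throws IOException
    {
        Adler32 adler = new Adler32();
        byte[] buf = new byte[1 << 16];
        try (InputStream in = Files.newInputStream(file))
        {
            int n;
            while ((n = in.read(buf)) != -1)
                adler.update(buf, 0, n);
        }
        return adler.getValue();
    }

    public static void main(String[] args) throws IOException
    {
        // Assumed layout: checksum stored as a decimal string in the
        // -Digest.sha1 component next to the -Data.db file.
        Path data = Paths.get(args[0]);
        Path digest = data.resolveSibling(
            data.getFileName().toString().replace("-Data.db", "-Digest.sha1"));
        long expected = Long.parseLong(Files.readString(digest).trim());
        System.out.println(adler32Of(data) == expected ? "OK" : "CORRUPT");
    }
}
```

This skips the per-atom walk entirely, which is why it would serve as the default, with the full iteration kept behind an extended-verify flag.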

 incremental repair vs. bitrot
 -

 Key: CASSANDRA-8703
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8703
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Coli
Assignee: Jeff Jirsa

 Incremental repair is a great improvement in Cassandra, but it does not 
 contain a feature that non-incremental repair does : protection against 
 bitrot.
 Scenario :
 1) repair SSTable, marking it repaired
 2) cosmic ray hits hard drive, corrupting a record in SSTable
 3) range is actually unrepaired as of the time that SSTable was repaired, but 
 thinks it is repaired
 From my understanding, if bitrot is detected (via eg the CRC on the read 
 path) then all SSTables containing the corrupted range need to be marked 
 unrepaired on all replicas. Per marcuse@IRC, the naive/simplest response 
 would be to just trigger a full repair in this case.
 I am concerned about incremental repair as an operational default while it 
 does not handle this case. As an aside, this would also seem to require a new 
 CRC on the uncompressed read path, as otherwise one cannot detect the 
 corruption without periodic checksumming of SSTables. Alternately, a 
 nodetool checksum function which verified table checksums, marking ranges 
 unrepaired on failure, and which could be run every gc_grace_seconds would 
 seem to meet the requirement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8703) incremental repair vs. bitrot

2015-02-05 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-8703:
-

Assignee: Jeff Jirsa

 incremental repair vs. bitrot
 -

 Key: CASSANDRA-8703
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8703
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Coli
Assignee: Jeff Jirsa






[2/2] cassandra git commit: Add new Role management permissions

2015-02-05 Thread aleksey
Add new Role management permissions

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-8650


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/217721ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/217721ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/217721ae

Branch: refs/heads/trunk
Commit: 217721ae95ce1a48d9cedbb8de8f3eb76c77d88c
Parents: 91e6423
Author: Sam Tunnicliffe s...@beobal.com
Authored: Tue Feb 3 11:56:04 2015 +
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Feb 6 03:40:12 2015 +0300

--
 CHANGES.txt |   2 +-
 NEWS.txt|   4 +
 pylib/cqlshlib/cql3handling.py  |   8 +-
 .../cassandra/auth/AllowAllAuthorizer.java  |  10 +-
 .../cassandra/auth/AuthMigrationListener.java   |   4 +-
 .../cassandra/auth/AuthenticatedUser.java   |  38 ++--
 .../cassandra/auth/CassandraAuthorizer.java |  92 +
 .../cassandra/auth/CassandraRoleManager.java|  90 -
 .../org/apache/cassandra/auth/DataResource.java |  32 +++-
 .../org/apache/cassandra/auth/IAuthorizer.java  |  20 +-
 .../org/apache/cassandra/auth/IResource.java|  15 ++
 .../org/apache/cassandra/auth/IRoleManager.java |  42 ++---
 .../org/apache/cassandra/auth/Permission.java   |  31 ++--
 .../org/apache/cassandra/auth/Resources.java|  17 ++
 .../org/apache/cassandra/auth/RoleResource.java | 185 +++
 src/java/org/apache/cassandra/cql3/Cql.g|  43 +++--
 .../org/apache/cassandra/cql3/RoleName.java |   5 +
 .../cql3/statements/AlterRoleStatement.java |  21 ++-
 .../statements/AuthenticationStatement.java |  41 +++-
 .../cql3/statements/AuthorizationStatement.java |  15 +-
 .../cql3/statements/CreateRoleStatement.java|  21 ++-
 .../cql3/statements/DropRoleStatement.java  |  28 +--
 .../statements/GrantPermissionsStatement.java   |  43 +
 .../cql3/statements/GrantStatement.java |  43 -
 .../statements/ListPermissionsStatement.java|   8 +-
 .../cql3/statements/ListRolesStatement.java |  30 +--
 .../cql3/statements/ListUsersStatement.java |   7 +-
 .../statements/PermissionAlteringStatement.java |  66 ---
 .../PermissionsManagementStatement.java |  67 +++
 .../statements/RevokePermissionsStatement.java  |  43 +
 .../cql3/statements/RevokeRoleStatement.java|   1 -
 .../cql3/statements/RevokeStatement.java|  43 -
 .../statements/RoleManagementStatement.java |  21 ++-
 .../apache/cassandra/service/ClientState.java   |   2 +-
 34 files changed, 762 insertions(+), 376 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0aba61a..c44d284 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Add role based access control (CASSANDRA-7653, 8650)
  * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
  * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
  * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
@@ -6,7 +7,6 @@
  * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
  * Support direct buffer decompression for reads (CASSANDRA-8464)
  * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
- * Add role based access control (CASSANDRA-7653)
  * Group sstables for anticompaction correctly (CASSANDRA-8578)
  * Add ReadFailureException to native protocol, respond
immediately when replicas encounter errors while handling

http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index a4391b9..00afc7e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -26,6 +26,10 @@ New features
  even when auth is handled by an external system has been removed, so
  authentication & authorization can be delegated to such systems in their
  entirety.
+   - In addition to the above, Roles are also first class resources and can be
+     the subject of permissions. Users (roles) can now be granted permissions
+     on other roles, including CREATE, ALTER, DROP & AUTHORIZE, which removes
+     the need for superuser privileges in order to perform user/role management
+     operations.
- SSTable file name is changed. Now you don't have Keyspace/CF name
  in file name. Also, secondary index has its own directory under parent's
  directory.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/pylib/cqlshlib/cql3handling.py

[1/2] cassandra git commit: Add new Role management permissions

2015-02-05 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 91e64231e -> 217721ae9


http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/src/java/org/apache/cassandra/cql3/statements/GrantPermissionsStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/GrantPermissionsStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/GrantPermissionsStatement.java
new file mode 100644
index 000..06a53e2
--- /dev/null
+++ 
b/src/java/org/apache/cassandra/cql3/statements/GrantPermissionsStatement.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.statements;
+
+import java.util.Set;
+
+import org.apache.cassandra.auth.IResource;
+import org.apache.cassandra.auth.Permission;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.RoleName;
+import org.apache.cassandra.exceptions.RequestExecutionException;
+import org.apache.cassandra.exceptions.RequestValidationException;
+import org.apache.cassandra.service.ClientState;
+import org.apache.cassandra.transport.messages.ResultMessage;
+
+public class GrantPermissionsStatement extends PermissionsManagementStatement
+{
+    public GrantPermissionsStatement(Set<Permission> permissions, IResource resource, RoleName grantee)
+    {
+        super(permissions, resource, grantee);
+    }
+
+    public ResultMessage execute(ClientState state) throws RequestValidationException, RequestExecutionException
+    {
+        DatabaseDescriptor.getAuthorizer().grant(state.getUser(), permissions, resource, grantee);
+        return null;
+    }
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/src/java/org/apache/cassandra/cql3/statements/GrantStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/GrantStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/GrantStatement.java
deleted file mode 100644
index 561fee6..000
--- a/src/java/org/apache/cassandra/cql3/statements/GrantStatement.java
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3.statements;
-
-import java.util.Set;
-
-import org.apache.cassandra.auth.DataResource;
-import org.apache.cassandra.auth.Permission;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.cql3.RoleName;
-import org.apache.cassandra.exceptions.RequestExecutionException;
-import org.apache.cassandra.exceptions.RequestValidationException;
-import org.apache.cassandra.service.ClientState;
-import org.apache.cassandra.transport.messages.ResultMessage;
-
-public class GrantStatement extends PermissionAlteringStatement
-{
-    public GrantStatement(Set<Permission> permissions, DataResource resource, RoleName grantee)
-    {
-        super(permissions, resource, grantee);
-    }
-
-    public ResultMessage execute(ClientState state) throws RequestValidationException, RequestExecutionException
-    {
-        DatabaseDescriptor.getAuthorizer().grant(state.getUser(), permissions, resource, grantee);
-        return null;
-    }
-}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/217721ae/src/java/org/apache/cassandra/cql3/statements/ListPermissionsStatement.java
--
diff 

[jira] [Comment Edited] (CASSANDRA-8703) incremental repair vs. bitrot

2015-02-05 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308231#comment-14308231
 ] 

Jeff Jirsa edited comment on CASSANDRA-8703 at 2/5/15 11:24 PM:


I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works for 
both compressed and uncompressed sstables, but requires walking the entire 
sstable and verifying each on-disk atom. It works, it just isn't very fast 
(though it is thorough). 

The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash) and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects. I'm also going to rename the 
DIGEST sstable component to Digest.adler32, since it's definitely not sha1 anymore. 
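The fast path described above, checksumming the whole component file and comparing it with the value stored in the digest component, can be sketched in a few lines of Python (the function names and the plain-decimal digest format are assumptions for illustration, not Cassandra's actual code):

```python
import zlib

def compute_adler32(path, chunk_size=65536):
    # Stream the file in chunks so large sstables need not fit in memory.
    checksum = zlib.adler32(b"")
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF

def verify_against_digest(data_path, digest_path):
    # Assumed digest format: the checksum stored as a decimal string.
    with open(digest_path) as f:
        expected = int(f.read().strip())
    return compute_adler32(data_path) == expected
```

A mismatch would then fall back to the slower 'walk all atoms' extended verify, or mark the sstable's ranges unrepaired.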


was (Author: jjirsa):
I've got a version at 
https://github.com/jeffjirsa/cassandra/commits/cassandra-8703 that follows the 
scrub read path and implements nodetool verify / sstableverify. This works, for 
both compressed and uncompressed, but requires walking the entire sstable and 
verifies each on disk atom.  This works, it just isn't very fast (though it is 
thorough). 

The faster method will be checking against the Digest.sha1 file (which actually 
contains an adler32 hash), and skipping the full iteration. I'll rebase and 
work that in, using the 'walk all atoms' approach above as an optional extended 
verify (-e) or similar, unless someone objects.



[jira] [Commented] (CASSANDRA-8711) cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled

2015-02-05 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307582#comment-14307582
 ] 

Mikhail Stepura commented on CASSANDRA-8711:


Why 9160? Cqlsh in 2.1 doesn't use Thrift anymore 

 cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled
 

 Key: CASSANDRA-8711
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8711
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu
 Fix For: 2.1.3


 I have been trying to set up client encryption on a three-node Cassandra 2.1.2 
 cluster and keep getting the following error:
 {noformat}
 Connection error: ('Unable to connect to any servers', {'localhost': 
 ConnectionShutdown('Connection AsyncoreConnection(44536208) localhost:9160 
 (closed) is already closed',)})
 {noformat}
 I tried with both cqlsh and the DataStax Python cassandra-driver, with no luck 
 logging in.
 I created the /root/.cassandra/cqlshrc file for cqlsh settings; the content is:
 {noformat}
 [authentication]
 username =
 password =
 [connection]
 hostname = localhost
 port = 9160
 factory = cqlshlib.ssl.ssl_transport_factory
 [ssl]
 certfile = /root/.cassandra/localhost_user1.pem
 validate = false ## Optional, true by default
 {noformat}
 my cassandra.yaml configuration related to client encryption:
 {noformat}
 client_encryption_options:
 enabled: True
 keystore: /etc/cassandra/conf/.keystore
 keystore_password: cassnest
 {noformat}
 the keystore, truststore, and cert/pem (localhost_user1.pem) key have been 
 verified to work with the DataStax Enterprise version.





[jira] [Commented] (CASSANDRA-8711) cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled

2015-02-05 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307593#comment-14307593
 ] 

Mikhail Stepura commented on CASSANDRA-8711:


http://www.datastax.com/documentation/cassandra/2.1/cassandra/security/secureCqlshSSL_t.html

Don't forget to use {{--ssl}} flag
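For reference, a cqlshrc aimed at the native protocol points at the native transport port (9042 by default) rather than the old Thrift port 9160; a minimal sketch, reusing the reporter's certificate path:

{noformat}
[connection]
hostname = localhost
port = 9042

[ssl]
certfile = /root/.cassandra/localhost_user1.pem
validate = false
{noformat}

Then connect with {{cqlsh --ssl localhost 9042}}.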



Git Push Summary

2015-02-05 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.3-tentative [deleted] 98905809c


Git Push Summary

2015-02-05 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.3-tentative [created] caba0a592


[jira] [Commented] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-05 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306980#comment-14306980
 ] 

Robert Stupp commented on CASSANDRA-8308:
-

bq. clearly different between our CI environment and my local win7 install

same for me - trunk_utest_win32 is still failing (though with fewer utest 
errors) and working fine in my Win7 VM

 Windows: Commitlog access violations on unit tests
 --

 Key: CASSANDRA-8308
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt


 We have four unit tests failing on trunk on Windows, all with 
 FileSystemException's related to the SchemaLoader:
 {noformat}
 [junit] Test 
 org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
 [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
 [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
 {noformat}
 Example error:
 {noformat}
 [junit] Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
 cannot access the file because it is being used by another process.
 [junit]
 [junit] at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 [junit] at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 [junit] at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 [junit] at java.nio.file.Files.delete(Files.java:1079)
 [junit] at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
 {noformat}





[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307108#comment-14307108
 ] 

Piotr Kołaczkowski commented on CASSANDRA-7688:
---

Looks good.

{code}
// delete all previous values with a single range tombstone.
mutation.deleteRange(SIZE_ESTIMATES_CF,
                     estimatesTable.comparator.make(table).start(),
                     estimatesTable.comparator.make(table).end(),
                     timestamp - 1);

// add a CQL row for each primary token range.
ColumnFamily cells = mutation.addOrGet(estimatesTable);
for (Map.Entry<Range<Token>, Pair<Long, Long>> entry : estimates.entrySet())
{
    Range<Token> range = entry.getKey();
    Pair<Long, Long> values = entry.getValue();
    Composite prefix = estimatesTable.comparator.make(table, range.left.toString(), range.right.toString());
    CFRowAdder adder = new CFRowAdder(cells, prefix, timestamp);
    adder.add("partitions_count", values.left)
         .add("mean_partition_size", values.right);
}

mutation.apply();
{code}

Are updates of the table atomic? I can see you delete a whole bunch of token 
ranges with one tombstone and then add them one by one. Is it possible to get an 
incomplete table when querying at the wrong moment?

 Add data sizing to a system table
 -

 Key: CASSANDRA-7688
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7688
 Project: Cassandra
  Issue Type: New Feature
Reporter: Jeremiah Jordan
Assignee: Aleksey Yeschenko
 Fix For: 2.1.3

 Attachments: 7688.txt


 Currently you can't implement something similar to describe_splits_ex purely 
 from a native protocol driver.  
 https://datastax-oss.atlassian.net/browse/JAVA-312 is open to expose easily 
 getting ownership information to a client in the java-driver.  But you still 
 need the data sizing part to get splits of a given size.  We should add the 
 sizing information to a system table so that native clients can get to it.
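The client-side computation this enables, cutting each token range into splits of a given target size, could be sketched as follows (a hypothetical illustration; real code would first query the estimates from the system table via a driver):

```python
def split_count(partitions_count, mean_partition_size, target_split_bytes):
    # Estimated bytes covered by the range, divided into roughly equal splits.
    range_bytes = partitions_count * mean_partition_size
    return max(1, round(range_bytes / target_split_bytes))

def plan_splits(estimates, target_split_bytes):
    # estimates: {(start_token, end_token): (partitions_count, mean_partition_size)}
    # Returns how many sub-splits to cut each primary range into.
    return {token_range: split_count(count, size, target_split_bytes)
            for token_range, (count, size) in estimates.items()}
```

A Hadoop/Spark connector would then subdivide each token range into that many sub-ranges before scheduling tasks.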





[jira] [Created] (CASSANDRA-8743) Repair on NFS in version 2.1.2

2015-02-05 Thread Tamar Nirenberg (JIRA)
Tamar Nirenberg created CASSANDRA-8743:
--

 Summary: Repair on NFS in version 2.1.2
 Key: CASSANDRA-8743
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8743
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tamar Nirenberg
Priority: Critical


Running repair over NFS in Cassandra 2.1.2 encounters this error and crashes 
the ring:
ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,811 Validator.java:232 - 
Failed creating a merkle tree for [repair #c84c7c70-a21b-11e4-aeca-19e6d7fa2595 
on ATTRIBUTES/LINKS, 
(11621838520493020277529637175352775759,11853478749048239324667887059881170862]],
 /10.1.234.63 (see log for details)
ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,827 CassandraDaemon.java:153 - 
Exception in thread Thread[ValidationExecutor:2,1,main]
org.apache.cassandra.io.FSWriteError: java.nio.file.DirectoryNotEmptyException: 
/exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.FileUtils.deleteRecursive(FileUtils.java:381) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.Directories.clearSnapshot(Directories.java:547) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.ColumnFamilyStore.clearSnapshot(ColumnFamilyStore.java:2223)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:939)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.nio.file.DirectoryNotEmptyException: 
/exlibris/cassandra/local/data/data/ATTRIBUTES/LINKS/snapshots/c84c7c70-a21b-11e4-aeca-19e6d7fa2595
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242) 
~[na:1.7.0_71]
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 ~[na:1.7.0_71]
at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
~[apache-cassandra-2.1.2.jar:2.1.2]
... 10 common frames omitted
ERROR [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:383 - 
Stopping gossiper
WARN  [ValidationExecutor:2] 2015-01-22 11:48:14,829 StorageService.java:291 - 
Stopping gossip by operator request
INFO  [ValidationExecutor:2] 2015-01-22 11:48:14,829 Gossiper.java:1318 - 
Announcing shutdown






[jira] [Commented] (CASSANDRA-7688) Add data sizing to a system table

2015-02-05 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307114#comment-14307114
 ] 

Aleksey Yeschenko commented on CASSANDRA-7688:
--

Since it's a single partition update, the whole thing is atomic and isolated, 
yes. I'm adding updates to the mutation one by one, but applying everything, 
including the removal of previous state, and addition of the new data, in one 
go, at mutation.apply() point.

So long as you fetch all the ranges together in one query, you'll always have a 
complete state. It might be slightly out of date and lagging behind (rare) 
topology updates for up to 5 minutes, but it'll always be internally consistent.
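The guarantee described here, that readers see either the complete old state or the complete new state, can be modelled in miniature (an illustrative analogy only, not Cassandra's storage code): all replacement rows are staged first, then published in a single step, mirroring the single mutation.apply():

```python
class EstimatesStore:
    """Toy model: a range tombstone plus new rows applied as one unit."""
    def __init__(self):
        self._rows = {}  # (table, token_range) -> (partitions, mean_size)

    def replace_estimates(self, table, new_rows):
        # Stage the delete (drop the table's old ranges) and the inserts ...
        staged = {k: v for k, v in self._rows.items() if k[0] != table}
        staged.update({(table, rng): v for rng, v in new_rows.items()})
        # ... then publish the complete new state in a single step.
        self._rows = staged

    def read_estimates(self, table):
        # One "query" over the partition sees a single consistent snapshot.
        return {rng: v for (t, rng), v in self._rows.items() if t == table}
```

A reader calling read_estimates between two replace_estimates calls gets one complete generation of ranges, never a mix of old and new.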



[jira] [Commented] (CASSANDRA-8711) cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled

2015-02-05 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307696#comment-14307696
 ] 

Jeff Liu commented on CASSANDRA-8711:
-

hi [~mishail].
Yes, I did use the --ssl flag. Actually, after I configured the ~/.cassandra/cqlshrc 
file and specified the ssl configuration, cqlsh picked up the ssl configuration and 
tried to connect with ssl even without the --ssl flag. 
However, neither one worked for me in terms of connecting to Cassandra.



[jira] [Commented] (CASSANDRA-8711) cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled

2015-02-05 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307700#comment-14307700
 ] 

Jeff Liu commented on CASSANDRA-8711:
-

Which port should be used? 



[jira] [Updated] (CASSANDRA-8726) throw OOM in Memory if we fail to allocate

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8726:

Attachment: 8726.txt

 throw OOM in Memory if we fail to allocate
 --

 Key: CASSANDRA-8726
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8726
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.3, 2.0.13

 Attachments: 8726.txt








[jira] [Comment Edited] (CASSANDRA-8689) Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307404#comment-14307404
 ] 

Benedict edited comment on CASSANDRA-8689 at 2/5/15 6:42 PM:
-

-Can we get a bisect run on this?-

Scratch that; wrong memory bug ticket.


was (Author: benedict):
Can we get a bisect run on this?

 Assertion error in 2.1.2: ERROR [IndexSummaryManager:1]
 ---

 Key: CASSANDRA-8689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8689
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu
Assignee: Benedict
 Fix For: 2.1.3


 After upgrading a six-node Cassandra cluster from 2.1.0 to 2.1.2, we started 
 getting the following assertion error.
 {noformat}
 ERROR [IndexSummaryManager:1] 2015-01-26 20:55:40,451 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[IndexSummaryManager:1,1,main]
 java.lang.AssertionError: null
 at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getOffHeapSize(IndexSummary.java:192)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getIndexSummaryOffHeapSize(SSTableReader.java:1070)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:292)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:238)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:139)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:77)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_45]
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
 [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_45]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 {noformat}
 The Cassandra service is still running despite the issue. The node has 8G of 
 memory in total, with 2G allocated to the heap. We are basically running read 
 queries to retrieve data out of Cassandra.





[jira] [Commented] (CASSANDRA-8716) java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was freed when running cleanup

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307738#comment-14307738
 ] 

Benedict commented on CASSANDRA-8716:
-

Could we get a git bisect on this to help narrow it down?

 java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory 
 was freed when running cleanup
 --

 Key: CASSANDRA-8716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8716
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Centos 6.6, Cassandra 2.0.12, Oracle JDK 1.7.0_67
Reporter: Imri Zvik
Assignee: Benedict
Priority: Minor
 Fix For: 2.0.13

 Attachments: system.log.gz


 {code}Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.AssertionError: Memory was 
 freed
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:234)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:272)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1115)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2177)
 at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.AssertionError: Memory was freed
 at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:259)
 at org.apache.cassandra.io.util.Memory.getInt(Memory.java:211)
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getIndex(IndexSummary.java:79)
 at 
 org.apache.cassandra.io.sstable.IndexSummary.getKey(IndexSummary.java:84)
 at 
 

[jira] [Commented] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2015-02-05 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307062#comment-14307062
 ] 

Robert Stupp commented on CASSANDRA-8308:
-

Hm - at the moment I've got no other idea than to try to delete each segment 
file in a for-try-wait loop.
I made another attempt: 
https://github.com/snazy/cassandra/commit/9d1cb6fc3a5a89480817434e55d515f4a37036dc

Maybe there's something async going on here (which the new pray-n-error patch 
could fix), or something with the mmap cleaner is going weird (which would be a 
bigger problem). We could check the latter by setting {{disk_access_mode = 
standard}}.
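
The "for-try-wait" idea can be sketched roughly as follows. This is a hypothetical illustration only (the class name, retry count, and wait interval are assumptions; the real change is in the linked commit): on Windows a commitlog segment can stay briefly locked by another handle or a not-yet-unmapped buffer, so we retry the delete a few times with a short pause before giving up.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class RetryingDelete
{
    // Try to delete 'file' up to 'attempts' times, sleeping 'waitMillis'
    // between attempts; rethrows the last IOException if all attempts fail.
    public static void deleteWithRetry(Path file, int attempts, long waitMillis) throws IOException
    {
        IOException last = null;
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                Files.deleteIfExists(file);
                return; // deleted, or already gone
            }
            catch (IOException e)
            {
                last = e; // still held by another process or mapping; wait and retry
                try
                {
                    Thread.sleep(waitMillis);
                }
                catch (InterruptedException ie)
                {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw last != null ? last : new IOException("could not delete " + file);
    }
}
```

If deletions succeed only after several iterations, that would point to an async release of the file handle rather than a cleaner bug.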

 Windows: Commitlog access violations on unit tests
 --

 Key: CASSANDRA-8308
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 8308-post-fix.txt, 8308_v1.txt, 8308_v2.txt, 8308_v3.txt


 We have four unit tests failing on trunk on Windows, all with 
 FileSystemExceptions related to the SchemaLoader:
 {noformat}
 [junit] Test 
 org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
 [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
 [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
 {noformat}
 Example error:
 {noformat}
 [junit] Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
 cannot access the file because it is being used by another process.
 [junit]
 [junit] at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 [junit] at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 [junit] at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 [junit] at java.nio.file.Files.delete(Files.java:1079)
 [junit] at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
 {noformat}





[jira] [Commented] (CASSANDRA-8711) cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled

2015-02-05 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307704#comment-14307704
 ] 

Jeff Liu commented on CASSANDRA-8711:
-

Here is some additional information:

When I don't have a cqlshrc file:
{noformat}
/home/jliu# cqlsh --ssl
Validation is enabled; SSL transport factory requires a valid certfile to be 
specified. Please provide path to the certfile in [ssl] section as 'certfile' 
option in /root/.cassandra/cqlshrc (or use [certfiles] section) or set 
SSL_CERTFILE environment variable.
{noformat}

When I have a cqlshrc file (contents as pasted in the description):
{noformat}
/home/jliu# cqlsh --ssl
Password:
Connection error: ('Unable to connect to any servers', {'localhost': error(8, 
'ENOEXEC')})
{noformat}

 cassandra 2.1.2 cqlsh not able to connect when ssl client encryption enabled
 

 Key: CASSANDRA-8711
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8711
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeff Liu
 Fix For: 2.1.3


 I have been trying to set up client encryption on a three-node Cassandra 
 2.1.2 cluster and keep getting the following error:
 {noformat}
 Connection error: ('Unable to connect to any servers', {'localhost': 
 ConnectionShutdown('Connection AsyncoreConnection(44536208) localhost:9160 
 (closed) is already closed',)})
 {noformat}
 I tried with both cqlsh and the DataStax Python cassandra-driver, with no 
 luck logging in.
 I created the /root/.cassandra/cqlshrc file for cqlsh settings; the content is:
 {noformat}
 [authentication]
 username =
 password =
 [connection]
 hostname = localhost
 port = 9160
 factory = cqlshlib.ssl.ssl_transport_factory
 [ssl]
 certfile = /root/.cassandra/localhost_user1.pem
 validate = false ## Optional, true by default
 {noformat}
 my cassandra.yaml configuration related to client_encryptions:
 {noformat}
 client_encryption_options:
 enabled: True
 keystore: /etc/cassandra/conf/.keystore
 keystore_password: cassnest
 {noformat}
 the keystore, truststore, cert/pem (localhost_user1.pem) key have been 
 verified to be working fine for datastax enterprise version.





[jira] [Resolved] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2015-02-05 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-6106.
-
Resolution: Won't Fix

It was decided that v3 protocol implementors should ensure their timestamps are 
high precision, since this is the default source of timestamps going forwards.

 Provide timestamp with true microsecond resolution
 --

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Assignee: Benedict
Priority: Minor
  Labels: timestamps
 Fix For: 3.0

 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 the whole story. I think we probably need a comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().
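
For context, a straight substitution would not work on its own: System.nanoTime() has an arbitrary origin and is only meaningful for measuring elapsed time. A common approach (sketched here purely as an illustration; the class name is an assumption and this is not Cassandra's actual implementation) is to anchor nanoTime() readings to a wall-clock base taken once at startup:

```java
// Hypothetical sketch: microsecond-resolution wall-clock timestamps derived
// by combining one currentTimeMillis() reading (epoch anchor) with elapsed
// time measured via nanoTime().
public final class MicrosecondClock
{
    private static final long BASE_MICROS = System.currentTimeMillis() * 1000;
    private static final long BASE_NANOS = System.nanoTime();

    // Microseconds since the Unix epoch, with sub-millisecond granularity.
    public static long nowMicros()
    {
        return BASE_MICROS + (System.nanoTime() - BASE_NANOS) / 1000;
    }

    public static void main(String[] args)
    {
        long a = nowMicros();
        long b = nowMicros();
        // successive calls can differ at sub-millisecond granularity
        System.out.println(a + " -> " + b);
    }
}
```

A scheme like this keeps timestamps monotonic within one process, though the base drifts from the true wall clock over long uptimes, so real implementations periodically re-anchor.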





[jira] [Created] (CASSANDRA-8749) Cleanup SegmentedFile

2015-02-05 Thread Benedict (JIRA)
Benedict created CASSANDRA-8749:
---

 Summary: Cleanup SegmentedFile
 Key: CASSANDRA-8749
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8749
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4


As a follow up to 8707 (building upon it for ease, since that edits these 
files), and a precursor to another follow up, this ticket cleans up the 
SegmentedFile hierarchy a little, and makes it encapsulate the construction of 
a new reader, so that implementation details don't leak into SSTableReader.





[jira] [Commented] (CASSANDRA-8749) Cleanup SegmentedFile

2015-02-05 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307922#comment-14307922
 ] 

Benedict commented on CASSANDRA-8749:
-

Patch available 
[here|https://github.com/belliottsmith/cassandra/tree/cleanup-segmentedfile]

 Cleanup SegmentedFile
 -

 Key: CASSANDRA-8749
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8749
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4


 As a follow up to 8707 (building upon it for ease, since that edits these 
 files), and a precursor to another follow up, this ticket cleans up the 
 SegmentedFile hierarchy a little, and makes it encapsulate the construction 
 of a new reader, so that implementation details don't leak into SSTableReader.





[jira] [Commented] (CASSANDRA-8726) throw OOM in Memory if we fail to allocate

2015-02-05 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307939#comment-14307939
 ] 

Jonathan Ellis commented on CASSANDRA-8726:
---

+1

 throw OOM in Memory if we fail to allocate
 --

 Key: CASSANDRA-8726
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8726
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.3, 2.0.13

 Attachments: 8726.txt








[jira] [Updated] (CASSANDRA-8683) Ensure early reopening has no overlap with replaced files

2015-02-05 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8683:
--
Reviewer: Marcus Eriksson

 Ensure early reopening has no overlap with replaced files
 -

 Key: CASSANDRA-8683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8683
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.3

 Attachments: 0001-avoid-NPE-in-getPositionsForRanges.patch


 Incremental repair holds a set of the sstables it started the repair on (we 
 need to know which sstables were actually validated to be able to anticompact 
 them). This includes any tmplink files that existed when the compaction 
 started (if we didn't include those, we would miss data, since we move the 
 start point of the existing non-tmplink files).
 With CASSANDRA-6916 we swap out those instances with new ones 
 (SSTR.cloneWithNewStart / SSTW.openEarly), meaning that the underlying file 
 can get deleted even though we hold a reference.
 This causes the unit test error: 
 http://cassci.datastax.com/job/trunk_utest/1330/testReport/junit/org.apache.cassandra.db.compaction/LeveledCompactionStrategyTest/testValidationMultipleSSTablePerLevel/
 (note that it only fails on trunk; in 2.1 we don't hold references to 
 the repairing files for non-incremental repairs, but the bug should exist in 
 2.1 as well)





[jira] [Updated] (CASSANDRA-8749) Cleanup SegmentedFile

2015-02-05 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8749:
--
Reviewer: Marcus Eriksson

 Cleanup SegmentedFile
 -

 Key: CASSANDRA-8749
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8749
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4


 As a follow up to 8707 (building upon it for ease, since that edits these 
 files), and a precursor to another follow up, this ticket cleans up the 
 SegmentedFile hierarchy a little, and makes it encapsulate the construction 
 of a new reader, so that implementation details don't leak into SSTableReader.


