[jira] [Commented] (CASSANDRA-6974) Replaying archived commitlogs isn't working

2014-04-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966381#comment-13966381
 ] 

Benedict commented on CASSANDRA-6974:
-

OK, so this is a much more complex problem than it first appears. I was being 
thrown by the fact that some of the logs were replaying, but it seems those 
were the empty recycled logs created at startup. There are two problems to fix:

# To increase the safety of log replay, we introduced a checksum of the log id 
into the header. Unfortunately, it appears that on _restore_ the id in the 
filename is ignored and a new segment id is allocated. Not only does this mean 
logs are replayed out of order (which is probably undesirable), it also means 
the new checksum rejects these log files, since they have the wrong checksum 
for their named id. However, since we impose no constraints on the archive 
command, it is possible that end users have been archiving in a way that 
destroys the original segment name+id, so relying on it being present may be 
impossible. I'm reluctant to drop the checksum safety check, as it's tied into 
the safety of the new CL model.
# The test restores a CF, but creates a new CF and streams the old CF's data to 
it. The CL holds data against the old CF, and on replay ignores the mutations 
because the target CF does not exist.

[~jbellis] [~vijay2...@yahoo.com] do you have any opinion on point 1? We could 
encode the ID in the header itself, and use the ID on restore to construct the 
ID of the target file only, which would probably retain the present guarantees. 
Or we could throw a hissy fit if we're provided a non-standard name for the 
segment. This also brings up a point about CL and MS version - these are 
currently encoded in the name as well, so if somebody restores an old version 
against a new C* cluster, they'll find things don't go as planned, so we may 
want to consider encoding these in the header going forwards also. We can use 
the presence of the checksum to confirm that we're operating on a new enough 
version that supports the scheme.
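The header scheme in point 1 could look something like the following minimal sketch (hypothetical names and layout, not Cassandra's actual format): the segment id is written into the header next to a CRC32 of that id, so replay validates the header against the id it carries rather than trusting a filename the archive command may have rewritten.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Hypothetical sketch only: 8 bytes of segment id followed by a 4-byte
// CRC32 of that id. Not the real CommitLog header layout.
public class SegmentHeaderSketch
{
    static ByteBuffer writeHeader(long segmentId)
    {
        ByteBuffer header = ByteBuffer.allocate(12);
        header.putLong(segmentId);
        header.putInt(crcOf(segmentId));
        header.flip();
        return header;
    }

    // Returns the validated segment id, or -1 if the stored CRC does not match.
    static long readHeader(ByteBuffer header)
    {
        long segmentId = header.getLong();
        int storedCrc = header.getInt();
        return storedCrc == crcOf(segmentId) ? segmentId : -1;
    }

    static int crcOf(long segmentId)
    {
        CRC32 crc = new CRC32();
        crc.update(ByteBuffer.allocate(8).putLong(segmentId).array());
        return (int) crc.getValue();
    }
}
```

With something like this, restore could take the id from the header itself instead of allocating a fresh one, and a mismatch would flag the segment as corrupt rather than silently replaying it out of order.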

As to point 2, I'm not sure if this is a problem with the test or with the 
restore procedure: I'm guessing it's not atypical to restore by creating a new 
cluster, in which case we have a whole separate problem to address. [~yukim]: 
thoughts?

Also, for the record [~enigmacurry], the log files from the run where the 
inserts and archives happen are getting trashed, so diagnosing this was 
trickier than it might otherwise have been. It would be nice to fix that (I 
assume it may apply to other tests as well).

 Replaying archived commitlogs isn't working
 ---

 Key: CASSANDRA-6974
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6974
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: 2.0.system.log, 2.1.system.log


 I have a test for restoring archived commitlogs, which is not working in 2.1 
 HEAD.  My commitlogs consist of 30,000 inserts, but system.log indicates 
 there were only 2 mutations replayed:
 {code}
 INFO  [main] 2014-04-02 11:49:54,173 CommitLog.java:115 - Log replay 
 complete, 2 replayed mutations
 {code}
 There are several warnings in the logs about bad headers and invalid CRCs: 
 {code}
 WARN  [main] 2014-04-02 11:49:54,156 CommitLogReplayer.java:138 - Encountered 
 bad header at position 0 of commit log /tmp/dtest
 -mZIlPE/test/node1/commitlogs/CommitLog-4-1396453793570.log, with invalid 
 CRC. The end of segment marker should be zero.
 {code}
 Compare that to the same test run on 2.0, where it replayed many more 
 mutations:
 {code}
  INFO [main] 2014-04-02 11:49:04,673 CommitLog.java (line 132) Log replay 
 complete, 35960 replayed mutations
 {code}
 I'll attach the system logs for reference.
 [Here is the dtest to reproduce 
 this|https://github.com/riptano/cassandra-dtest/blob/master/snapshot_test.py#L75]
  - (This currently relies on the fix for snapshots available in 
 CASSANDRA-6965.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7006) secondary_indexes_test test_6924 dtest fails on 2.1

2014-04-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966396#comment-13966396
 ] 

Sam Tunnicliffe commented on CASSANDRA-7006:


I would say this failure is expected, with CASSANDRA-6924 still being 
unresolved. I'll take a look at that issue, but maybe we want to close this one 
and, if the severity warrants it, bump CASSANDRA-6924 to a blocker.

 secondary_indexes_test test_6924 dtest fails on 2.1
 ---

 Key: CASSANDRA-7006
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7006
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Sam Tunnicliffe
Priority: Blocker
 Fix For: 2.1 beta2


 {noformat}
 ==
 FAIL: test_6924 (secondary_indexes_test.TestSecondaryIndexes)
 --
 Traceback (most recent call last):
    File "/home/mshuler/git/cassandra-dtest/secondary_indexes_test.py", line 135, in test_6924
      self.assertEqual(count, 10)
 AssertionError: 7 != 10
 {noformat}





[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Shyam K Gopal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966429#comment-13966429
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
--

I am still getting this error in DSE 2.0.5 and 2.0.6. Tried on various machines 
(mac and ubuntu).

Steps:
1 - CREATE TABLE DSQ (
    exchange text,
    sc_code int,
    load_date timeuuid, /* tried timestamp also but same behaviour */
    PRIMARY KEY (exchange, sc_code, load_date)
)
2 - Did SSTable load:
writer.newRow(compositeColumn.builder().add(bytes(entry.stock_exchange)).add(bytes(entry.sc_code)).add(bytes(new com.eaio.uuid.UUID().toString())).build());
3 - Ran sstableloader:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of stock/DSQ/stock-DSQ-ib-1-Data.db to [/127.0.0.1]
progress: [/127.0.0.1 1/1 (100%)] [total: 100% - 2147483647MB/s (avg: 2MB/s)

4 - No errors in server log
5 - Logged into cqlsh and ran select * from DSQ;
6 - Errors in server log:
Exception in thread Thread[ReadStage:51,5,main]
java.io.IOError: java.io.EOFException
at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
at 
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
at 
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
at 
org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
at 
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
at 
org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
at 
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException
at java.io.RandomAccessFile.readUnsignedShort(RandomAccessFile.java:713)
at 
org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)

[jira] [Updated] (CASSANDRA-6996) Setting severity via JMX broken

2014-04-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6996:
--

  Component/s: Tools
Fix Version/s: 2.0.7

 Setting severity via JMX broken
 ---

 Key: CASSANDRA-6996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6996
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Rick Branson
Assignee: Vijay
Priority: Minor
 Fix For: 2.0.7

 Attachments: 0001-CASSANDRA-6996.patch


 Looks like setting the Severity attribute in the DynamicEndpointSnitch via 
 JMX is a no-op.





[jira] [Assigned] (CASSANDRA-6924) Data Inserted Immediately After Secondary Index Creation is not Indexed

2014-04-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6924:
-

Assignee: Sam Tunnicliffe

 Data Inserted Immediately After Secondary Index Creation is not Indexed
 ---

 Key: CASSANDRA-6924
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6924
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Sam Tunnicliffe
 Fix For: 2.0.7

 Attachments: repro.py


 The head of the cassandra-1.2 branch (currently 1.2.16-tentative) contains a 
 regression from 1.2.15.  Data that is inserted immediately after secondary 
 index creation may never get indexed.
 You can reproduce the issue with a [pycassa integration 
 test|https://github.com/pycassa/pycassa/blob/master/tests/test_autopacking.py#L793]
  by running:
 {noformat}
 nosetests tests/test_autopacking.py:TestKeyValidators.test_get_indexed_slices
 {noformat}
 from the pycassa directory.
 The operation order goes like this:
 # create CF
 # create secondary index
 # insert data
 # query secondary index
 If a short sleep is added in between steps 2 and 3, the data gets indexed and 
 the query is successful.
 If a sleep is only added in between steps 3 and 4, some of the data is never 
 indexed and the query will return incomplete results.  This appears to be the 
 case even if the sleep is relatively long (30s), which makes me think the 
 data may never get indexed.





[jira] [Resolved] (CASSANDRA-7006) secondary_indexes_test test_6924 dtest fails on 2.1

2014-04-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7006.
---

Resolution: Duplicate






[jira] [Updated] (CASSANDRA-6996) Setting severity via JMX broken

2014-04-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6996:
--

Reviewer: Rick Branson






[jira] [Commented] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-11 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966551#comment-13966551
 ] 

Ben Chan commented on CASSANDRA-7000:
-

Just to confirm, repair works fine with
{noformat}
# current trunk
git checkout 471f5cc34c99
git apply 7000-2.1-v2.txt 7000.supplement.txt
{noformat}

I think SSTableReader#close as it currently stands still doesn't quite make 
sense. But since it isn't actually used anywhere (post-patch), it may be easier 
to just slap a TODO on it.

{noformat}
// Or how about this? Works if the last reference was released, and fails in
// tidy() otherwise.
public void close()
{
    references.decrementAndGet();
    tidy(false);
}
{noformat}
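For reference, the reference-counting semantics under discussion can be sketched generically (a hypothetical class, not the real SSTableReader code): the count starts at 1 for the creator, borrowers bracket access with ref()/unref(), and close() releases the creator's reference but refuses to tidy while anyone else still holds one.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Generic sketch of the close-vs-tidy question above. Hypothetical names:
// the actual SSTableReader behavior is what the patch under review decides.
public class RefCountedResource
{
    private final AtomicInteger references = new AtomicInteger(1);
    private boolean tidied = false;

    // A borrower takes a reference before using the resource.
    public void ref()
    {
        references.incrementAndGet();
    }

    // A borrower releases its reference; the last one out cleans up.
    public void unref()
    {
        if (references.decrementAndGet() == 0)
            tidy();
    }

    // Mirrors the proposal in the comment: drop the creator's reference,
    // then clean up, failing if outstanding references remain.
    public void close()
    {
        if (references.decrementAndGet() != 0)
            throw new IllegalStateException("references still held");
        tidy();
    }

    private void tidy() { tidied = true; }

    public boolean isTidied() { return tidied; }
}
```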


 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: 7000-2.1-v2.txt, 7000.supplement.txt, 
 sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked reference count. After 
 5ebadc1, there was an assertion for {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, a SSTableReader#close() was always 
 guaranteed to either throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior. It requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).





[jira] [Updated] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6487:
--

Attachment: cassandra-2.0-6487.diff

 Log WARN on large batch sizes
 -

 Key: CASSANDRA-6487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
 Project: Cassandra
  Issue Type: Improvement
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.8

 Attachments: 6487_trunk.patch, 6487_trunk_v2.patch, 
 cassandra-2.0-6487.diff


 Large batches on a coordinator can cause a lot of node stress. I propose 
 adding a WARN log entry if batch sizes go beyond a configurable size. This 
 will give more visibility to operators on something that can happen on the 
 developer side. 
 New yaml setting with 5k default.
 {{# Log WARN on any batch size exceeding this value. 5k by default.}}
 {{# Caution should be taken on increasing the size of this threshold as it 
 can lead to node instability.}}
 {{batch_size_warn_threshold: 5k}}





[jira] [Created] (CASSANDRA-7025) RejectedExecutionException when stopping a node after drain

2014-04-11 Thread Sergio Bossa (JIRA)
Sergio Bossa created CASSANDRA-7025:
---

 Summary: RejectedExecutionException when stopping a node after 
drain
 Key: CASSANDRA-7025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7025
 Project: Cassandra
  Issue Type: Bug
Reporter: Sergio Bossa
Assignee: Sergio Bossa
Priority: Trivial


The following exception is caused by the BatchlogManager trying to enqueue a 
task in the shutdown postflush executor:
{noformat}
ERROR 14:49:50,580 Exception in thread Thread[BatchlogTasks:1,5,main]
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut 
down
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:145)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:855)
at 
org.apache.cassandra.db.ColumnFamilyStore.forceBlockingFlush(ColumnFamilyStore.java:869)
at 
org.apache.cassandra.db.BatchlogManager.cleanup(BatchlogManager.java:345)
at 
org.apache.cassandra.db.BatchlogManager.replayAllFailedBatches(BatchlogManager.java:197)
at 
org.apache.cassandra.db.BatchlogManager$1.runMayThrow(BatchlogManager.java:96)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}

It is harmless, but generates a lot of noise in the logs, which makes debugging 
for actual problems harder.





[jira] [Updated] (CASSANDRA-7025) RejectedExecutionException when stopping a node after drain

2014-04-11 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-7025:


Attachment: CASSANDRA-7025.patch






[jira] [Updated] (CASSANDRA-7025) RejectedExecutionException when stopping a node after drain

2014-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7025:
-

Reviewer: Aleksey Yeschenko






[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966648#comment-13966648
 ] 

Aleksey Yeschenko commented on CASSANDRA-6487:
--

Not saying that we should, but we could calculate the size of the resulting 
processed collection of Mutations without using reflection, and warn based on 
that.
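That suggestion can be sketched generically (hypothetical interfaces, not Cassandra's actual Mutation API): sum each mutation's serialized size directly and compare the total against the configured threshold, with no reflection-based object sizing involved.

```java
import java.util.List;

// Sketch of size-based batch warning. "Sized" stands in for whatever the
// real mutation type exposes; the threshold mirrors the proposed 5k default.
public class BatchSizeWarning
{
    interface Sized { long serializedSize(); }

    static final long WARN_THRESHOLD_BYTES = 5 * 1024;

    // True if the summed serialized size of the batch crosses the threshold,
    // i.e. the point at which the coordinator would log a WARN.
    static boolean exceedsThreshold(List<? extends Sized> mutations)
    {
        long total = 0;
        for (Sized m : mutations)
            total += m.serializedSize();
        return total > WARN_THRESHOLD_BYTES;
    }
}
```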

 Log WARN on large batch sizes
 -

 Key: CASSANDRA-6487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
 Project: Cassandra
  Issue Type: Improvement
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.8

 Attachments: 6487_trunk.patch, 6487_trunk_v2.patch, 
 cassandra-2.0-6487.diff


 Large batches on a coordinator can cause a lot of node stress. I propose 
 adding a WARN log entry if batch sizes go beyond a configurable size. This 
 will give more visibility to operators on something that can happen on the 
 developer side. 
 New yaml setting with 5k default.
 {{# Log WARN on any batch size exceeding this value. 5k by default.}}
 {{# Caution should be taken on increasing the size of this threshold as it 
 can lead to node instability.}}
 {{batch_size_warn_threshold: 5k}}
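 A minimal sketch of what such a check could look like (the names and helper methods here are hypothetical illustrations, not the attached patch; the serialized size of each mutation is simply summed, avoiding reflection, as suggested in the comments):

 ```java
 // Hypothetical sketch of the proposed warning; names are illustrative
 // and do not come from the actual CASSANDRA-6487 patch.
 public class BatchSizeWarning
 {
     // 5k default, per the proposed batch_size_warn_threshold yaml setting
     static final long WARN_THRESHOLD_BYTES = 5 * 1024;

     static boolean exceedsThreshold(long serializedBatchSize)
     {
         return serializedBatchSize > WARN_THRESHOLD_BYTES;
     }

     // Sum of per-mutation serialized sizes, supplied by the caller here.
     static long totalSize(long[] mutationSizes)
     {
         long total = 0;
         for (long size : mutationSizes)
             total += size;
         return total;
     }

     public static void main(String[] args)
     {
         long[] sizes = { 2048, 1024, 4096 };
         long total = totalSize(sizes);
         if (exceedsThreshold(total))
             System.out.println("WARN: batch of " + total + " bytes exceeds "
                                + WARN_THRESHOLD_BYTES + " byte threshold");
     }
 }
 ```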





[jira] [Updated] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6487:
--

Reviewer: Jonathan Ellis



[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966658#comment-13966658
 ] 

Jack Krupansky commented on CASSANDRA-6487:
---

Is this something important enough that an ops team might want to monitor in an 
automated manner, e.g. with an MBean, for OpsCenter and other monitoring tools? 
Maybe a count of batch size warnings, the largest batch size seen, and the most 
recent batch size over the limit.
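If such monitoring were added, the three counters suggested above could be kept in a small metrics holder and exposed over JMX. A hedged sketch (hypothetical; not part of any patch on this ticket):

```java
// Hypothetical metrics holder for oversized batches, along the lines
// suggested above; Cassandra would expose something like this via JMX.
public class LargeBatchMetrics
{
    private long warningCount;   // how many batches tripped the warning
    private long largestSeen;    // largest over-limit batch size observed
    private long mostRecent;     // most recent over-limit batch size

    public synchronized void record(long batchSize)
    {
        warningCount++;
        mostRecent = batchSize;
        if (batchSize > largestSeen)
            largestSeen = batchSize;
    }

    public synchronized long getWarningCount() { return warningCount; }
    public synchronized long getLargestSeen()  { return largestSeen; }
    public synchronized long getMostRecent()   { return mostRecent; }
}
```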



[jira] [Resolved] (CASSANDRA-6981) java.io.EOFException from Cassandra when doing select

2014-04-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-6981.


Resolution: Duplicate

Resolving as a duplicate of https://issues.apache.org/jira/browse/CASSANDRA-6525

 java.io.EOFException from Cassandra when doing select
 -

 Key: CASSANDRA-6981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6981
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux Mint 16
Reporter: Martin Bligh

  Cassandra 2.0.6, Oracle Java version 1.7.0_51, Linux Mint 16
 I have a cassandra keyspace with about 12 tables that are all the same.
 If I load 100,000 rows or so into a couple of those tables in Cassandra, it 
 works fine.
 If I load a larger dataset, after a while one of the tables won't do lookups 
 any more (not always the same one).
 {noformat}
 SELECT recv_time,symbol from table6 where mid='S-AUR01-20140324A-1221';
 {noformat}
 results in Request did not complete within rpc_timeout.
 where mid is the primary key (varchar). If I look at the logs, it has an 
 EOFException ... presumably it's running out of some resource (it's 
 definitely not out of disk space)
 Sometimes it does this on secondary indexes too: dropping and rebuilding the 
 index will fix it for a while. When it's broken, it seems like only one 
 particular lookup key causes timeouts (and the EOFException every time) - 
 other lookups work fine. I presume the index is corrupt somehow.
 {noformat}
  ERROR [ReadStage:110] 2014-04-03 12:39:47,018 CassandraDaemon.java (line 196) Exception in thread Thread[ReadStage:110,5,main]
  java.io.IOError: java.io.EOFException
  at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
  at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
  at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
  at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
  at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
  at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
  at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
  at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
  at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
  at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
  at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
  at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
  at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
  at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
  at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
  at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
  at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
  at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
  at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
  Caused by: java.io.EOFException
  at java.io.RandomAccessFile.readFully(Unknown Source)
  at java.io.RandomAccessFile.readFully(Unknown Source)
  at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
  at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
  at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
  at 
 

[jira] [Reopened] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reopened CASSANDRA-6525:


Reproduced In: 2.0.4, 2.0.3  (was: 2.0.3)

https://issues.apache.org/jira/browse/CASSANDRA-6981 is a dupe of this.  I'm 
re-opening this to investigate further.  Besides this ticket and 6981, I've 
seen one other case of this: 
https://github.com/datastax/python-driver/issues/106

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler

  I am developing a system on a single machine using VMware Player with 1 GB of 
  RAM and a 1 GB HDD. When I select all data, I don't have any problems. But when 
  I use WHERE, and the table has just below 10 records, I get this error in the 
  system log:
  ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:41,5,main]
  java.io.IOError: java.io.EOFException
  at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
  at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
  at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
  at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
  at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
  at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
  at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
  at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
  at org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
  at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
  at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
  at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
  at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
  at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
  at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
  at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
  at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
  at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
  at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
  at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
  at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
  Caused by: java.io.EOFException
  at java.io.RandomAccessFile.readFully(Unknown Source)
  at java.io.RandomAccessFile.readFully(Unknown Source)
  at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
  at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
  at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
  at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
  at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
  ... 27 more
  E.g.:
  SELECT * FROM table;
  It's fine.
  SELECT * FROM table WHERE field = 'N';
  field is the partition key.
  It says Request did not complete within rpc_timeout. in cqlsh.





[jira] [Updated] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6525:
---

Reproduced In: 2.0.6, 2.0.4, 2.0.3  (was: 2.0.3, 2.0.4)



[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966676#comment-13966676
 ] 

Tyler Hobbs commented on CASSANDRA-6525:


It's worth noting that in CASSANDRA-6981, setting {{disk_access_mode: 
standard}} seemed to fix the problem.
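For reference, the workaround reported there is a one-line cassandra.yaml change (shown as a hypothetical fragment; {{auto}} is the usual default, which mmaps data files on 64-bit JVMs):

```yaml
# cassandra.yaml excerpt -- workaround reported in CASSANDRA-6981:
# read SSTables with buffered I/O instead of memory-mapped I/O
disk_access_mode: standard
```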



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-11 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/574468cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/574468cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/574468cb

Branch: refs/heads/trunk
Commit: 574468cbad3b5556eb12b52a6941dbd36bf9f8d4
Parents: 471f5cc 930905b
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 10:57:45 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 10:57:45 2014 -0500

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 ++--
 .../db/compaction/CompactionManager.java|  7 +---
 .../cassandra/io/sstable/SSTableReader.java | 10 --
 4 files changed, 35 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/574468cb/CHANGES.txt
--



[1/3] git commit: Fix AE when SSTable is closed without releasing reference

2014-04-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 66a6990aa -> 930905bbe
  refs/heads/trunk 471f5cc34 -> 574468cba


Fix AE when SSTable is closed without releasing reference

patch by yukim and benedict; reviewed by thobbs and Ben Chan for
CASSANDRA-7000


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/930905bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/930905bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/930905bb

Branch: refs/heads/cassandra-2.1
Commit: 930905bbebfbfc923d71c34fb35fbdb4e7bc8ccc
Parents: 66a6990
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 10:55:57 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 10:55:57 2014 -0500

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 ++--
 .../db/compaction/CompactionManager.java|  7 +---
 .../cassandra/io/sstable/SSTableReader.java | 10 --
 4 files changed, 35 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 94dd7c3..4c2d77e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -43,6 +43,7 @@
  * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
  * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
  * Add failure handler to async callback (CASSANDRA-6747)
+ * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 Merged from 2.0:
  * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
  * Allow compaction of system tables during startup (CASSANDRA-6913)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 43ecdc1..ffea243 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2326,22 +2326,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
     public Iterable<DecoratedKey> keySamples(Range<Token> range)
     {
-        Collection<SSTableReader> sstables = getSSTables();
-        Iterable<DecoratedKey>[] samples = new Iterable[sstables.size()];
-        int i = 0;
-        for (SSTableReader sstable: sstables)
+        Collection<SSTableReader> sstables = markCurrentSSTablesReferenced();
+        try
+        {
+            Iterable<DecoratedKey>[] samples = new Iterable[sstables.size()];
+            int i = 0;
+            for (SSTableReader sstable: sstables)
+            {
+                samples[i++] = sstable.getKeySamples(range);
+            }
+            return Iterables.concat(samples);
+        }
+        finally
         {
-            samples[i++] = sstable.getKeySamples(range);
+            SSTableReader.releaseReferences(sstables);
         }
-        return Iterables.concat(samples);
     }
 
     public long estimatedKeysForRange(Range<Token> range)
     {
-        long count = 0;
-        for (SSTableReader sstable : getSSTables())
-            count += sstable.estimatedKeysForRanges(Collections.singleton(range));
-        return count;
+        Collection<SSTableReader> sstables = markCurrentSSTablesReferenced();
+        try
+        {
+            long count = 0;
+            for (SSTableReader sstable : sstables)
+                count += sstable.estimatedKeysForRanges(Collections.singleton(range));
+            return count;
+        }
+        finally
+        {
+            SSTableReader.releaseReferences(sstables);
+        }
     }
 
     /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 5cebf73..b1f0c2a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -950,16 +950,11 @@ public class CompactionManager implements CompactionManagerMBean
 finally
 {
 iter.close();
+SSTableReader.releaseReferences(sstables);
 if (isSnapshotValidation)
 {
-for (SSTableReader sstable : sstables)
-FileUtils.closeQuietly(sstable);
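Both hunks in this commit follow the same shape: take references on the readers up front, then release them in a finally block so they cannot be tidied mid-iteration. A standalone sketch of that pattern, with a hypothetical Resource class standing in for SSTableReader:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for SSTableReader's reference counting, used
// only to illustrate the acquire-then-release-in-finally shape of the fix.
class Resource
{
    final AtomicInteger references = new AtomicInteger(1);
    long estimatedKeys() { return 100; }
    void ref()   { references.incrementAndGet(); }
    void unref() { references.decrementAndGet(); }
}

public class RefCountedScan
{
    static long estimatedKeys(Collection<Resource> resources)
    {
        // take a reference on every resource before using it...
        for (Resource r : resources)
            r.ref();
        try
        {
            long count = 0;
            for (Resource r : resources)
                count += r.estimatedKeys();
            return count;
        }
        finally
        {
            // ...and release in finally, even if the scan throws
            for (Resource r : resources)
                r.unref();
        }
    }

    public static void main(String[] args)
    {
        List<Resource> resources = Arrays.asList(new Resource(), new Resource());
        System.out.println(estimatedKeys(resources));
    }
}
```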
 

[2/3] git commit: Fix AE when SSTable is closed without releasing reference

2014-04-11 Thread yukim
Fix AE when SSTable is closed without releasing reference

patch by yukim and benedict; reviewed by thobbs and Ben Chan for
CASSANDRA-7000


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/930905bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/930905bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/930905bb

Branch: refs/heads/trunk
Commit: 930905bbebfbfc923d71c34fb35fbdb4e7bc8ccc
Parents: 66a6990
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 10:55:57 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 10:55:57 2014 -0500

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 ++--
 .../db/compaction/CompactionManager.java|  7 +---
 .../cassandra/io/sstable/SSTableReader.java | 10 --
 4 files changed, 35 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 94dd7c3..4c2d77e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -43,6 +43,7 @@
  * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
  * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
  * Add failure handler to async callback (CASSANDRA-6747)
+ * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 Merged from 2.0:
  * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
  * Allow compaction of system tables during startup (CASSANDRA-6913)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 43ecdc1..ffea243 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2326,22 +2326,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
     public Iterable<DecoratedKey> keySamples(Range<Token> range)
     {
-        Collection<SSTableReader> sstables = getSSTables();
-        Iterable<DecoratedKey>[] samples = new Iterable[sstables.size()];
-        int i = 0;
-        for (SSTableReader sstable: sstables)
+        Collection<SSTableReader> sstables = markCurrentSSTablesReferenced();
+        try
+        {
+            Iterable<DecoratedKey>[] samples = new Iterable[sstables.size()];
+            int i = 0;
+            for (SSTableReader sstable: sstables)
+            {
+                samples[i++] = sstable.getKeySamples(range);
+            }
+            return Iterables.concat(samples);
+        }
+        finally
         {
-            samples[i++] = sstable.getKeySamples(range);
+            SSTableReader.releaseReferences(sstables);
         }
-        return Iterables.concat(samples);
     }
 
     public long estimatedKeysForRange(Range<Token> range)
     {
-        long count = 0;
-        for (SSTableReader sstable : getSSTables())
-            count += sstable.estimatedKeysForRanges(Collections.singleton(range));
-        return count;
+        Collection<SSTableReader> sstables = markCurrentSSTablesReferenced();
+        try
+        {
+            long count = 0;
+            for (SSTableReader sstable : sstables)
+                count += sstable.estimatedKeysForRanges(Collections.singleton(range));
+            return count;
+        }
+        finally
+        {
+            SSTableReader.releaseReferences(sstables);
+        }
     }
 
     /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/930905bb/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 5cebf73..b1f0c2a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -950,16 +950,11 @@ public class CompactionManager implements CompactionManagerMBean
 finally
 {
 iter.close();
+SSTableReader.releaseReferences(sstables);
 if (isSnapshotValidation)
 {
-for (SSTableReader sstable : sstables)
-FileUtils.closeQuietly(sstable);
 cfs.clearSnapshot(snapshotName);
 }
-else
-{
-SSTableReader.releaseReferences(sstables);
-   

[jira] [Commented] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966679#comment-13966679
 ] 

Yuki Morishita commented on CASSANDRA-7000:
---

I committed v2 and the supplement to 2.1 and trunk, with the AE in tidy changed 
to an IllegalStateException.

I think we can completely drop the 'implements Closable'/close() method. It is 
no longer used, and if used by accident after a release, it will likely cause an 
exception.
Thoughts?

 Assertion in SSTableReader during repair.
 -

 Key: CASSANDRA-7000
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7000
 Project: Cassandra
  Issue Type: Bug
Reporter: Ben Chan
Assignee: Ben Chan
 Attachments: 7000-2.1-v2.txt, 7000.supplement.txt, 
 sstablereader-assertion-bisect-helper, 
 sstablereader-assertion-bisect-helper-v2, sstablereader-assertion.patch


 I ran a {{git bisect run}} using the attached bisect script. Repro code:
 {noformat}
 # 5dfe241: trunk as of my git bisect run
 # 345772d: empirically determined good commit.
 git bisect start 5dfe241 345772d
 git bisect run ./sstablereader-assertion-bisect-helper-v2
 {noformat}
 The first failing commit is 5ebadc1 (first parent of {{refs/bisect/bad}}).
 Prior to 5ebadc1, SSTableReader#close() never checked the reference count. After 
 5ebadc1, there was an assertion that {{references.get() == 0}}. However, since 
 the reference count is initialized to 1, an SSTableReader#close() was always 
 guaranteed either to throw an AssertionError or to be a second call to 
 SSTableReader#tidy() on the same SSTableReader.
 The attached patch chooses an in-between behavior: it requires the reference 
 count to match the initialization value of 1 for SSTableReader#close(), and 
 keeps the same behavior as 5ebadc1 otherwise.
 This allows repair to finish successfully, but I'm not 100% certain what the 
 desired behavior is for SSTableReader#close(). Should it close without regard 
 to reference count, as it did pre-5ebadc1?
 Edit: accidentally uploaded a flawed version of 
 {{sstablereader-assertion-bisect-helper}} (doesn't work out-of-the-box with 
 {{git bisect}}).
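
The close/tidy contract under discussion can be sketched as a small reference-counted handle. This is a hypothetical illustration, not Cassandra's actual SSTableReader code: the count starts at 1 (the owner's self-reference), acquire/release bracket readers, and tidy runs exactly once, when the count reaches 0.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedSketch
{
    private final AtomicInteger references = new AtomicInteger(1); // 1 = owner's self-reference
    private volatile boolean tidied = false;

    // Take a reference; fails once the handle has been fully released.
    public boolean acquire()
    {
        while (true)
        {
            int n = references.get();
            if (n <= 0)
                return false; // already tidied; caller must not use it
            if (references.compareAndSet(n, n + 1))
                return true;
        }
    }

    // Drop a reference; the last drop triggers tidy() exactly once.
    public void release()
    {
        int n = references.decrementAndGet();
        if (n == 0)
            tidy();
        else if (n < 0)
            throw new IllegalStateException("released more times than acquired");
    }

    private void tidy()
    {
        tidied = true; // stand-in for closing files / freeing buffers
    }

    public boolean isTidied()
    {
        return tidied;
    }

    public static void main(String[] args)
    {
        RefCountedSketch r = new RefCountedSketch();
        r.acquire();                      // a reader takes a reference
        r.release();                      // reader done; owner still holds its reference
        System.out.println(r.isTidied()); // false
        r.release();                      // owner releases; count hits 0, tidy runs
        System.out.println(r.isTidied()); // true
    }
}
```

Under this model a close() that ignores the count would either fire the assertion or tidy a handle readers still hold, which is the tension the patch is navigating.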



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966684#comment-13966684
 ] 

Benedict edited comment on CASSANDRA-7000 at 4/11/14 4:06 PM:
--

bq. I think we can completely drop the 'implements Closeable'/close method. It is no 
longer used, and if used by accident after release, it will likely cause an 
exception.

If it's now unused, +100


was (Author: benedict):
bq. I think we can completely drop the 'implements Closeable'/close method. It is no 
longer used, and if used by accident after release, it will likely cause an 
exception.
Thoughts?

If it's now unused, +100



[jira] [Commented] (CASSANDRA-7000) Assertion in SSTableReader during repair.

2014-04-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966684#comment-13966684
 ] 

Benedict commented on CASSANDRA-7000:
-

bq. I think we can completely drop the 'implements Closeable'/close method. It is no 
longer used, and if used by accident after release, it will likely cause an 
exception.
Thoughts?

If it's now unused, +100



[jira] [Updated] (CASSANDRA-5483) Repair tracing

2014-04-11 Thread Ben Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Chan updated CASSANDRA-5483:


Attachment: 5483-v10-rebased-and-squashed-471f5cc.patch
5483-v10-17-minor-bugfixes-and-changes.patch

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-full-trunk.txt, 
 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
  5483-v07-08-Fix-brace-style.patch, 
 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 
 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 
 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 
 5483-v08-14-Poll-system_traces.events.patch, 
 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch, 
 5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, 
 5483-v10-17-minor-bugfixes-and-changes.patch, 
 5483-v10-rebased-and-squashed-471f5cc.patch, ccm-repair-test, 
 cqlsh-left-justify-text-columns.patch, prerepair-vs-postbuggedrepair.diff, 
 test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results like query tracing 
 stores traces to system keyspace. With it, you don't have to lookup each log 
 file to see what was the status and how it performed the repair you invoked. 
 Instead, you can query the repair log with session ID to see the state and 
 stats of all nodes involved in that repair session.





[1/3] git commit: Remove Closable from SSTableReader

2014-04-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 930905bbe -> 9d08e50da
  refs/heads/trunk 574468cba -> cbb3c8f48


Remove Closable from SSTableReader

to clarify the way to release resource held by SSTableReader


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d08e50d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d08e50d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d08e50d

Branch: refs/heads/cassandra-2.1
Commit: 9d08e50daaed83fc8e9321b6ae1d44f8e8137d8a
Parents: 930905b
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 11:31:31 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 11:31:31 2014 -0500

--
 .../org/apache/cassandra/io/sstable/SSTableReader.java   | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d08e50d/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index bc5002f..47d31b6 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -18,7 +18,6 @@
 package org.apache.cassandra.io.sstable;
 
 import java.io.BufferedInputStream;
-import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.File;
 import java.io.FileInputStream;
@@ -117,7 +116,7 @@ import static 
org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR
  * SSTableReaders are open()ed by Keyspace.onStart; after that they are 
created by SSTableWriter.renameAndOpen.
  * Do not re-call open() on existing SSTable files; use the references kept by 
ColumnFamilyStore post-start instead.
  */
-public class SSTableReader extends SSTable implements Closeable
+public class SSTableReader extends SSTable
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SSTableReader.class);
 
@@ -620,14 +619,6 @@ public class SSTableReader extends SSTable implements 
Closeable
 }
 }
 
-/**
- * Schedule clean-up of resources
- */
-public void close()
-{
-tidy(false);
-}
-
 public String getFilename()
 {
 return dfile.path;



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-11 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbb3c8f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbb3c8f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbb3c8f4

Branch: refs/heads/trunk
Commit: cbb3c8f48819d83f0713b707f93d18c75c6d9122
Parents: 574468c 9d08e50
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 11:32:35 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 11:32:35 2014 -0500

--
 .../org/apache/cassandra/io/sstable/SSTableReader.java   | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)
--




[2/3] git commit: Remove Closable from SSTableReader

2014-04-11 Thread yukim
Remove Closable from SSTableReader

to clarify the way to release resource held by SSTableReader


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9d08e50d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9d08e50d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9d08e50d

Branch: refs/heads/trunk
Commit: 9d08e50daaed83fc8e9321b6ae1d44f8e8137d8a
Parents: 930905b
Author: Yuki Morishita yu...@apache.org
Authored: Fri Apr 11 11:31:31 2014 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Fri Apr 11 11:31:31 2014 -0500

--
 .../org/apache/cassandra/io/sstable/SSTableReader.java   | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9d08e50d/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index bc5002f..47d31b6 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -18,7 +18,6 @@
 package org.apache.cassandra.io.sstable;
 
 import java.io.BufferedInputStream;
-import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.File;
 import java.io.FileInputStream;
@@ -117,7 +116,7 @@ import static 
org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR
  * SSTableReaders are open()ed by Keyspace.onStart; after that they are 
created by SSTableWriter.renameAndOpen.
  * Do not re-call open() on existing SSTable files; use the references kept by 
ColumnFamilyStore post-start instead.
  */
-public class SSTableReader extends SSTable implements Closeable
+public class SSTableReader extends SSTable
 {
 private static final Logger logger = 
LoggerFactory.getLogger(SSTableReader.class);
 
@@ -620,14 +619,6 @@ public class SSTableReader extends SSTable implements 
Closeable
 }
 }
 
-/**
- * Schedule clean-up of resources
- */
-public void close()
-{
-tidy(false);
-}
-
 public String getFilename()
 {
 return dfile.path;



[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-04-11 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966778#comment-13966778
 ] 

Ben Chan commented on CASSANDRA-5483:
-

I made some additional changes. Everything is included in 
[^5483-v10-rebased-and-squashed-471f5cc.patch], but I attached 
[^5483-v10-17-minor-bugfixes-and-changes.patch] to make it more convenient 
to review.

Repair fails without 
[7000-2.1-v2.txt|https://issues.apache.org/jira/secure/attachment/12639633/7000-2.1-v2.txt]
 so I've included that patch in the test code.

Overview:
 * Limit exponential backoff.
 * Handle the case where traceType is negative.
 * Reimplement log2 in BitUtil style.
 * Forgot to add a trace parameter.

{noformat}
# rebased against trunk @ 471f5cc
# git checkout 471f5cc
W=https://issues.apache.org/jira/secure/attachment
for url in \
  $W/12639821/5483-v10-rebased-and-squashed-471f5cc.patch \
  $W/12639633/7000-2.1-v2.txt
do
  { [ -e $(basename $url) ] || curl -sO $url; } && git apply $(basename $url)
done &&
ant clean && ant &&
./ccm-repair-test -kR &&
ccm node1 stop &&
ccm node1 clear &&
ccm node1 start &&
./ccm-repair-test -rt
{noformat}
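
The "limit exponential backoff" and "log2 in BitUtil style" items above might look roughly like the following. This is a hypothetical sketch of the two techniques, not the patch's actual code: log2 is derived from the position of the highest set bit, and the backoff is capped so polling never sleeps unboundedly long.

```java
public final class BackoffSketch
{
    // floor(log2(n)) for n > 0, computed from the position of the
    // highest set bit (the bit-twiddling style of Lucene's BitUtil).
    public static int log2(int n)
    {
        return 31 - Integer.numberOfLeadingZeros(n);
    }

    // Exponential backoff: base * 2^attempt, capped at maxMillis.
    public static long backoffMillis(int attempt, long baseMillis, long maxMillis)
    {
        if (attempt >= 62)
            return maxMillis;            // shifting further would overflow a long
        long wait = baseMillis << attempt;
        return (wait <= 0 || wait > maxMillis) ? maxMillis : wait;
    }

    public static void main(String[] args)
    {
        System.out.println(log2(1024));                  // 10
        System.out.println(backoffMillis(3, 100, 5000)); // 800
        System.out.println(backoffMillis(9, 100, 5000)); // 5000 (capped)
    }
}
```

The cap is what keeps the trace-notification polling responsive: without it, a long repair would make the poll interval grow without bound.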




[jira] [Comment Edited] (CASSANDRA-5483) Repair tracing

2014-04-11 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966778#comment-13966778
 ] 

Ben Chan edited comment on CASSANDRA-5483 at 4/11/14 5:42 PM:
--

I made some additional changes. Everything is included in 
[^5483-v10-rebased-and-squashed-471f5cc.patch], but I attached 
[^5483-v10-17-minor-bugfixes-and-changes.patch] to make it more convenient 
to review.

-Repair fails without 
[7000-2.1-v2.txt|https://issues.apache.org/jira/secure/attachment/12639633/7000-2.1-v2.txt]
 so I've included that patch in the test code.-

Overview:
 * Limit exponential backoff.
 * Handle the case where traceType is negative.
 * Reimplement log2 in BitUtil style.
 * Forgot to add a trace parameter.

{noformat}
# rebased against trunk @ 471f5cc, tested against cbb3c8f
# git checkout cbb3c8f
W=https://issues.apache.org/jira/secure/attachment
for url in \
  $W/12639821/5483-v10-rebased-and-squashed-471f5cc.patch
do
  { [ -e $(basename $url) ] || curl -sO $url; } && git apply $(basename $url)
done &&
ant clean && ant &&
./ccm-repair-test -kR &&
ccm node1 stop &&
ccm node1 clear &&
ccm node1 start &&
./ccm-repair-test -rt
{noformat}

Edit: 7000 landed.



was (Author: usrbincc):
I made some additional changes. Everything is included in 
[^5483-v10-rebased-and-squashed-471f5cc.patch], but I attached 
[^5483-v10-17-minor-bugfixes-and-changes.patch] to make it more convenient 
to review.

Repair fails without 
[7000-2.1-v2.txt|https://issues.apache.org/jira/secure/attachment/12639633/7000-2.1-v2.txt]
 so I've included that patch in the test code.

Overview:
 * Limit exponential backoff.
 * Handle the case where traceType is negative.
 * Reimplement log2 in BitUtil style.
 * Forgot to add a trace parameter.

{noformat}
# rebased against trunk @ 471f5cc
# git checkout 471f5cc
W=https://issues.apache.org/jira/secure/attachment
for url in \
  $W/12639821/5483-v10-rebased-and-squashed-471f5cc.patch \
  $W/12639633/7000-2.1-v2.txt
do
  { [ -e $(basename $url) ] || curl -sO $url; } && git apply $(basename $url)
done &&
ant clean && ant &&
./ccm-repair-test -kR &&
ccm node1 stop &&
ccm node1 clear &&
ccm node1 start &&
./ccm-repair-test -rt
{noformat}




[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966868#comment-13966868
 ] 

Martin Bligh commented on CASSANDRA-6525:
-

(copied from 6981)
I thought it was interesting how far apart these two numbers were:

java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 
20402577 but 1879048192 requested

And that the requested number is vaguely close to 2^31 - did something do a 
negative number and wrap a 32-bit signed value here?
To be fair, it's not that close to 2^31, but still way off what was expected?
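
For reference, 1879048192 is 0x70000000 (1.75 GiB): large, but still a positive 32-bit value, so the requested size did not itself wrap. A quick check of both facts, purely illustrative and unrelated to the Cassandra code:

```java
public final class WrapSketch
{
    public static void main(String[] args)
    {
        int requested = 1879048192;
        System.out.println(Integer.toHexString(requested)); // 70000000
        System.out.println(requested > 0);                  // true: not a wrapped negative

        // What a genuine 32-bit signed wrap looks like:
        int wrapped = Integer.MAX_VALUE + 1;                // silently overflows
        System.out.println(wrapped);                        // -2147483648
    }
}
```

So if 32-bit arithmetic is involved, it would have to be an overflow that still landed on a positive value, not a straightforward sign flip.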

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler

 I am developing a system on my single machine using VMware Player with 1 GB 
 RAM and a 1 GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE, and it has just below 10 records, I get this error in the 
 system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
 at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
 at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 ... 27 more
 E.g.
 SELECT * FROM table;
 It's fine.
 SELECT * FROM table WHERE field = 'N';
 field is the partition key.
 It said Request did not complete within rpc_timeout. in cqlsh.





[jira] [Commented] (CASSANDRA-3668) Parallel streaming for sstableloader

2014-04-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966872#comment-13966872
 ] 

Joshua McKenzie commented on CASSANDRA-3668:


Have a stabilized version against 2.0.6.  We have some other issues on trunk 
right now on the streaming path, so I'll wait until we have that ironed out to 
rebase to trunk, re-test, and post the patch.  Some performance numbers against a 
single node locally:
{code:title=single_node}
  Summary statistics:
 Connections per host: : 1
 Total files transferred:  : 76
 Total bytes transferred:  : 2037105326
 Total duration (ms):  : 43382
 Average transfer rate (MB/s): : 22
 Peak transfer rate (MB/s):: 25
  Summary statistics:
 Connections per host: : 2
 Total files transferred:  : 76
 Total bytes transferred:  : 2037105326
 Total duration (ms):  : 25794
 Average transfer rate (MB/s): : 38
 Peak transfer rate (MB/s):: 45
  Summary statistics:
 Connections per host: : 4
 Total files transferred:  : 76
 Total bytes transferred:  : 2037105326
 Total duration (ms):  : 20063
 Average transfer rate (MB/s): : 48
 Peak transfer rate (MB/s):: 60
  Summary statistics:
 Connections per host: : 6
 Total files transferred:  : 76
 Total bytes transferred:  : 2037105326
 Total duration (ms):  : 19350
 Average transfer rate (MB/s): : 50
 Peak transfer rate (MB/s):: 66
{code}

With 3 nodes hosted locally on ccm and 6 connections per host, I'm pushing a 
comparable 65MB/s peak and 44MB/s average.

I'll update once we get trunk sorted out and I rebase to it.
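
One plausible way to spread sstable files over the per-host connections is a greedy assignment of each file to the currently least-loaded connection. This is a hypothetical sketch of that load-balancing idea only; the actual distribution strategy in the patch is not shown here.

```java
import java.util.Arrays;

public final class ConnectionAssignSketch
{
    // Greedily assign each file (by size) to the connection with the
    // fewest bytes so far; returns the connection index per file.
    public static int[] assign(long[] fileSizes, int connections)
    {
        long[] load = new long[connections];
        int[] owner = new int[fileSizes.length];
        for (int f = 0; f < fileSizes.length; f++)
        {
            int best = 0;
            for (int c = 1; c < connections; c++)
                if (load[c] < load[best])
                    best = c;
            owner[f] = best;
            load[best] += fileSizes[f];
        }
        return owner;
    }

    public static void main(String[] args)
    {
        long[] sizes = { 500, 400, 300, 200, 100 };
        System.out.println(Arrays.toString(assign(sizes, 2))); // [0, 1, 1, 0, 0]
    }
}
```

Balancing by bytes rather than file count matters when sstable sizes are skewed, which the diminishing returns above (22 -> 38 -> 48 -> 50 MB/s) hint at once the link or disk saturates.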

 Parallel streaming for sstableloader
 

 Key: CASSANDRA-3668
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3668
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Reporter: Manish Zope
Assignee: Joshua McKenzie
Priority: Minor
  Labels: streaming
 Fix For: 2.1 beta2

 Attachments: 3668-1.1-v2.txt, 3668-1.1.txt, 
 3688-reply_before_closing_writer.txt, sstable-loader performance.txt

   Original Estimate: 48h
  Remaining Estimate: 48h

 One of my colleagues reported a bug regarding the degraded performance 
 of the sstable generator and sstable loader:
 ISSUE: https://issues.apache.org/jira/browse/CASSANDRA-3589 
 As stated in the above issue, the generator performance was rectified, but the 
 performance of the sstableloader is still an issue.
 3589 is marked as a duplicate of 3618. Both issues show resolved status, but the 
 problem with sstableloader still exists.
 So I am opening another issue so that the sstableloader problem does not go 
 unnoticed.
 FYI: We have tested the generator part with the patch given in 3589. It's 
 working fine.
 Please let us know if you require further inputs from our side.





[1/2] git commit: Shutdown batchlog executor in SS#drain()

2014-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 47ace44e1 -> 8845c5108


Shutdown batchlog executor in SS#drain()

patch by Sergio Bossa; reviewed by Aleksey Yeschenko for CASSANDRA-7025


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe94e90f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe94e90f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe94e90f

Branch: refs/heads/cassandra-2.0
Commit: fe94e90f4bd9274a0f0ab10616de2215da8d6b17
Parents: d41c075
Author: Sergio Bossa sergio.bo...@gmail.com
Authored: Fri Apr 11 15:45:53 2014 +0100
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:50:50 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b3e5310..07c09cf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
  * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
  * Schedule schema pulls on change (CASSANDRA-6971)
  * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+ * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 7bfbf0c..b8dbadd 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -77,7 +77,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private final AtomicLong totalBatchesReplayed = new AtomicLong();
 private final AtomicBoolean isReplaying = new AtomicBoolean();
 
-private static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
+public static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
 
 public void start()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7541f4..1e7bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3499,6 +3499,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 FBUtilities.waitOnFutures(flushes);
 
+BatchlogManager.batchlogTasks.shutdown();
+BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+
 ColumnFamilyStore.postFlushExecutor.shutdown();
 ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
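
The core of the patch is the two-line executor shutdown added to SS#drain(). A minimal standalone sketch of that pattern, using a plain ScheduledExecutorService in place of Cassandra's DebuggableScheduledThreadPoolExecutor (the class name BatchlogDrainDemo is illustrative, not from the patch):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BatchlogDrainDemo
{
    // shutdown() stops new task submissions; awaitTermination() then bounds
    // the wait for tasks already in flight -- mirroring the two lines the
    // patch adds before the postFlushExecutor shutdown.
    public static boolean drain(ScheduledExecutorService batchlogTasks) throws InterruptedException
    {
        batchlogTasks.shutdown();
        return batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException
    {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.schedule(() -> {}, 10, TimeUnit.MILLISECONDS);
        // returns true once the pending task has run and the pool terminated
        System.out.println("terminated: " + drain(executor));
    }
}
```

Ordering matters here: the batchlog executor is stopped after the memtable flushes complete but before the post-flush executor, so no batchlog replay can race with the commitlog recycling that drain() performs next.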
 



[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8845c510
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8845c510
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8845c510

Branch: refs/heads/trunk
Commit: 8845c5108f12677134f784467c72f8c6050dbc15
Parents: 47ace44 fe94e90
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 20:58:21 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:58:21 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/CHANGES.txt
--
diff --cc CHANGES.txt
index e71f1af,07c09cf..9edd705
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -58,50 -4,17 +58,51 @@@ Merged from 1.2
   * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
   * Schedule schema pulls on change (CASSANDRA-6971)
   * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+  * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
  
  
 -1.2.16
 - * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)
 - * add extra SSL cipher suites (CASSANDRA-6613)
 - * fix nodetool getsstables for blob PK (CASSANDRA-6803)
 +2.0.6
 + * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
 + * Pool CqlRecordWriter clients by inetaddress rather than Range 
 +   (CASSANDRA-6665)
 + * Fix compaction_history timestamps (CASSANDRA-6784)
 + * Compare scores of full replica ordering in DES (CASSANDRA-6883)
 + * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
 + * Allow repairing between specific replicas (CASSANDRA-6440)
 + * Allow per-dc enabling of hints (CASSANDRA-6157)
 + * Add compatibility for Hadoop 0.2.x (CASSANDRA-5201)
 + * Fix EstimatedHistogram races (CASSANDRA-6682)
 + * Failure detector correctly converts initial value to nanos (CASSANDRA-6658)
 + * Add nodetool taketoken to relocate vnodes (CASSANDRA-4445)
 + * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
 + * Improve nodetool cfhistograms formatting (CASSANDRA-6360)
 + * Expose bulk loading progress over JMX (CASSANDRA-4757)
 + * Correctly handle null with IF conditions and TTL (CASSANDRA-6623)
 + * Account for range/row tombstones in tombstone drop
 +   time histogram (CASSANDRA-6522)
 + * Stop CommitLogSegment.close() from calling sync() (CASSANDRA-6652)
 + * Make commitlog failure handling configurable (CASSANDRA-6364)
 + * Avoid overlaps in LCS (CASSANDRA-6688)
 + * Improve support for paginating over composites (CASSANDRA-4851)
 + * Fix count(*) queries in a mixed cluster (CASSANDRA-6707)
 + * Improve repair tasks(snapshot, differencing) concurrency (CASSANDRA-6566)
 + * Fix replaying pre-2.0 commit logs (CASSANDRA-6714)
 + * Add static columns to CQL3 (CASSANDRA-6561)
 + * Optimize single partition batch statements (CASSANDRA-6737)
 + * Disallow post-query re-ordering when paging (CASSANDRA-6722)
 + * Fix potential paging bug with deleted columns (CASSANDRA-6748)
 + * Fix NPE on BulkLoader caused by losing StreamEvent (CASSANDRA-6636)
 + * Fix truncating compression metadata (CASSANDRA-6791)
 + * Fix UPDATE updating PRIMARY KEY columns implicitly (CASSANDRA-6782)
 + * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
 +   (CASSANDRA-6733)
 + * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
 + * Fix CQLSSTableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 + * Fix HSHA server introducing corrupt data (CASSANDRA-6285)
 + * Fix CAS conditions for COMPACT STORAGE tables (CASSANDRA-6813)
 +Merged from 1.2:
   * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
   * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)
 - * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
 -   (CASSANDRA-6732)
   * Fix broken streams when replacing with same IP (CASSANDRA-6622)
   * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
   * Fix partition and range deletes not triggering flush (CASSANDRA-6655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/service/StorageService.java
--



[1/4] git commit: Shutdown batchlog executor in SS#drain()

2014-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk cbb3c8f48 -> 8c0aa9927


Shutdown batchlog executor in SS#drain()

patch by Sergio Bossa; reviewed by Aleksey Yeschenko for CASSANDRA-7025


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe94e90f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe94e90f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe94e90f

Branch: refs/heads/trunk
Commit: fe94e90f4bd9274a0f0ab10616de2215da8d6b17
Parents: d41c075
Author: Sergio Bossa sergio.bo...@gmail.com
Authored: Fri Apr 11 15:45:53 2014 +0100
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:50:50 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b3e5310..07c09cf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
  * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
  * Schedule schema pulls on change (CASSANDRA-6971)
  * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+ * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 7bfbf0c..b8dbadd 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -77,7 +77,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private final AtomicLong totalBatchesReplayed = new AtomicLong();
 private final AtomicBoolean isReplaying = new AtomicBoolean();
 
-private static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
+public static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
 
 public void start()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7541f4..1e7bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3499,6 +3499,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 FBUtilities.waitOnFutures(flushes);
 
+BatchlogManager.batchlogTasks.shutdown();
+BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+
 ColumnFamilyStore.postFlushExecutor.shutdown();
 ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
 



[3/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/service/StorageService.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f00662a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f00662a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f00662a

Branch: refs/heads/trunk
Commit: 3f00662a199f35275226f5ccfce2b81d227cf38c
Parents: 9d08e50 8845c51
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 21:02:16 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 21:02:16 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 3e94172,7382cbd..6e18567
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3543,10 -3461,9 +3543,13 @@@ public class StorageService extends Not
  }
  FBUtilities.waitOnFutures(flushes);
  
+ BatchlogManager.batchlogTasks.shutdown();
+ BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+ 
 +// whilst we've flushed all the CFs, which will have recycled all 
completed segments, we want to ensure
 +// there are no segments to replay, so we force the recycling of any 
remaining (should be at most one)
 +CommitLog.instance.forceRecycleAllSegments();
 +
  ColumnFamilyStore.postFlushExecutor.shutdown();
  ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
  



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8845c510
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8845c510
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8845c510

Branch: refs/heads/cassandra-2.1
Commit: 8845c5108f12677134f784467c72f8c6050dbc15
Parents: 47ace44 fe94e90
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 20:58:21 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:58:21 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/CHANGES.txt
--
diff --cc CHANGES.txt
index e71f1af,07c09cf..9edd705
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -58,50 -4,17 +58,51 @@@ Merged from 1.2
   * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
   * Schedule schema pulls on change (CASSANDRA-6971)
   * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+  * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
  
  
 -1.2.16
 - * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)
 - * add extra SSL cipher suites (CASSANDRA-6613)
 - * fix nodetool getsstables for blob PK (CASSANDRA-6803)
 +2.0.6
 + * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
 + * Pool CqlRecordWriter clients by inetaddress rather than Range 
 +   (CASSANDRA-6665)
 + * Fix compaction_history timestamps (CASSANDRA-6784)
 + * Compare scores of full replica ordering in DES (CASSANDRA-6883)
 + * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
 + * Allow repairing between specific replicas (CASSANDRA-6440)
 + * Allow per-dc enabling of hints (CASSANDRA-6157)
 + * Add compatibility for Hadoop 0.2.x (CASSANDRA-5201)
 + * Fix EstimatedHistogram races (CASSANDRA-6682)
 + * Failure detector correctly converts initial value to nanos (CASSANDRA-6658)
 + * Add nodetool taketoken to relocate vnodes (CASSANDRA-4445)
 + * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
 + * Improve nodetool cfhistograms formatting (CASSANDRA-6360)
 + * Expose bulk loading progress over JMX (CASSANDRA-4757)
 + * Correctly handle null with IF conditions and TTL (CASSANDRA-6623)
 + * Account for range/row tombstones in tombstone drop
 +   time histogram (CASSANDRA-6522)
 + * Stop CommitLogSegment.close() from calling sync() (CASSANDRA-6652)
 + * Make commitlog failure handling configurable (CASSANDRA-6364)
 + * Avoid overlaps in LCS (CASSANDRA-6688)
 + * Improve support for paginating over composites (CASSANDRA-4851)
 + * Fix count(*) queries in a mixed cluster (CASSANDRA-6707)
 + * Improve repair tasks(snapshot, differencing) concurrency (CASSANDRA-6566)
 + * Fix replaying pre-2.0 commit logs (CASSANDRA-6714)
 + * Add static columns to CQL3 (CASSANDRA-6561)
 + * Optimize single partition batch statements (CASSANDRA-6737)
 + * Disallow post-query re-ordering when paging (CASSANDRA-6722)
 + * Fix potential paging bug with deleted columns (CASSANDRA-6748)
 + * Fix NPE on BulkLoader caused by losing StreamEvent (CASSANDRA-6636)
 + * Fix truncating compression metadata (CASSANDRA-6791)
 + * Fix UPDATE updating PRIMARY KEY columns implicitly (CASSANDRA-6782)
 + * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
 +   (CASSANDRA-6733)
 + * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
  + * Fix CQLSSTableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 + * Fix HSHA server introducing corrupt data (CASSANDRA-6285)
 + * Fix CAS conditions for COMPACT STORAGE tables (CASSANDRA-6813)
 +Merged from 1.2:
   * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
   * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)
 - * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
 -   (CASSANDRA-6732)
   * Fix broken streams when replacing with same IP (CASSANDRA-6622)
   * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
   * Fix partition and range deletes not triggering flush (CASSANDRA-6655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/service/StorageService.java
--



[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/service/StorageService.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f00662a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f00662a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f00662a

Branch: refs/heads/cassandra-2.1
Commit: 3f00662a199f35275226f5ccfce2b81d227cf38c
Parents: 9d08e50 8845c51
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 21:02:16 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 21:02:16 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f00662a/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 3e94172,7382cbd..6e18567
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -3543,10 -3461,9 +3543,13 @@@ public class StorageService extends Not
  }
  FBUtilities.waitOnFutures(flushes);
  
+ BatchlogManager.batchlogTasks.shutdown();
+ BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+ 
 +// whilst we've flushed all the CFs, which will have recycled all 
completed segments, we want to ensure
 +// there are no segments to replay, so we force the recycling of any 
remaining (should be at most one)
 +CommitLog.instance.forceRecycleAllSegments();
 +
  ColumnFamilyStore.postFlushExecutor.shutdown();
  ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
  



[1/3] git commit: Shutdown batchlog executor in SS#drain()

2014-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 9d08e50da -> 3f00662a1


Shutdown batchlog executor in SS#drain()

patch by Sergio Bossa; reviewed by Aleksey Yeschenko for CASSANDRA-7025


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe94e90f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe94e90f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe94e90f

Branch: refs/heads/cassandra-2.1
Commit: fe94e90f4bd9274a0f0ab10616de2215da8d6b17
Parents: d41c075
Author: Sergio Bossa sergio.bo...@gmail.com
Authored: Fri Apr 11 15:45:53 2014 +0100
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:50:50 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b3e5310..07c09cf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
  * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
  * Schedule schema pulls on change (CASSANDRA-6971)
  * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+ * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 7bfbf0c..b8dbadd 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -77,7 +77,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private final AtomicLong totalBatchesReplayed = new AtomicLong();
 private final AtomicBoolean isReplaying = new AtomicBoolean();
 
-private static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
+public static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
 
 public void start()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7541f4..1e7bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3499,6 +3499,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 FBUtilities.waitOnFutures(flushes);
 
+BatchlogManager.batchlogTasks.shutdown();
+BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+
 ColumnFamilyStore.postFlushExecutor.shutdown();
 ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
 



[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-11 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c0aa992
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c0aa992
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c0aa992

Branch: refs/heads/trunk
Commit: 8c0aa99279418a2f2e22f6fb7da224e7db5d22a2
Parents: cbb3c8f 3f00662
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 21:03:53 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 21:03:53 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c0aa992/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c0aa992/src/java/org/apache/cassandra/service/StorageService.java
--



git commit: Shutdown batchlog executor in SS#drain()

2014-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 d41c07572 -> fe94e90f4


Shutdown batchlog executor in SS#drain()

patch by Sergio Bossa; reviewed by Aleksey Yeschenko for CASSANDRA-7025


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fe94e90f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fe94e90f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fe94e90f

Branch: refs/heads/cassandra-1.2
Commit: fe94e90f4bd9274a0f0ab10616de2215da8d6b17
Parents: d41c075
Author: Sergio Bossa sergio.bo...@gmail.com
Authored: Fri Apr 11 15:45:53 2014 +0100
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:50:50 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b3e5310..07c09cf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@
  * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
  * Schedule schema pulls on change (CASSANDRA-6971)
  * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+ * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 7bfbf0c..b8dbadd 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -77,7 +77,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private final AtomicLong totalBatchesReplayed = new AtomicLong();
 private final AtomicBoolean isReplaying = new AtomicBoolean();
 
-private static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
+public static final ScheduledExecutorService batchlogTasks = new 
DebuggableScheduledThreadPoolExecutor("BatchlogTasks");
 
 public void start()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fe94e90f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a7541f4..1e7bed4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3499,6 +3499,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 }
 FBUtilities.waitOnFutures(flushes);
 
+BatchlogManager.batchlogTasks.shutdown();
+BatchlogManager.batchlogTasks.awaitTermination(60, TimeUnit.SECONDS);
+
 ColumnFamilyStore.postFlushExecutor.shutdown();
 ColumnFamilyStore.postFlushExecutor.awaitTermination(60, 
TimeUnit.SECONDS);
 



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8845c510
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8845c510
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8845c510

Branch: refs/heads/cassandra-2.0
Commit: 8845c5108f12677134f784467c72f8c6050dbc15
Parents: 47ace44 fe94e90
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Apr 11 20:58:21 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Apr 11 20:58:21 2014 +0300

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 src/java/org/apache/cassandra/service/StorageService.java | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/CHANGES.txt
--
diff --cc CHANGES.txt
index e71f1af,07c09cf..9edd705
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -58,50 -4,17 +58,51 @@@ Merged from 1.2
   * Continue assassinating even if the endpoint vanishes (CASSANDRA-6787)
   * Schedule schema pulls on change (CASSANDRA-6971)
   * Non-droppable verbs shouldn't be dropped from OTC (CASSANDRA-6980)
+  * Shutdown batchlog executor in SS#drain() (CASSANDRA-7025)
  
  
 -1.2.16
 - * Add UNLOGGED, COUNTER options to BATCH documentation (CASSANDRA-6816)
 - * add extra SSL cipher suites (CASSANDRA-6613)
 - * fix nodetool getsstables for blob PK (CASSANDRA-6803)
 +2.0.6
 + * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
 + * Pool CqlRecordWriter clients by inetaddress rather than Range 
 +   (CASSANDRA-6665)
 + * Fix compaction_history timestamps (CASSANDRA-6784)
 + * Compare scores of full replica ordering in DES (CASSANDRA-6883)
 + * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
 + * Allow repairing between specific replicas (CASSANDRA-6440)
 + * Allow per-dc enabling of hints (CASSANDRA-6157)
 + * Add compatibility for Hadoop 0.2.x (CASSANDRA-5201)
 + * Fix EstimatedHistogram races (CASSANDRA-6682)
 + * Failure detector correctly converts initial value to nanos (CASSANDRA-6658)
 + * Add nodetool taketoken to relocate vnodes (CASSANDRA-4445)
 + * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
 + * Improve nodetool cfhistograms formatting (CASSANDRA-6360)
 + * Expose bulk loading progress over JMX (CASSANDRA-4757)
 + * Correctly handle null with IF conditions and TTL (CASSANDRA-6623)
 + * Account for range/row tombstones in tombstone drop
 +   time histogram (CASSANDRA-6522)
 + * Stop CommitLogSegment.close() from calling sync() (CASSANDRA-6652)
 + * Make commitlog failure handling configurable (CASSANDRA-6364)
 + * Avoid overlaps in LCS (CASSANDRA-6688)
 + * Improve support for paginating over composites (CASSANDRA-4851)
 + * Fix count(*) queries in a mixed cluster (CASSANDRA-6707)
 + * Improve repair tasks(snapshot, differencing) concurrency (CASSANDRA-6566)
 + * Fix replaying pre-2.0 commit logs (CASSANDRA-6714)
 + * Add static columns to CQL3 (CASSANDRA-6561)
 + * Optimize single partition batch statements (CASSANDRA-6737)
 + * Disallow post-query re-ordering when paging (CASSANDRA-6722)
 + * Fix potential paging bug with deleted columns (CASSANDRA-6748)
 + * Fix NPE on BulkLoader caused by losing StreamEvent (CASSANDRA-6636)
 + * Fix truncating compression metadata (CASSANDRA-6791)
 + * Fix UPDATE updating PRIMARY KEY columns implicitly (CASSANDRA-6782)
 + * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
 +   (CASSANDRA-6733)
 + * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
  + * Fix CQLSSTableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 + * Fix HSHA server introducing corrupt data (CASSANDRA-6285)
 + * Fix CAS conditions for COMPACT STORAGE tables (CASSANDRA-6813)
 +Merged from 1.2:
   * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
   * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)
 - * Don't attempt cross-dc forwarding in mixed-version cluster with 1.1 
 -   (CASSANDRA-6732)
   * Fix broken streams when replacing with same IP (CASSANDRA-6622)
   * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
   * Fix partition and range deletes not triggering flush (CASSANDRA-6655)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8845c510/src/java/org/apache/cassandra/service/StorageService.java
--



[jira] [Commented] (CASSANDRA-7015) sstableloader NPE

2014-04-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967002#comment-13967002
 ] 

Yuki Morishita commented on CASSANDRA-7015:
---

I'm also getting NPE when using sstableloader.
My exception is:

{code}
Exception in thread main java.lang.ExceptionInInitializerError
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:36)
at 
org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:162)
at 
org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:143)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1355)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1287)
at 
org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1200)
at 
org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:123)
at java.io.File.list(File.java:1155)
at 
org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:74)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:84)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.config.DatabaseDescriptor.getFileCacheSizeInMB(DatabaseDescriptor.java:1346)
at 
org.apache.cassandra.service.FileCacheService.<clinit>(FileCacheService.java:39)
... 11 more
{code}

I think this comes from the change in CASSANDRA-6912, which made Index.db (ifile) 
use PoolingSegmentedFile instead of BufferedSegmentedFile; since sstableloader 
does not load cassandra.yaml, index_access_mode is always null.
(https://github.com/apache/cassandra/commit/5ebadc11e36749e6479f9aba19406db3aacdaf41#diff-24a44f83dd1458c0959d90752a16bab5L278)
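
The failure mode described above can be sketched in isolation: an offline tool never loads cassandra.yaml, so the parsed config object is null and a getter that dereferences it throws NullPointerException during static class initialization. This is a hypothetical illustration, assuming a simplified Config holder and a 512 MB fallback; none of these names or defaults are Cassandra's actual fields.

```java
public class FileCacheConfigDemo
{
    static class Config
    {
        Integer file_cache_size_in_mb; // may be null even when a yaml is loaded
    }

    // null in offline tools such as sstableloader, which skip yaml loading
    static Config conf;

    // Original-style getter: NPEs when conf == null (the reported crash).
    public static int getFileCacheSizeInMBUnsafe()
    {
        return conf.file_cache_size_in_mb;
    }

    // Defensive variant: fall back to a default when no config was loaded.
    public static int getFileCacheSizeInMB()
    {
        if (conf == null || conf.file_cache_size_in_mb == null)
            return 512; // illustrative default, not Cassandra's
        return conf.file_cache_size_in_mb;
    }

    public static void main(String[] args)
    {
        // with no config loaded, only the defensive getter survives
        System.out.println(getFileCacheSizeInMB());
    }
}
```

Because the NPE is thrown inside FileCacheService's static initializer, it surfaces as an ExceptionInInitializerError at the first use of the pool, which matches the trace above.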

 sstableloader NPE
 -

 Key: CASSANDRA-7015
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7015
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Benedict

 The basic snapshot dtest is failing:
 {code}
 PRINT_DEBUG=true nosetests2 -x -s -v snapshot_test.py:TestSnapshot
 {code}
 This is due to this error from sstableloader:
 {code}
 Opening sstables and calculating sections to stream
 null
 java.lang.NullPointerException
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getFilename(SSTableReader.java:627)
   at org.apache.cassandra.io.sstable.SSTable.toString(SSTable.java:243)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.loadSummary(SSTableReader.java:802)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openForBatch(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:113)
   at java.io.File.list(File.java:1155)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:74)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
   at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:84)
 FAIL
 {code}
 This was working as of the fix for CASSANDRA-6965, but it's broken again. I 
 think it's due to the changes in 5ebadc11e36749e, so I'm assigning it to you, 
 Benedict.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7015) sstableloader NPE

2014-04-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967003#comment-13967003
 ] 

Joshua McKenzie commented on CASSANDRA-7015:


bq. Is there a good reason to hide the stacktrace normally? I'm kinda -1 on 
that behaviour.
I'm -1 on that as well; that behavior caused me a few headaches while working 
on CASSANDRA-3668. I can tweak it so we default to always printing the trace 
with my changes on that ticket, unless Yuki has a reason why we shouldn't go 
that route.
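
The default being argued for can be sketched as follows (a hypothetical stand-alone helper, not sstableloader's or Cassandra's actual code): report the exception message and, by default, the full stack trace rather than hiding it behind a debug flag.

```python
import traceback


def report_error(exc, always_print_trace=True):
    """Build the error report a CLI tool would print (hypothetical helper)."""
    lines = [str(exc)]
    if always_print_trace:
        # Behaviour under discussion: include the trace by default.
        lines.append("".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)))
    return "\n".join(lines)


try:
    raise RuntimeError("null")
except RuntimeError as e:
    print(report_error(e))         # message plus full traceback
    print(report_error(e, False))  # old behaviour: message only
```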

 sstableloader NPE
 -

 Key: CASSANDRA-7015
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7015
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: Benedict

 The basic snapshot dtest is failing:
 {code}
 PRINT_DEBUG=true nosetests2 -x -s -v snapshot_test.py:TestSnapshot
 {code}
 This is due to this error from sstableloader:
 {code}
 Opening sstables and calculating sections to stream
 null
 java.lang.NullPointerException
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getFilename(SSTableReader.java:627)
   at org.apache.cassandra.io.sstable.SSTable.toString(SSTable.java:243)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.loadSummary(SSTableReader.java:802)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.openForBatch(SSTableReader.java:343)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:113)
   at java.io.File.list(File.java:1155)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:74)
   at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
   at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:84)
 FAIL
 {code}
 This was working as of the fix for CASSANDRA-6965, but it's broken again. I 
 think it's due to the changes in 5ebadc11e36749e, so I'm assigning it to you, 
 Benedict.





[jira] [Created] (CASSANDRA-7026) CQL:WHERE ... IN with full partition keys

2014-04-11 Thread Dan Hunt (JIRA)
Dan Hunt created CASSANDRA-7026:
---

 Summary: CQL:WHERE ... IN with full partition keys
 Key: CASSANDRA-7026
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7026
 Project: Cassandra
  Issue Type: Wish
  Components: Core, Drivers (now out of tree)
Reporter: Dan Hunt


It would be handy to be able to pass a list of fully qualified composite 
partition keys to an IN filter, retrieving multiple distinct rows with a single 
SELECT. I'm not entirely sure how that would work, but it looks like it could 
be done with the existing token() function, like:

SELECT * FROM table WHERE token(keyPartA, keyPartB) IN (token(1, 1), token(4, 2))

Though I guess you'd also want some way to pass a list of tokens to a prepared 
statement through the driver. This of course all assumes that an IN filter 
could be faster than a bunch of separate prepared statements, which might not 
be true.
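
As a rough illustration of the query shape (the helper below is hypothetical, builds only the CQL text, and is not part of any driver), the IN clause over full composite partition keys could be generated like this:

```python
# Hypothetical helper: render the token()-based IN query suggested above
# from a table name, its partition-key columns, and a list of key tuples.

def token_in_query(table, key_columns, keys):
    token_cols = ", ".join(key_columns)
    token_values = ", ".join(
        "token({})".format(", ".join(str(v) for v in key)) for key in keys
    )
    return "SELECT * FROM {} WHERE token({}) IN ({})".format(
        table, token_cols, token_values
    )


q = token_in_query("table", ["keyPartA", "keyPartB"], [(1, 1), (4, 2)])
print(q)
```

In practice the token values would more likely be bound through a prepared statement, which is the open question raised above.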





[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-11 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967052#comment-13967052
 ] 

Mikhail Stepura commented on CASSANDRA-6831:


I'm trying to understand changes made by this commit (for CASSANDRA-5702): 
https://github.com/apache/cassandra/commit/67435b528dd474bd25fc90eaace6e6786f75ce04#diff-75146ba408a51071a0b19ffdfbb2bb3cL1965

Before that change, only tables whose REGULAR columns had {{componentIndex == 
null}} were Thrift-*compatible*, i.e. tables WITH COMPACT STORAGE weren't 
compatible. After those changes, only tables which have a REGULAR column with 
{{componentIndex != null}} became Thrift-*incompatible*, i.e. tables WITH 
COMPACT STORAGE became compatible.

[~slebresne] [~iamaleksey] could you guys shed more light on that? Can we 
switch back to the previous behavior?


 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17


 If a COMPACT STORAGE table is altered using the CLI, all information about the 
 column names reverts to the initial key, column1, column2 naming.  
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted.  This means that clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error stating 
 the column already exists will be raised.  The only way to get them back is to 
 ALTER TABLE and change the comment or something, which will bring back all 
 the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}





[jira] [Comment Edited] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967092#comment-13967092
 ] 

Ryan McGuire edited comment on CASSANDRA-6525 at 4/11/14 8:59 PM:
--

fwiw, I've written a multi-threaded test for this using the python-driver. It's 
attached above as 6981_test.py. I used the criteria stated in CASSANDRA-6981:

bq. created about 16 tables, all the same, each with about 5 text fields and 5 
binary fields. Most of those fields had a secondary index. Then insert into all 
the tables in parallel.

I'm using 16 tables, each with 5 text fields and 5 blob fields, inserting 
10,000 rows into each table in parallel, and then selecting that data out based 
on a single field (blob5) that has 5 different options.
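
Schematically, the test shape looks like the sketch below (a self-contained stand-in: a locked dict replaces the real cluster so it runs anywhere, and the row count is shrunk; the attached 6981_test.py does the real work via the python-driver):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

N_TABLES = 16
N_ROWS = 100  # 10,000 in the real test; reduced so this sketch runs quickly

lock = threading.Lock()
tables = {f"t{i}": [] for i in range(N_TABLES)}


def insert_rows(name):
    # Each "table" gets rows with 5 text fields and 5 blob fields.
    for row in range(N_ROWS):
        record = {f"text{i}": f"v{row}" for i in range(1, 6)}
        record.update({f"blob{i}": bytes([row % 5]) for i in range(1, 6)})
        with lock:
            tables[name].append(record)


# Insert into all tables in parallel.
with ThreadPoolExecutor(max_workers=N_TABLES) as pool:
    pool.map(insert_rows, tables)

# Select rows back out by a single field (blob5) that has 5 distinct values.
hits = [r for r in tables["t0"] if r["blob5"] == bytes([0])]
print(len(hits))
```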

I could not reproduce the error in this ticket; however, I did get this error 
several times:

{code}
ERROR [ReadStage:136] 2014-04-11 16:55:36,312 CassandraDaemon.java (line 198) 
Exception in thread Thread[ReadStage:136,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1920)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
at 
org.apache.cassandra.service.FileCacheService.get(FileCacheService.java:96)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:36)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:57)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1540)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1369)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:260)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:103)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1735)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:556)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1723)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1374)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1916)
... 3 more
{code}


[jira] [Updated] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6525:


Attachment: 6981_test.py

fwiw, I've written a multi-threaded test for this using the python-driver. I 
used the criteria stated in CASSANDRA-6981:

bq. created about 16 tables, all the same, each with about 5 text fields and 5 
binary fields. Most of those fields had a secondary index. Then insert into all 
the tables in parallel.

I'm using 16 tables, each with 5 text fields and 5 blob fields, inserting 
10,000 rows into each table in parallel, and then selecting that data out based 
on a single field (blob5) that has 5 different options.

I could not reproduce the error in this ticket; however, I did get this error 
several times:

{code}
ERROR [ReadStage:136] 2014-04-11 16:55:36,312 CassandraDaemon.java (line 198) 
Exception in thread Thread[ReadStage:136,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1920)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NullPointerException
at 
org.apache.cassandra.io.util.RandomAccessReader.getTotalBufferSize(RandomAccessReader.java:157)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.getTotalBufferSize(CompressedRandomAccessReader.java:159)
at 
org.apache.cassandra.service.FileCacheService.get(FileCacheService.java:96)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:36)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:57)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1540)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1369)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:260)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:103)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1735)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:556)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1723)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1374)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1916)
... 3 more
{code}

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler
 Attachments: 6981_test.py


 I am developing a system on my single machine using VMware Player with 1GB 
 RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE and the table has just under 10 records, I get this error in the 
 system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at 

[jira] [Updated] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6525:


Attachment: (was: 6981_test.py)

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler
 Attachments: 6981_test.py


 I am developing a system on my single machine using VMware Player with 1GB 
 RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE and the table has just under 10 records, I get this error in the 
 system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
 at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
 at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 ... 27 more
 E.g.
 SELECT * FROM table;
 It's fine.
 SELECT * FROM table WHERE field = 'N';
 field is the partition key.
 cqlsh says "Request did not complete within rpc_timeout."





[jira] [Updated] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6525:


Attachment: 6981_test.py

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler
 Attachments: 6981_test.py


 I am developing a system on my single machine using VMware Player with 1GB 
 RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE and the table has just under 10 records, I get this error in the 
 system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.init(MergeIterator.java:87)
 at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
 at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
 at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at java.io.RandomAccessFile.readFully(Unknown Source)
 at 
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
 at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 ... 27 more
 E.g.
 SELECT * FROM table;
 It's fine.
 SELECT * FROM table WHERE field = 'N';
 field is the partition key.
 cqlsh says "Request did not complete within rpc_timeout."





[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967134#comment-13967134
 ] 

Ryan McGuire commented on CASSANDRA-6525:
-

Running this a few more times, I was able to get this on 2.0.5:

{code}
ERROR [ReadStage:90] 2014-04-11 17:37:57,768 CassandraDaemon.java (line 192) 
Exception in thread Thread[ReadStage:90,5,main]
java.lang.RuntimeException: 
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException: 
EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
java.io.EOFException: EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:82)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:166)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:105)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1742)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
... 3 more
Caused by: java.io.EOFException: EOF after 46084 bytes out of 48857
at 
org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:70)
... 22 more
{code}

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler
 Attachments: 6981_test.py


 I am developing a system on my single machine using VMware Player with 1GB 
 RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
 I use WHERE and the table has just under 10 records, I get this error in the 
 system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 

git commit: Fix ticket number in CHANGES

2014-04-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 8845c5108 - 294c0116d


Fix ticket number in CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/294c0116
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/294c0116
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/294c0116

Branch: refs/heads/cassandra-2.0
Commit: 294c0116d94682e3a6ad6f5778475fa047f2aca0
Parents: 8845c51
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:43:06 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:43:06 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/294c0116/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9edd705..451d046 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -63,10 +63,10 @@ Merged from 1.2:
 
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
- * Pool CqlRecordWriter clients by inetaddress rather than Range 
+ * Pool CqlRecordWriter clients by inetaddress rather than Range
(CASSANDRA-6665)
  * Fix compaction_history timestamps (CASSANDRA-6784)
- * Compare scores of full replica ordering in DES (CASSANDRA-6883)
+ * Compare scores of full replica ordering in DES (CASSANDRA-6683)
  * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
  * Allow repairing between specific replicas (CASSANDRA-6440)
  * Allow per-dc enabling of hints (CASSANDRA-6157)



[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-11 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7232783b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7232783b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7232783b

Branch: refs/heads/cassandra-2.1
Commit: 7232783bc5ab7134c1698d866ceb9cca330d0441
Parents: 3f00662 294c011
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:43:41 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:43:41 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7232783b/CHANGES.txt
--



[1/2] git commit: Fix ticket number in CHANGES

2014-04-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 3f00662a1 -> 7232783bc


Fix ticket number in CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/294c0116
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/294c0116
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/294c0116

Branch: refs/heads/cassandra-2.1
Commit: 294c0116d94682e3a6ad6f5778475fa047f2aca0
Parents: 8845c51
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:43:06 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:43:06 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/294c0116/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9edd705..451d046 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -63,10 +63,10 @@ Merged from 1.2:
 
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
- * Pool CqlRecordWriter clients by inetaddress rather than Range 
+ * Pool CqlRecordWriter clients by inetaddress rather than Range
(CASSANDRA-6665)
  * Fix compaction_history timestamps (CASSANDRA-6784)
- * Compare scores of full replica ordering in DES (CASSANDRA-6883)
+ * Compare scores of full replica ordering in DES (CASSANDRA-6683)
  * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
  * Allow repairing between specific replicas (CASSANDRA-6440)
  * Allow per-dc enabling of hints (CASSANDRA-6157)



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-11 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f04b775d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f04b775d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f04b775d

Branch: refs/heads/trunk
Commit: f04b775dc8c0ecf93b2f63eeb69142452cfa3c1c
Parents: 8c0aa99 7232783
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:44:11 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:44:11 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f04b775d/CHANGES.txt
--



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-11 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7232783b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7232783b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7232783b

Branch: refs/heads/trunk
Commit: 7232783bc5ab7134c1698d866ceb9cca330d0441
Parents: 3f00662 294c011
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:43:41 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:43:41 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7232783b/CHANGES.txt
--



[1/3] git commit: Fix ticket number in CHANGES

2014-04-11 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 8c0aa9927 -> f04b775dc


Fix ticket number in CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/294c0116
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/294c0116
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/294c0116

Branch: refs/heads/trunk
Commit: 294c0116d94682e3a6ad6f5778475fa047f2aca0
Parents: 8845c51
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Apr 11 16:43:06 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Apr 11 16:43:06 2014 -0500

--
 CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/294c0116/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9edd705..451d046 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -63,10 +63,10 @@ Merged from 1.2:
 
 2.0.6
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
- * Pool CqlRecordWriter clients by inetaddress rather than Range 
+ * Pool CqlRecordWriter clients by inetaddress rather than Range
(CASSANDRA-6665)
  * Fix compaction_history timestamps (CASSANDRA-6784)
- * Compare scores of full replica ordering in DES (CASSANDRA-6883)
+ * Compare scores of full replica ordering in DES (CASSANDRA-6683)
  * fix CME in SessionInfo updateProgress affecting netstats (CASSANDRA-6577)
  * Allow repairing between specific replicas (CASSANDRA-6440)
  * Allow per-dc enabling of hints (CASSANDRA-6157)



[jira] [Resolved] (CASSANDRA-7005) repair_test dtest fails on 2.1

2014-04-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-7005.
---

Resolution: Duplicate

repair_test is now passing. 
http://cassci.datastax.com/job/cassandra-2.1_dtest/102/testReport/

Closing as Duplicate of CASSANDRA-7000.

 repair_test dtest fails on 2.1
 --

 Key: CASSANDRA-7005
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7005
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 2.1 beta2


 {noformat}
 $ PRINT_DEBUG=true nosetests --nocapture --nologcapture --verbosity=3 
 repair_test.py
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
 simple_repair_order_preserving_test (repair_test.TestRepair) ... cluster ccm 
 directory: /tmp/dtest-BVfye7
 Starting cluster..
 Inserting data...
 Checking data on node3...
 Checking data on node1...
 Checking data on node2...
 starting repair...
 [2014-04-08 13:44:31,424] Starting repair command #1, repairing 3 ranges for 
 keyspace ks (seq=true, full=true)
 [2014-04-08 13:44:32,748] Repair session d262e390-bf4d-11e3-a482-75998baadb41 
 for range (00,0113427455640312821154458202477256070484] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #d262e390-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (00,0113427455640312821154458202477256070484]] Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,751] Repair session d2b98f10-bf4d-11e3-a482-75998baadb41 
 for range 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]
  failed with error org.apache.cassandra.exceptions.RepairException: [repair 
 #d2b98f10-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]]
  Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,753] Repair session d2dca770-bf4d-11e3-a482-75998baadb41 
 for range (56713727820156410577229101238628035242,00] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #d2dca770-bf4d-11e3-a482-75998baadb41 on ks/cf, 
 (56713727820156410577229101238628035242,00]] Validation failed in /127.0.0.2
 [2014-04-08 13:44:32,753] Repair command #1 finished
 [2014-04-08 13:44:32,770] Nothing to repair for keyspace 'system'
 [2014-04-08 13:44:32,783] Starting repair command #2, repairing 2 ranges for 
 keyspace system_traces (seq=true, full=true)
 [2014-04-08 13:44:34,635] Repair session d3310900-bf4d-11e3-a482-75998baadb41 
 for range 
 (0113427455640312821154458202477256070484,56713727820156410577229101238628035242]
  finished
 [2014-04-08 13:44:34,640] Repair session d3f80280-bf4d-11e3-a482-75998baadb41 
 for range (56713727820156410577229101238628035242,00] finished
 [2014-04-08 13:44:34,640] Repair command #2 finished
 Repair time: 4.63053512573
 FAIL
 ERROR
 simple_repair_test (repair_test.TestRepair) ... cluster ccm directory: 
 /tmp/dtest-_L5lTP
 Starting cluster..
 Inserting data...
 Checking data on node3...
 Checking data on node1...
 Checking data on node2...
 starting repair...
 [2014-04-08 13:47:14,109] Starting repair command #1, repairing 3 ranges for 
 keyspace ks (seq=true, full=true)
 [2014-04-08 13:47:15,291] Repair session 335a5840-bf4e-11e3-b691-75998baadb41 
 for range (-3074457345618258603,3074457345618258602] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #335a5840-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (-3074457345618258603,3074457345618258602]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,292] Repair session 33ad0c20-bf4e-11e3-b691-75998baadb41 
 for range (-9223372036854775808,-3074457345618258603] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #33ad0c20-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (-9223372036854775808,-3074457345618258603]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,295] Repair session 33e978e0-bf4e-11e3-b691-75998baadb41 
 for range (3074457345618258602,-9223372036854775808] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #33e978e0-bf4e-11e3-b691-75998baadb41 on ks/cf, 
 (3074457345618258602,-9223372036854775808]] Validation failed in /127.0.0.2
 [2014-04-08 13:47:15,295] Repair command #1 finished
 [2014-04-08 13:47:15,307] Nothing to repair for keyspace 'system'
 [2014-04-08 13:47:15,322] Starting repair command #2, repairing 2 ranges for 
 keyspace system_traces (seq=true, full=true)
 [2014-04-08 13:47:15,983] Repair session 3412f9e0-bf4e-11e3-b691-75998baadb41 
 for range (-3074457345618258603,3074457345618258602] finished
 [2014-04-08 13:47:15,988] Repair session 345d9770-bf4e-11e3-b691-75998baadb41 
 for range (3074457345618258602,-9223372036854775808] finished
 

[jira] [Commented] (CASSANDRA-6525) Cannot select data which using WHERE

2014-04-11 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967148#comment-13967148
 ] 

Ryan McGuire commented on CASSANDRA-6525:
-

This repros on git:cassandra-2.0 HEAD as well:

{code}
ERROR [ReadStage:82] 2014-04-11 17:49:50,903 CassandraDaemon.java (line 216) 
Exception in thread Thread[ReadStage:82,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException: 
EOF after 35761 bytes out of 48857
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:82)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1540)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1369)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:164)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher$1.computeNext(CompositesSearcher.java:103)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1735)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:50)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:556)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1723)
at 
org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1374)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1916)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.EOFException: EOF after 35761 bytes out of 48857
at 
org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:394)
at 
org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.init(SimpleSliceReader.java:70)
... 22 more
{code}

 Cannot select data which using WHERE
 --

 Key: CASSANDRA-6525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
 Project: Cassandra
  Issue Type: Bug
 Environment: Linux RHEL5
 RAM: 1GB
 Cassandra 2.0.3
 CQL spec 3.1.1
 Thrift protocol 19.38.0
Reporter: Silence Chow
Assignee: Michael Shuler
 Attachments: 6981_test.py


 I am developing a system on my single machine using VMware Player with 1GB 
 RAM and a 1GB HDD. When I select all data, I don't have any problems, but 
 when I use WHERE and it returns just below 10 records, I get this error in 
 the system log:
 ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.io.IOError: java.io.EOFException
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at 
 

[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967165#comment-13967165
 ] 

Lyuben Todorov commented on CASSANDRA-6487:
---

[~iamaleksey] I assume you mean calling {{ByteBuffer#limit}} in 
{{BatchStatement#executeWithPerStatementVariables}}. I like the idea: it will 
be much more accurate than just counting queries, and since it's just a loop 
with a counter, it shouldn't hurt the fast path, right? /cc [~benedict]. 

bq. Maybe count of batch size warnings, largest batch size seen, most recent 
batch size over the limit.
[~jkrupan] +1, maybe also something like total statement count over the limit 
(e.g. if a batch exceeds the limit by 10, and this occurs 4 times, that metric 
will end up with 40). 
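
The metrics floated in the comment above could be sketched roughly as follows. This is a hedged illustration only: the class and method names ({{BatchWarningMetrics}}, {{record}}) are illustrative stand-ins, not Cassandra's actual implementation.

```java
// Hedged sketch: tracks the batch-warning metrics suggested above
// (warning count, largest batch seen, most recent over-limit size, and
// the cumulative amount by which batches exceeded the limit).
// Names are illustrative, not Cassandra's real classes.
public class BatchWarningMetrics {
    private final long limitBytes;
    private long warningCount;
    private long largestBatchSeen;
    private long mostRecentOverLimit;
    private long totalOverLimit;

    public BatchWarningMetrics(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    /** Record one batch; update the over-limit metrics if it exceeds the limit. */
    public void record(long batchSizeBytes) {
        largestBatchSeen = Math.max(largestBatchSeen, batchSizeBytes);
        if (batchSizeBytes > limitBytes) {
            warningCount++;
            mostRecentOverLimit = batchSizeBytes;
            totalOverLimit += batchSizeBytes - limitBytes;
        }
    }

    public long warningCount() { return warningCount; }
    public long largestBatchSeen() { return largestBatchSeen; }
    public long mostRecentOverLimit() { return mostRecentOverLimit; }
    public long totalOverLimit() { return totalOverLimit; }
}
```

With a limit of 100, recording four batches of 110 leaves {{totalOverLimit}} at 40, matching the 4-times-over-by-10 example above.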

 Log WARN on large batch sizes
 -

 Key: CASSANDRA-6487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
 Project: Cassandra
  Issue Type: Improvement
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.8

 Attachments: 6487_trunk.patch, 6487_trunk_v2.patch, 
 cassandra-2.0-6487.diff


 Large batches on a coordinator can cause a lot of node stress. I propose 
 adding a WARN log entry if batch sizes go beyond a configurable size. This 
 will give more visibility to operators on something that can happen on the 
 developer side. 
 New yaml setting with 5k default.
 {{# Log WARN on any batch size exceeding this value. 5k by default.}}
 {{# Caution should be taken on increasing the size of this threshold as it 
 can lead to node instability.}}
 {{batch_size_warn_threshold: 5k}}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-11 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Reproduced In: 2.1 beta1, 2.0.6, 1.2.16  (was: 1.2.16)

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17, 2.0.7, 2.1 beta2


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the default key, column1, column2 naming. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which will 
 bring back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-11 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6831:
---

Fix Version/s: 2.1 beta2
   2.0.7

 Updates to COMPACT STORAGE tables via cli drop CQL information
 --

 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.17, 2.0.7, 2.1 beta2


 If a COMPACT STORAGE table is altered using the CLI, all information about 
 the column names reverts to the default key, column1, column2 naming. 
 Additionally, the change in column names will not take effect until the 
 Cassandra service is restarted. This means that clients using CQL will 
 continue to work properly until the service is restarted, at which time they 
 will start getting errors about non-existent columns in the table.
 When attempting to rename the columns back using ALTER TABLE, an error 
 stating that the column already exists will be raised. The only way to get 
 them back is to ALTER TABLE and change the comment or something, which will 
 bring back all the original column names.
 This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
 In cqlsh
 {code}
 Connected to cluster1 at 127.0.0.3:9160.
 [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
 19.36.2]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
 'replication_factor' : 3 };
 cqlsh USE test;
 cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
 baz) ) WITH COMPACT STORAGE;
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}
 Now in cli:
 {code}
   Connected to: cluster1 on 127.0.0.3/9160
 Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@unknown] use test;
 Authenticated to keyspace: test
 [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
 3bf5fa49-5d03-34f0-b46c-6745f7740925
 {code}
 Now back in cqlsh:
 {code}
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   column1 text,
   value text,
   PRIMARY KEY (bar, column1)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='hey this is a comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
 cqlsh:test describe table foo;
 CREATE TABLE foo (
   bar text,
   baz text,
   qux text,
   PRIMARY KEY (bar, baz)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='this is a new comment' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967230#comment-13967230
 ] 

Aleksey Yeschenko commented on CASSANDRA-6487:
--

No, that's not what I meant. I meant the size of the resulting Mutation-s 
(RowMutation-s pre 2.1), as a sum of ColumnFamily#dataSize()-s for each of the 
Mutation#getColumnFamilies(). Of course it would affect the path - any extra 
stuff you do would.
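
The size check described above — summing {{ColumnFamily#dataSize()}} over {{Mutation#getColumnFamilies()}} — could be sketched roughly like this. This is a hedged, simplified illustration: {{BatchSizeChecker}} is a made-up name, and a plain list of sizes stands in for the real per-column-family {{dataSize()}} values.

```java
// Hedged sketch of the size-based check discussed above: sum the
// per-column-family data sizes across a batch's mutations and warn when
// the total crosses a configurable threshold. The class name and the
// List<Long> stand-in for ColumnFamily#dataSize() values are
// illustrative, not actual Cassandra code.
import java.util.List;

public class BatchSizeChecker {
    private final long warnThresholdBytes;

    public BatchSizeChecker(long warnThresholdBytes) {
        this.warnThresholdBytes = warnThresholdBytes;
    }

    /** Total batch size as the sum of each column family's data size. */
    public long totalSize(List<Long> columnFamilyDataSizes) {
        long total = 0;
        for (long size : columnFamilyDataSizes)
            total += size;
        return total;
    }

    /** True when the batch is large enough to deserve a WARN log entry. */
    public boolean shouldWarn(List<Long> columnFamilyDataSizes) {
        return totalSize(columnFamilyDataSizes) > warnThresholdBytes;
    }
}
```

The extra cost on the write path is a single pass over the batch's column families, which is the trade-off debated in the comments above.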

 Log WARN on large batch sizes
 -

 Key: CASSANDRA-6487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
 Project: Cassandra
  Issue Type: Improvement
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.8

 Attachments: 6487_trunk.patch, 6487_trunk_v2.patch, 
 cassandra-2.0-6487.diff


 Large batches on a coordinator can cause a lot of node stress. I propose 
 adding a WARN log entry if batch sizes go beyond a configurable size. This 
 will give more visibility to operators on something that can happen on the 
 developer side. 
 New yaml setting with 5k default.
 {{# Log WARN on any batch size exceeding this value. 5k by default.}}
 {{# Caution should be taken on increasing the size of this threshold as it 
 can lead to node instability.}}
 {{batch_size_warn_threshold: 5k}}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6487) Log WARN on large batch sizes

2014-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13967237#comment-13967237
 ] 

Aleksey Yeschenko commented on CASSANDRA-6487:
--

Anyway, I'm not saying that this is the way to go - merely listing options.

 Log WARN on large batch sizes
 -

 Key: CASSANDRA-6487
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6487
 Project: Cassandra
  Issue Type: Improvement
Reporter: Patrick McFadin
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.8

 Attachments: 6487_trunk.patch, 6487_trunk_v2.patch, 
 cassandra-2.0-6487.diff


 Large batches on a coordinator can cause a lot of node stress. I propose 
 adding a WARN log entry if batch sizes go beyond a configurable size. This 
 will give more visibility to operators on something that can happen on the 
 developer side. 
 New yaml setting with 5k default.
 {{# Log WARN on any batch size exceeding this value. 5k by default.}}
 {{# Caution should be taken on increasing the size of this threshold as it 
 can lead to node instability.}}
 {{batch_size_warn_threshold: 5k}}



--
This message was sent by Atlassian JIRA
(v6.2#6252)