[
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997968#comment-13997968
]
Tyler Hobbs commented on CASSANDRA-6525:
----------------------------------------
The problem is that key cache entries stick around after the keyspace is
dropped. After the keyspace is recreated and read, key cache hits return
stale positions into the old SSTables. I'm not sure why only the secondary
index tables seem to be affected; my guess is that the key-cache preheating
that happens after compaction replaces the stale entries for the data
tables.
CASSANDRA-5202 is the correct permanent solution for this, but that's for 2.1.
For 2.0, perhaps we should do something similar to CASSANDRA-6351 and go
through the key cache to invalidate all entries for the CF when it's dropped.
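A minimal sketch of that 2.0 workaround. This is not Cassandra's actual cache API; the simplified (cfId, key) cache key and the `invalidateKeyCacheForCf` method are hypothetical stand-ins to illustrate walking the key cache and evicting every entry belonging to the dropped CF:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the key cache: maps (cfId, partition key) to an
// SSTable position. Names are illustrative, not Cassandra's real classes.
public class KeyCacheInvalidation
{
    static final class CacheKey
    {
        final UUID cfId;
        final String key;

        CacheKey(UUID cfId, String key) { this.cfId = cfId; this.key = key; }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof CacheKey))
                return false;
            CacheKey other = (CacheKey) o;
            return cfId.equals(other.cfId) && key.equals(other.key);
        }

        @Override
        public int hashCode() { return cfId.hashCode() * 31 + key.hashCode(); }
    }

    static final Map<CacheKey, Long> keyCache = new ConcurrentHashMap<>();

    // The proposed fix: on CF drop, walk the whole key cache and evict every
    // entry for that CF, so a recreated CF can never see stale positions.
    static void invalidateKeyCacheForCf(UUID droppedCfId)
    {
        for (Iterator<CacheKey> it = keyCache.keySet().iterator(); it.hasNext(); )
        {
            if (it.next().cfId.equals(droppedCfId))
                it.remove();
        }
    }

    public static void main(String[] args)
    {
        UUID cf1 = UUID.randomUUID();
        UUID cf2 = UUID.randomUUID();
        keyCache.put(new CacheKey(cf1, "k1"), 1024L);
        keyCache.put(new CacheKey(cf1, "k2"), 2048L);
        keyCache.put(new CacheKey(cf2, "k1"), 4096L);

        invalidateKeyCacheForCf(cf1); // cf1 dropped: its entries are evicted

        System.out.println(keyCache.size());                               // 1
        System.out.println(keyCache.containsKey(new CacheKey(cf2, "k1"))); // true
    }
}
```

This mirrors what CASSANDRA-6351 did for row cache invalidation: a full scan of the cache on drop, which is acceptable for a rare schema operation.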
> Cannot select data when using "WHERE"
> -------------------------------------
>
> Key: CASSANDRA-6525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
> Project: Cassandra
> Issue Type: Bug
> Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
> Reporter: Silence Chow
> Assignee: Tyler Hobbs
> Fix For: 2.0.8
>
> Attachments: 6981_test.py
>
>
> I am developing a system on a single machine using VMware Player with 1GB
> RAM and a 1GB HDD. When I select all data, I don't have any problems, but
> when I use "WHERE" on a table with fewer than 10 records, I get this error
> in the system log:
> {noformat}
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
> at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
> at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
> at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
> at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
> at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
> at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
> at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
> ... 27 more
> {noformat}
> E.g.
> {{SELECT * FROM table;}}
> works fine, but
> {{SELECT * FROM table WHERE field = 'N';}}
> where "field" is the partition key, fails:
> cqlsh reports "Request did not complete within rpc_timeout."
--
This message was sent by Atlassian JIRA
(v6.2#6252)