[ https://issues.apache.org/jira/browse/CASSANDRA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Martin Bligh updated CASSANDRA-6981:
------------------------------------

    Description: 
Cassandra 2.0.6, Oracle Java version "1.7.0_51", Linux Mint 16

I have a Cassandra keyspace with about 12 tables that all have the same schema.

If I load 100,000 rows or so into a couple of those tables, it works fine.

If I load a larger dataset, after a while one of the tables stops answering lookups (not always the same one).

SELECT recv_time, symbol FROM table6 WHERE mid='S-AUR01-20140324A-1221';
results in "Request did not complete within rpc_timeout."

Here "mid" is the primary key (varchar). If I look at the logs, there is an EOFException; presumably it's running out of some resource (it's definitely not out of disk space).
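
For concreteness, here is a minimal CQL sketch of the kind of table and lookup involved, assuming a varchar primary key named mid and the recv_time/symbol columns from the query above; the actual table definition isn't in this report, so the other column types are guesses:

CREATE TABLE table6 (
    mid varchar PRIMARY KEY,   -- lookup key used in the failing SELECT
    recv_time timestamp,       -- type assumed; not stated in the report
    symbol varchar             -- type assumed; not stated in the report
    -- remaining columns omitted
);

-- the lookup that times out once the table gets into the bad state
SELECT recv_time, symbol FROM table6 WHERE mid = 'S-AUR01-20140324A-1221';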

Sometimes it does this on secondary indexes too: dropping and rebuilding the index fixes it for a while. When it's broken, it seems like only one particular lookup key causes timeouts (and the EOFException every time); other lookups work fine. I presume the index is corrupt somehow.
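
The drop-and-rebuild workaround described above looks roughly like this in CQL; the index name and indexed column are placeholders, since the report doesn't name them:

-- hypothetical index name and column; substitute the real ones
DROP INDEX table6_symbol_idx;
CREATE INDEX table6_symbol_idx ON table6 (symbol);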

ERROR [ReadStage:110] 2014-04-03 12:39:47,018 CassandraDaemon.java (line 196) Exception in thread Thread[ReadStage:110,5,main]
    java.io.IOError: java.io.EOFException
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
    at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
    at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Caused by: java.io.EOFException
    at java.io.RandomAccessFile.readFully(Unknown Source)
    at java.io.RandomAccessFile.readFully(Unknown Source)
    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
    at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:110)
    at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
    ... 28 more


> java.io.EOFException from Cassandra when doing select
> -----------------------------------------------------
>
>                 Key: CASSANDRA-6981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6981
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 2.0.6, Oracle Java version "1.7.0_51", Linux Mint 16
>            Reporter: Martin Bligh
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
