[ https://issues.apache.org/jira/browse/CASSANDRA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965328#comment-13965328 ]

Martin Bligh commented on CASSANDRA-6981:
-----------------------------------------

So this is a little tricky, because it's proprietary data and I've changed 
things around a bit since then. Basically, on a desktop machine with 32GB of 
RAM and just one disk (a regular HDD, not an SSD), I created about 16 tables, 
all identical, each with about 5 text fields and 5 binary fields. Most of 
those fields had a secondary index. Then I inserted into all the tables in 
parallel. 
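
Roughly, each table looked something like this (names and exact types are 
invented here, since the real schema is proprietary; only the shape is from 
memory: ~5 text fields, ~5 binary fields, secondary indexes on most of them):

{noformat}
-- Illustrative sketch only, not the real schema: hypothetical names,
-- one of the ~16 identical tables.
CREATE TABLE table1 (
    mid varchar PRIMARY KEY,      -- lookup key used in the failing SELECT
    recv_time timestamp,
    symbol text,
    field3 text,
    field4 text,
    field5 text,
    blob1 blob,
    blob2 blob,
    blob3 blob,
    blob4 blob,
    blob5 blob
);
CREATE INDEX ON table1 (symbol);  -- most of the fields indexed like this
CREATE INDEX ON table1 (field3);
CREATE INDEX ON table1 (field4);

-- all ~16 tables were then loaded in parallel, e.g.
INSERT INTO table1 (mid, recv_time, symbol, field3, field4, blob1)
VALUES ('S-AUR01-20140324A-1221', '2014-03-24 12:21:00', 'AUR', 'a', 'b', 0xdeadbeef);
{noformat}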

I'm aware this isn't a great schema design, but it certainly shouldn't fall 
over like this.



> java.io.EOFException from Cassandra when doing select
> -----------------------------------------------------
>
>                 Key: CASSANDRA-6981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6981
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 2.0.6, Oracle Java version "1.7.0_51", Linux Mint 16
>            Reporter: Martin Bligh
>
> Cassandra 2.0.6, Oracle Java version "1.7.0_51", Linux Mint 16
> I have a Cassandra keyspace with about 12 tables that are all the same.
> If I load 100,000 rows or so into a couple of those tables in Cassandra, it 
> works fine.
> If I load a larger dataset, after a while one of the tables won't do lookups 
> any more (not always the same one).
> {noformat}
> SELECT recv_time,symbol from table6 where mid='S-AUR01-20140324A-1221';
> {noformat}
> results in "Request did not complete within rpc_timeout."
> where "mid" is the primary key (varchar). If I look at the logs, it has an 
> EOFException ... presumably it's running out of some resource (it's 
> definitely not out of disk space)
> Sometimes it does this on secondary indexes too: dropping and rebuilding the 
> index will fix it for a while. When it's broken, it seems like only one 
> particular lookup key causes timeouts (and the EOFException every time) - 
> other lookups work fine. I presume the index is corrupt somehow.
> {noformat}
> ERROR [ReadStage:110] 2014-04-03 12:39:47,018 CassandraDaemon.java (line 196) Exception in thread Thread[ReadStage:110,5,main]
>     java.io.IOError: java.io.EOFException
>     at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
>     at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
>     at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>     at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>     at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
>     at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
>     at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>     at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>     at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
>     at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
>     at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
>     at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
>     at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>     at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>     at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
>     at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
>     at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>     at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
>     at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
>     at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
>     at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
>     at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
>     at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
>     at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
>     at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
>     at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
>     Caused by: java.io.EOFException
>     at java.io.RandomAccessFile.readFully(Unknown Source)
>     at java.io.RandomAccessFile.readFully(Unknown Source)
>     at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
>     at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
>     at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
>     at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:110)
>     at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
>     at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
>     ... 28 more
> {noformat}
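>
> For reference, "dropping and rebuilding the index" above is just the plain 
> CQL statements below (the index name is a guess based on Cassandra's default 
> table_column_idx naming, since the real index names aren't shown):
> {noformat}
> DROP INDEX table6_symbol_idx;
> CREATE INDEX table6_symbol_idx ON table6 (symbol);
> {noformat}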



