[ https://issues.apache.org/jira/browse/CASSANDRA-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne resolved CASSANDRA-11019.
------------------------------------------
    Resolution: Duplicate

This is a duplicate of CASSANDRA-10743 and will be fixed in the upcoming 3.0.3 release.

> UnsupportedOperationException on nodetool compact
> -------------------------------------------------
>
>                 Key: CASSANDRA-11019
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11019
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>         Environment: Debian 3.16.7
>            Reporter: Jason Kania
>
> When attempting to run "nodetool compact" from the command line after 
> upgrading to 3.0.1-rc-1, the following error occurs:
> error: null
> -- StackTrace --
> java.lang.UnsupportedOperationException
>         at 
> org.apache.cassandra.db.rows.CellPath$EmptyCellPath.get(CellPath.java:143)
>         at 
> org.apache.cassandra.db.marshal.CollectionType$CollectionPathSerializer.serializedSize(CollectionType.java:226)
>         at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serializedSize(BufferCell.java:325)
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.sizeOfComplexColumn(UnfilteredSerializer.java:297)
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:282)
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:163)
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>         at 
> org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:144)
>         at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:112)
>         at 
> org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
>         at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
>         at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:118)
>         at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>         at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>         at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>         at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>         at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>         at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:572)
>         at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> An attempt to run "nodetool repair" reports no errors.
> If the command is run on individual tables,
> i.e. nodetool compact "sensorCheck" "sensorUnit"
> the error is only seen on one of the tables. So, firstly, the output should
> identify the table causing the error.
> I can run some queries on the table without issue and can describe it from
> within cqlsh. However, other queries result in the following:
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1258, in perform_simple_statement
>     result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
>     raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> along with the following exception:
> WARN  [SharedPool-Worker-2] 2016-01-14 23:50:09,892 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.UnsupportedOperationException
>         at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2379)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_65]
>         at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.1.jar:3.0.1]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: java.lang.UnsupportedOperationException: null
>         at 
> org.apache.cassandra.db.rows.CellPath$EmptyCellPath.get(CellPath.java:143) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.marshal.CollectionType$CollectionPathSerializer.serialize(CollectionType.java:216)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serialize(BufferCell.java:260)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.writeComplexColumn(UnfilteredSerializer.java:197)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:96)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:132)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) 
> ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1721)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2375)
>  ~[apache-cassandra-3.0.1.jar:3.0.1]
>         ... 4 common frames omitted
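
For anyone who hits the same trace before upgrading to 3.0.3, the report above
narrows the problem down by compacting tables one at a time. A minimal shell
sketch of that isolation loop (assuming a placeholder keyspace name
"sensordata", a reachable local node, and only approximate parsing of the
DESCRIBE TABLES output) could look like:

    #!/bin/sh
    # Sketch only: compact each table in one keyspace individually to find
    # the table whose SSTables trigger the UnsupportedOperationException.
    # "sensordata" is a placeholder keyspace name, not taken from the report.
    KS=sensordata
    for TBL in $(cqlsh -k "$KS" -e "DESCRIBE TABLES" | tr -s ' \t\n' '\n' | grep -v '^$'); do
        echo "Compacting $KS.$TBL"
        nodetool compact "$KS" "$TBL" || echo ">>> compaction failed on $KS.$TBL"
    done

The table echoed immediately before a "compaction failed" line is the one to
report or to restore from a pre-upgrade snapshot while waiting for the fix.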


