Having run into a recurring compaction problem due to a corrupt sstable
(the perceived row size was 13 petabytes or something), I dumped the
sstable with sstable2json, using -x to exclude the bad key, and am now
trying to re-import the sstable without it.
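
The sequence was roughly as follows (paths, keyspace/CF names, and the
key below are placeholders rather than the real values):

  # dump the sstable to JSON, excluding the corrupt row key
  bin/sstable2json /var/lib/cassandra/data/MyKeyspace/MySuperCF-f-1-Data.db \
      -x BadRowKey > MySuperCF.json

  # rebuild an sstable from the JSON dump
  bin/json2sstable -K MyKeyspace -c MySuperCF \
      MySuperCF.json /tmp/MySuperCF-f-1-Data.db
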
However, I'm running into the following exception on the import:

Importing 2882 keys...
java.lang.ClassCastException: org.apache.cassandra.db.ExpiringColumn cannot be cast to org.apache.cassandra.db.SuperColumn
        at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:363)
        at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:347)
        at org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:88)
        at org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:107)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:147)
        at org.apache.cassandra.tools.SSTableImport.importUnsorted(SSTableImport.java:290)
        at org.apache.cassandra.tools.SSTableImport.importJson(SSTableImport.java:252)
        at org.apache.cassandra.tools.SSTableImport.main(SSTableImport.java:476)
ERROR: org.apache.cassandra.db.ExpiringColumn cannot be cast to org.apache.cassandra.db.SuperColumn

The CF is a SuperColumnFamily, if that's relevant.
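
If it helps, my understanding of the sstable2json output format (which
may be off for this version) is that a super CF row nests its subcolumns
under each supercolumn, roughly like this (the hex names here are made
up):

  {
    "726f7731": {
      "736331": {
        "deletedAt": -9223372036854775808,
        "subColumns": [["636f6c31", "76616c31", 1311858039590000]]
      }
    }
  }

whereas a standard CF row is just a flat list of [name, value,
timestamp] columns. Given the ExpiringColumn in the trace, I wonder
whether the TTL'd subcolumns in my dump are being read back as
top-level columns.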

1. What should I do about this problem?

2. (Somewhat unrelated) Our usage of this SCF has moved away from requiring
"super"-ness.  Aside from missing out on potential future secondary
indexes, are we suffering any sort of operational/performance hit from
keeping it classified as a super column family?
