[ 
https://issues.apache.org/jira/browse/CASSANDRA-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15503551#comment-15503551
 ] 

Alex Petrov commented on CASSANDRA-12501:
-----------------------------------------

[~sushpr...@gmail.com] I've been able to reconstruct your schema (it looks like 
it was a super column family with a compound key). It's only possible to create 
these through the Thrift API. 

In CQL, it is translated to a table that has a column with an empty name, whose 
type is {{map}}. 

However, after following the upgrade path I was not able to reproduce your 
issue. If you have both the old and new sstables at hand, could you please dump 
their contents with the corresponding tool: {{sstabledump}} in 3.0 and 
{{sstable2json}} in 2.1?
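
For reference, the dumps could be produced along these lines. The data-directory 
paths and sstable generation numbers below are hypothetical; substitute the 
actual locations of the {{xchngsite.settles}} sstables on your nodes:

```shell
# On a 2.1 node (old "ka"-format sstables), using sstable2json:
sstable2json /var/lib/cassandra/data/xchngsite/settles/xchngsite-settles-ka-1-Data.db \
    > settles-2.1.json

# On the 3.0 node (upgraded "ma"-format sstables), using sstabledump:
sstabledump /var/lib/cassandra/data/xchngsite/settles-<table-id>/ma-1-big-Data.db \
    > settles-3.0.json
```

Both tools emit a JSON representation of the rows, which makes it possible to 
compare how the empty-name map column was serialized before and after the 
upgrade.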

> Table read error on migrating from 2.1.9 to 3x
> ----------------------------------------------
>
>                 Key: CASSANDRA-12501
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12501
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux ubuntu 14.04
>            Reporter: Sushma Pradeep
>            Assignee: Edward Capriolo
>            Priority: Blocker
>
> We are upgrading our db from cassandra 1.2.9 to 3.x. 
> We successfully upgraded from 1.2.9 -> 2.0.17 -> 2.1.9
> However when I upgraded from 2.1.9 -> 3.0.8 I was not able to read one table. 
> Table is described as:
> {code}
> CREATE TABLE xchngsite.settles (
>     key ascii,
>     column1 bigint,
>     column2 ascii,
>     "" map<ascii, blob>,
>     value blob,
>     PRIMARY KEY (key, column1, column2)
> ) WITH COMPACT STORAGE
>     AND CLUSTERING ORDER BY (column1 ASC, column2 ASC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>     AND comment = ''
>     AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
>     AND compression = {'enabled': 'false'}
>     AND crc_check_chance = 1.0
>     AND dclocal_read_repair_chance = 0.0
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 864000
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 1.0
>     AND speculative_retry = '99PERCENTILE';
> {code}
> However I am able to read all other tables. 
> When I run {{select * from table}}, I get below error:
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1297, in perform_simple_statement
>     result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
>     raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> And {{tail -f system.log}} says:
> {code}
> WARN  [SharedPool-Worker-2] 2016-08-19 08:50:47,949 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 9
>       at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2471)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_77]
>       at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.4.jar:3.4]
>       at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.4.jar:3.4]
>       at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
>       at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:113) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:105) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:310)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator.computeNext(BigTableScanner.java:265)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner.hasNext(BigTableScanner.java:245)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$4.hasNext(UnfilteredPartitionIterators.java:216)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
> ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
>  ~[apache-cassandra-3.4.jar:3.4]
>       at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
>  ~[apache-cassandra-3.4.jar:3.4]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)