[ https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432784#comment-16432784 ]

Sergey Soldatov commented on PHOENIX-4366:
------------------------------------------

[~samarthjain] there may be 2 different scanners at the same moment. One has an 
encodingScheme set and the other one doesn't, so the second scan may override it 
and the first scanner will fail with the exception mentioned earlier. 

[~jamestaylor] the changes look good. The concern I have is not related to this 
particular JIRA, but the whole idea of relying on the client to decide whether 
to use column encoding worries me. Recently I had a case where an app with an 
old version of the client was used to ingest the data, and the result was a 
dataset with null values for the non-PK columns. That's definitely a topic for a 
separate JIRA, but looking at the current code I can hardly see how we could 
prevent that. 
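
For illustration only, here is a minimal, self-contained sketch of that failure 
mode. The enum and list below are simplified stand-ins made up for this comment, 
not the real PTable.QualifierEncodingScheme or EncodedColumnQualiferCellsList: 
once a scanner ends up bound to the non-encoded scheme while the cells it 
receives come from an encoded table, the very first decode throws the 
UnsupportedOperationException seen in the stack trace below. 

{code:java}
// Simplified stand-in for PTable.QualifierEncodingScheme: the non-encoded
// scheme has no way to decode a qualifier, so decode() is unsupported.
enum QualifierScheme {
    NON_ENCODED {
        @Override
        int decode(byte[] qualifier) {
            throw new UnsupportedOperationException();
        }
    },
    TWO_BYTE {
        @Override
        int decode(byte[] qualifier) {
            // assume a little-endian, 2-byte encoded column qualifier
            return (qualifier[0] & 0xFF) | ((qualifier[1] & 0xFF) << 8);
        }
    };
    abstract int decode(byte[] qualifier);
}

// Simplified stand-in for EncodedColumnQualiferCellsList: it positions each
// cell by its decoded qualifier, so it only works with an encoding scheme.
final class EncodedCellsList {
    private final QualifierScheme scheme;
    EncodedCellsList(QualifierScheme scheme) { this.scheme = scheme; }
    void add(byte[] qualifier) {
        int slot = scheme.decode(qualifier); // throws for NON_ENCODED
        // ... place the cell at 'slot' ...
    }
}

class EncodingSchemeRaceSketch {
    public static void main(String[] args) {
        // The scanner was opened against an encoded table, but a concurrent
        // scan without the encoding attribute "won", so the list is built
        // with NON_ENCODED. Adding the first encoded cell then fails the
        // same way the index rebuild does.
        EncodedCellsList list = new EncodedCellsList(QualifierScheme.NON_ENCODED);
        list.add(new byte[] {1, 0}); // -> java.lang.UnsupportedOperationException
    }
}
{code}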

> Rebuilding a local index fails sometimes
> ----------------------------------------
>
>                 Key: PHOENIX-4366
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4366
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.12.0
>            Reporter: Marcin Januszkiewicz
>            Assignee: James Taylor
>            Priority: Blocker
>             Fix For: 4.14.0
>
>         Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
>         at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
>         at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
>         at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5735)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5891)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5669)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5654)
>         at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:522)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:236)
>         ... 7 more (state=08000,code=101) 
> {noformat}
> This failure is intermittent: usually we can run the command again and it 
> will succeed.


