[ https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431669#comment-16431669 ]

Samarth Jain commented on PHOENIX-4366:
---------------------------------------

I was motivated by getting hold of the column encoding related values once in 
preScannerOpen and reusing them across the board (instead of having to fetch 
them from the scan context every time). I made this change with the assumption 
that every region gets its own coprocessor instance. Or is it one instance per 
region server? If it's the former, why is it problematic to store these values 
as member variables, since their scope should be limited to the table region?
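
For context, here is a minimal sketch of the pattern I mean (not the actual 
patch; it uses the plain HBase BaseRegionObserver hook and a made-up scan 
attribute name for illustration): assuming one coprocessor instance per 
region, the value is read from the scan context once in preScannerOpen and 
kept in a member variable for reuse.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

/**
 * Illustrative sketch only: caches a scan-attribute-derived value as a member
 * variable, on the assumption that HBase creates one coprocessor instance per
 * region, so the field is scoped to this table region.
 */
public class EncodingCachingObserver extends BaseRegionObserver {

    // Region-scoped cache; shared by every scan opened against this region's
    // coprocessor instance.
    private volatile byte[] encodingScheme;

    @Override
    public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
            Scan scan, RegionScanner s) throws IOException {
        // Read the value from the scan context once in preScannerOpen...
        byte[] fromScan = scan.getAttribute("ENCODING_SCHEME"); // hypothetical attribute name
        if (fromScan != null) {
            // ...and keep it for reuse in later hooks instead of re-fetching
            // it from the Scan every time.
            encodingScheme = fromScan;
        }
        return s;
    }
}
{code}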

> Rebuilding a local index fails sometimes
> ----------------------------------------
>
>                 Key: PHOENIX-4366
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4366
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.12.0
>            Reporter: Marcin Januszkiewicz
>            Assignee: James Taylor
>            Priority: Blocker
>             Fix For: 4.14.0
>
>         Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
>         at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
>         at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
>         at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5735)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5891)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5669)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5654)
>         at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:522)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:236)
>         ... 7 more (state=08000,code=101)
> {noformat}
> This failure is intermittent, since usually we can run the command again and 
> it will succeed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
