[ https://issues.apache.org/jira/browse/PHOENIX-1005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jeffrey Zhong updated PHOENIX-1005:
-----------------------------------

    Attachment: phoenix-1005.patch

[~jamestaylor] Could you review the patch? Thanks.

> upsert data error after drop index
> ----------------------------------
>
>                 Key: PHOENIX-1005
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1005
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 4.0.0, 5.0.0
>            Reporter: mumu
>            Assignee: Jeffrey Zhong
>         Attachments: phoenix-1005.patch
>
>
> A table (T) has an index table (IDXT). When I drop IDXT and then continue
> to upsert data into T, an error is thrown saying that the index IDXT cannot
> be updated, and Phoenix then shuts the region server down.
> The Phoenix client uses a cache to store table metadata; I think the bug is
> that this cache is not updated after the index table is dropped.
> Here is part of the log:
> 2014-05-23 11:13:48,270 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
> org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: IDXT, row=IDXT,,99999999999999
>         at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1060)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1122)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1002)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:959)
>         at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:39)
>         at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:243)
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment$HTableWrapper.<init>(CoprocessorHost.java:370)
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:696)
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:685)
>         at org.apache.phoenix.hbase.index.table.CoprocessorHTableFactory.getTable(CoprocessorHTableFactory.java:61)
>         at org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:99)
>         at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:154)
>         at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:139)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> 2014-05-23 11:13:48,325 ERROR org.apache.phoenix.hbase.index.parallel.BaseTaskRunner: Found a failed task because: org.apache.phoenix.hbase.index.exception.SingleIndexWriteFailureException: IDXT
> ......
> ERROR org.apache.phoenix.hbase.index.write.KillServerOnFailurePolicy: Could not update the index table, killing server region because couldn't write to an index table

--
This message was sent by Atlassian JIRA
(v6.2#6252)
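For anyone trying to reproduce the reported sequence, a minimal sketch via Phoenix JDBC might look like the following. This is illustrative only: it assumes a local Phoenix/HBase setup reachable through a ZooKeeper quorum on localhost, the table and index names T and IDXT come from the report, and the column definitions and the class name Phoenix1005Repro are made up for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Phoenix1005Repro {
    public static void main(String[] args) throws Exception {
        // Assumes a local Phoenix/HBase instance; adjust the ZooKeeper quorum as needed.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {

            // Data table T with an index IDXT on it, as described in the report.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS T (ID BIGINT NOT NULL PRIMARY KEY, V VARCHAR)");
            stmt.executeUpdate("CREATE INDEX IF NOT EXISTS IDXT ON T (V)");

            // Upsert while the index is still in place; Phoenix connections are
            // not auto-commit by default, so commit explicitly.
            stmt.executeUpdate("UPSERT INTO T VALUES (1, 'a')");
            conn.commit();

            // Drop the index, then keep upserting into T. Per the report, the
            // region server still tries to write to the dropped IDXT and
            // eventually kills itself via KillServerOnFailurePolicy.
            stmt.executeUpdate("DROP INDEX IDXT ON T");
            stmt.executeUpdate("UPSERT INTO T VALUES (2, 'b')");
            conn.commit();
        }
    }
}

If the stale-metadata theory in the report is right, the second upsert is where the server-side index writer should fail with the TableNotFoundException shown in the log.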