[ https://issues.apache.org/jira/browse/PHOENIX-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Brian Thomas updated PHOENIX-4075:
----------------------------------
    Environment: 3 Node HBase cluster running version 1.2.0-cdh5.7.2
Phoenix version 4.7.0 using thin client + query server

was: 3 Node HBase cluster running version 0.98
Phoenix version 4.7.0 using thin client + query server


> Executor already shutdown building secondary index
> --------------------------------------------------
>
>                 Key: PHOENIX-4075
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4075
>             Project: Phoenix
>          Issue Type: Bug
>        Environment: 3 Node HBase cluster running version 1.2.0-cdh5.7.2
>                     Phoenix version 4.7.0 using thin client + query server
>            Reporter: Brian Thomas
>
> I started to experience an issue while trying to insert data into our HBase
> cluster using Phoenix. This seems to have started occurring after I added a
> secondary index on one of my columns.
> The issue resolves temporarily with an HBase restart, but the cluster
> eventually falls back into this state, and once it does, it is no longer
> possible to insert data at all. When my application runs, it inserts 80k rows
> and then issues a single commit. The batch size does not seem to matter,
> though: I cannot insert even a single row once the cluster is in this state.
> The full stack trace I see is:
> Caused by:
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 131563 actions:
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed
> to build index for unexpected reason!
>   at org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:180)
>   at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:205)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1007)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1003)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3088)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2875)
>   at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2817)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:751)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:713)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2142)
>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.RejectedExecutionException: Executor already
> shutdown
>   at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.startTask(MoreExecutors.java:327)
>   at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:251)
>   at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:58)
>   at org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:143)
>   at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:273)
>   at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:202)
>   ... 17 more
> : 131563 times,
>   at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247)
>   at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1663)
>   at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:982)
>   at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:996)
>   at org.apache.phoenix.execute.MutationState.send(MutationState.java:951)
>   ... 22 more
>   at org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2315)
>   at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:46)
>   at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:107)
>   at org.apache.calcite.avatica.remote.RemoteMeta$17.call(RemoteMeta.java:377)
>   at org.apache.calcite.avatica.remote.RemoteMeta$17.call(RemoteMeta.java:375)
>   at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:666)
>   at org.apache.calcite.avatica.remote.RemoteMeta.commit(RemoteMeta.java:375)
>   at org.apache.calcite.avatica.AvaticaConnection.commit(AvaticaConnection.java:175)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
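
A hypothetical sketch of the setup the report describes: a table with a secondary index, written to with auto-commit off. The table, index, and column names below are illustrative assumptions, not taken from the issue.

```sql
-- Assumed schema: a data table plus a secondary index, which makes
-- every write pass through Phoenix's index-building coprocessor
-- (the Indexer.preBatchMutate hook seen in the trace above).
CREATE TABLE METRICS (ID BIGINT NOT NULL PRIMARY KEY, HOST VARCHAR, VAL DOUBLE);
CREATE INDEX METRICS_HOST_IDX ON METRICS (HOST);

-- With auto-commit off, the client buffers ~80k of these and issues
-- one commit; per the report, once the region server's index executor
-- has shut down, even a single row fails with
-- "RejectedExecutionException: Executor already shutdown".
UPSERT INTO METRICS VALUES (1, 'host-1', 0.5);
```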