[ https://issues.apache.org/jira/browse/HBASE-12091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227447#comment-16227447 ]

Lars Hofhansl commented on HBASE-12091:
---------------------------------------

So it looks like in branch-1 there's something amiss. Apart from getting the remote exception, I get the following (note that this will *not* cause the replication stream to stop; I noticed that when I tried writing a test for the above):

{code}
2017-10-31 12:56:46,876 WARN  [RS_OPEN_REGION-localhost:37372-0.replicationSource.localhost%2C37372%2C1509479790408,2] regionserver.HBaseInterClusterReplicationEndpoint(349): Can't replicate because of a local or network error: 
org.apache.hadoop.hbase.DoNotRetryIOException: Unable to instantiate exception received from server:org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException.<init>(java.lang.String)
        at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:353)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:330)
        at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:74)
        at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.replicateEntries(HBaseInterClusterReplicationEndpoint.java:426)
        at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:445)
        at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:403)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:473)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException): org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: Table 'test_dropped' was not found, got: test.: 1 time, servers with issues: null
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:295)
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2300(AsyncProcess.java:271)
        at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1779)
        at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:925)
        at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:939)
        at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.batch(ReplicationSink.java:376)
        at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.replicateEntries(ReplicationSink.java:209)
        at org.apache.hadoop.hbase.replication.regionserver.Replication.replicateLogEntries(Replication.java:232)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:2008)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)

        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:386)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
        at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
        at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:596)
        at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:334)
        ... 1 more
{code}
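
To make the first line of that trace less cryptic: the remote RetriesExhaustedWithDetailsException comes back as a class name plus a message string, and the client side tries to rebuild it reflectively through a single-String constructor, which that class does not have, so the unwrap step gives up and wraps the failure in the generic "Unable to instantiate exception received from server" DoNotRetryIOException instead. The sketch below is not HBase code, just a self-contained illustration of that failure mode; all class and method names in it are made up.

{code}
import java.lang.reflect.Constructor;

public class UnwrapSketch {

  // Stand-in shaped like RetriesExhaustedWithDetailsException: it has no
  // public (String) constructor, only a more detailed one.
  static class DetailsOnlyException extends Exception {
    public DetailsOnlyException(String msg, int numFailedActions) {
      super(msg + " (" + numFailedActions + " failed actions)");
    }
  }

  // Rebuild a remote exception from its class name and message the way a
  // client-side unwrap step typically does: reflectively via a (String) constructor.
  static Exception rebuild(String className, String message) {
    try {
      Class<?> clazz = Class.forName(className);
      Constructor<?> ctor = clazz.getConstructor(String.class); // throws NoSuchMethodException here
      return (Exception) ctor.newInstance(message);
    } catch (Exception e) {
      // Analogue of the DoNotRetryIOException("Unable to instantiate exception
      // received from server:...") seen in the log above.
      return new RuntimeException(
          "Unable to instantiate exception received from server:" + className, e);
    }
  }

  public static void main(String[] args) {
    Exception rebuilt = rebuild(DetailsOnlyException.class.getName(),
        "Failed 1 action: Table 'test_dropped' was not found");
    System.out.println(rebuilt); // lands in the "Unable to instantiate" branch
  }
}
{code}

That also explains why the failure surfaces on the source as a DoNotRetryIOException rather than as the original RetriesExhaustedWithDetailsException thrown by the sink.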


> Optionally ignore edits for dropped tables for replication.
> -----------------------------------------------------------
>
>                 Key: HBASE-12091
>                 URL: https://issues.apache.org/jira/browse/HBASE-12091
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Lars Hofhansl
>         Attachments: 12091.txt
>
>
> We just ran into a scenario where we dropped a table from both the source and 
> the sink, but the source still has outstanding edits that it now cannot get 
> rid of. All replication is now backed up behind these unreplicatable edits.
> We should have an option to ignore edits for tables dropped at the source.
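
The quoted description above asks for an option to skip edits for tables dropped at the source. Purely as an illustration of the kind of check such an option could gate, here is a rough sketch; this is not the attached 12091.txt patch, and every name in it, including the config key mentioned in the comments, is hypothetical.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class DroppedTableFilterSketch {

  // Minimal stand-in for a WAL entry queued for replication.
  static class ReplicationEntry {
    final String table;
    ReplicationEntry(String table) { this.table = table; }
  }

  // Hypothetical flag mirroring the option the issue asks for, e.g. something
  // like "replication.source.ignore.dropped.tables" in the source cluster config.
  private final boolean ignoreDroppedTables;
  // Stand-in for a real table-existence check against the source cluster.
  private final Set<String> existingTables;

  DroppedTableFilterSketch(boolean ignoreDroppedTables, Set<String> existingTables) {
    this.ignoreDroppedTables = ignoreDroppedTables;
    this.existingTables = existingTables;
  }

  // Before shipping a batch, optionally drop entries whose table no longer exists.
  List<ReplicationEntry> filter(List<ReplicationEntry> batch) {
    if (!ignoreDroppedTables) {
      return batch; // current behavior: keep retrying, replication backs up behind the batch
    }
    List<ReplicationEntry> shippable = new ArrayList<>();
    for (ReplicationEntry entry : batch) {
      if (existingTables.contains(entry.table)) {
        shippable.add(entry);
      }
      // else: skip edits for a table dropped at the source instead of retrying forever
    }
    return shippable;
  }
}
{code}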



