[
https://issues.apache.org/jira/browse/HDFS-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719557#comment-16719557
]
Erik Krogen edited comment on HDFS-14146 at 12/13/18 12:05 AM:
---------------------------------------------------------------
Hey [~csun], this is a good find. I actually think this problem is more serious
than just the issue you raised.
Normally, _reader_ threads attempt to offer a request to the {{callQueue}}. If
the {{callQueue}} is full, they will block, creating a natural backoff by
pushing back on the listen queue, which will eventually cause clients to be
unable to connect. However, now we have the _handler_ threads attempting to
offer a request to the {{callQueue}} -- but the handler threads are the same
ones that drain the queue. If the queue became full, and all handler threads
were waiting to attempt to push a request into the {{callQueue}}, this could
result in deadlock (all handlers are waiting on the queue because it is full,
and no handler will drain the queue).
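To make the hazard concrete, here is a toy illustration (not HDFS code; the
class and names are made up) of why handler threads that {{put()}} into the
same bounded queue they drain can all wedge once the queue fills:
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Toy illustration only -- not HDFS code. */
public class RequeueDeadlockDemo {
  public static void main(String[] args) {
    // Stand-in for callQueue: bounded and already at capacity.
    BlockingQueue<String> callQueue = new ArrayBlockingQueue<>(1);
    callQueue.offer("pending-call");

    // Each "handler" tries to re-queue with put() before it ever drains.
    // With the queue full, every put() blocks forever, no thread reaches
    // take(), and the queue is never drained: deadlock.
    for (int i = 0; i < 2; i++) {
      new Thread(() -> {
        try {
          callQueue.put("re-queued-call"); // blocks: queue is full
          callQueue.take();                // never reached
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }, "handler-" + i).start();
    }
  }
}
{code}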
I think that instead of {{callQueue.put()}} we need to use {{callQueue.add()}},
which will never block, and instead throw an overflow exception if the queue is
full. We should also make sure the exception handling is the same as when it
happens in the reader thread, which currently looks like this (taken from
{{processOneRpc}}):
{code}
    ...
    } else {
      processRpcRequest(header, buffer);
    }
  } catch (RpcServerException rse) {
    // inform client of error, but do not rethrow else non-fatal
    // exceptions will close connection!
    if (LOG.isDebugEnabled()) {
      LOG.debug(Thread.currentThread().getName() +
          ": processOneRpc from client " + this +
          " threw exception [" + rse + "]");
    }
    // use the wrapped exception if there is one.
    Throwable t = (rse.getCause() != null) ? rse.getCause() : rse;
    final RpcCall call = new RpcCall(this, callId, retry);
    setupResponse(call,
        rse.getRpcStatusProto(), rse.getRpcErrorCodeProto(), null,
        t.getClass().getName(), t.getMessage());
    sendResponse(call);
  }
{code}
It's not clear to me if the logic you added will match this; can you confirm?
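For illustration, here is a rough sketch (not the actual patch -- the helper
name {{requeueCall}} is made up, and it assumes the overflow from
{{callQueue.add()}} surfaces as, or wraps, an {{RpcServerException}}) of how
the handler-side requeue could mirror the reader-thread handling above:
{code}
// Hypothetical sketch only -- not the actual patch.
private void requeueCall(RpcCall call) throws IOException {
  try {
    // add() never blocks; unlike put(), it throws when the queue is full,
    // so a handler can never wedge itself waiting on the queue it drains.
    callQueue.add(call);
  } catch (RpcServerException rse) { // assumed type of the overflow here
    // Mirror processOneRpc(): respond with the status/error carried by
    // the exception instead of silently dropping the call.
    Throwable t = (rse.getCause() != null) ? rse.getCause() : rse;
    setupResponse(call,
        rse.getRpcStatusProto(), rse.getRpcErrorCodeProto(), null,
        t.getClass().getName(), t.getMessage());
    // Respond directly on the call's connection.
    call.connection.sendResponse(call);
  }
}
{code}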
Edit:
I took a look and the error handling from the handler looks like:
{code}
  void doResponse(Throwable t) throws IOException {
    RpcCall call = this;
    if (t != null) {
      // clone the call to prevent a race with another thread stomping
      // on the response while being sent. the original call is
      // effectively discarded since the wait count won't hit zero
      call = new RpcCall(this);
      setupResponse(call,
          RpcStatusProto.FATAL, RpcErrorCodeProto.ERROR_RPC_SERVER,
          null, t.getClass().getName(), StringUtils.stringifyException(t));
    } else {
      setupResponse(call, call.responseParams.returnStatus,
          call.responseParams.detailedErr, call.rv,
          call.responseParams.errorClass,
          call.responseParams.error);
    }
    connection.sendResponse(call);
  }
{code}
So here the status is always set to {{FATAL}}, but the
{{CallQueueOverflowException}} can be either {{FATAL}} or {{ERROR}} (depending
on whether it is a disconnect or a keepalive backoff) -- IIUC this distinction
is important for telling the client how it should behave. We may need a way to
indicate to {{doResponse()}} what the {{RpcStatusProto}} should be, using the
{{RpcServerException}} thrown by the queue offer.
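One possible shape for that, purely as a sketch (assuming the queue-offer
failure reaches {{doResponse()}} as an {{RpcServerException}}, or wrapping
one):
{code}
// Hypothetical tweak only -- not a patch. Lets the thrown exception choose
// the status instead of hard-coding FATAL/ERROR_RPC_SERVER for everything.
void doResponse(Throwable t) throws IOException {
  RpcCall call = this;
  if (t != null) {
    call = new RpcCall(this);
    if (t instanceof RpcServerException) {
      // Preserve the FATAL vs. ERROR distinction the exception carries,
      // e.g. for a CallQueueOverflowException-driven backoff.
      RpcServerException rse = (RpcServerException) t;
      setupResponse(call,
          rse.getRpcStatusProto(), rse.getRpcErrorCodeProto(),
          null, t.getClass().getName(), StringUtils.stringifyException(t));
    } else {
      setupResponse(call,
          RpcStatusProto.FATAL, RpcErrorCodeProto.ERROR_RPC_SERVER,
          null, t.getClass().getName(), StringUtils.stringifyException(t));
    }
  } else {
    setupResponse(call, call.responseParams.returnStatus,
        call.responseParams.detailedErr, call.rv,
        call.responseParams.errorClass,
        call.responseParams.error);
  }
  connection.sendResponse(call);
}
{code}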
> Handle exception from internalQueueCall
> ---------------------------------------
>
> Key: HDFS-14146
> URL: https://issues.apache.org/jira/browse/HDFS-14146
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ipc
> Reporter: Chao Sun
> Assignee: Chao Sun
> Priority: Critical
> Attachments: HDFS-14146-HDFS-12943.000.patch
>
>
> When we re-queue an RPC call, {{internalQueueCall}} can potentially throw
> exceptions (e.g., for RPC backoff), which are then swallowed. This causes the
> RPC to be silently discarded without a response to the client, which is not
> good.