elek commented on a change in pull request #895: HDDS-1636. Tracing id is not
propagated via async datanode grpc call
URL: https://github.com/apache/hadoop/pull/895#discussion_r290679229
##########
File path:
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/TracingUtil.java
##########
@@ -99,7 +103,16 @@ public static Scope importAndCreateScope(String name,
String encodedParent) {
if (encodedParent != null && encodedParent.length() > 0) {
StringBuilder builder = new StringBuilder();
builder.append(encodedParent);
- parentSpan = tracer.extract(StringCodec.FORMAT, builder);
+ try {
+ parentSpan = tracer.extract(StringCodec.FORMAT, builder);
+ } catch (Exception ex) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Can't extract tracing from the message.", ex);
+ } else {
+ LOG.warn(
Review comment:
Thanks for the hint. I was not familiar with
`org.apache.hadoop.log.LogThrottlingHelper`, but I learned about it today.
* Unfortunately -- based on the javadoc -- it's not thread-safe, so I can't
use it in this static class without significant modification.
* I also noticed that this modification wouldn't help at all, as the
exception is caught by
`PropagationRegistry$ExceptionCatchingExtractorDecorator`, not by this
try/catch block (see the stack trace in the summary).
It may be simpler to keep the original version of the error handling in this
patch. Adopting the throttling helper or configuring the decorator in the
PropagationRegistry could be a bigger task, and we can do it later. What do
you think?