[
https://issues.apache.org/jira/browse/HADOOP-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898215#comment-16898215
]
Íñigo Goiri commented on HADOOP-16453:
--------------------------------------
{quote}
Íñigo Goiri Do you mean that we remove the catch (Throwable) and replace it
with the actual exceptions, or add a NoSuchMethodException catch separately
first, and then, instead of throwing the exception, return the processed
exception?
{quote}
Correct. Actually, [^HADOOP-16453.002.patch] looks much better.
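For illustration, here is a minimal, self-contained sketch of that direction (the class name and standalone {{main}} are hypothetical, not the actual patch): catch {{ReflectiveOperationException}} (which covers {{NoSuchMethodException}}) instead of {{Throwable}}, and return the original exception without logging when the type has no (String) constructor:
{code:java}
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.nio.channels.ClosedByInterruptException;

public class WrapWithMessageSketch {
  @SuppressWarnings("unchecked")
  static <T extends IOException> T wrapWithMessage(T exception, String msg) {
    Class<? extends Throwable> clazz = exception.getClass();
    try {
      // Look up a (String) constructor; ClosedByInterruptException has none.
      Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
      Throwable t = ctor.newInstance(msg);
      return (T) t.initCause(exception);
    } catch (ReflectiveOperationException e) {
      // No (String) constructor: hand back the original exception quietly,
      // with no trace logging.
      return exception;
    }
  }

  public static void main(String[] args) {
    // IOException has a (String) constructor, so it gets wrapped.
    IOException wrapped = wrapWithMessage(new IOException("original"), "extra context");
    System.out.println(wrapped.getMessage());                    // extra context

    // ClosedByInterruptException has none, so the same instance comes back.
    ClosedByInterruptException cbi = new ClosedByInterruptException();
    System.out.println(wrapWithMessage(cbi, "unused") == cbi);   // true
  }
}
{code}
The actual patch may differ in details; this only illustrates narrowing the catch and dropping the trace log.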
> Update how exceptions are handled in NetUtils.java
> --------------------------------------------------
>
> Key: HADOOP-16453
> URL: https://issues.apache.org/jira/browse/HADOOP-16453
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Lisheng Sun
> Assignee: Lisheng Sun
> Priority: Minor
> Attachments: HADOOP-16453.001.patch, HADOOP-16453.002.patch
>
>
> When there is no (String) constructor for the exception, we log a trace
> message. Given that log-and-throw is not a good approach, I think the right
> thing would be to not log it at all, as in HADOOP-16431.
> {code:java}
>   private static <T extends IOException> T wrapWithMessage(
>       T exception, String msg) throws T {
>     Class<? extends Throwable> clazz = exception.getClass();
>     try {
>       Constructor<? extends Throwable> ctor =
>           clazz.getConstructor(String.class);
>       Throwable t = ctor.newInstance(msg);
>       return (T) (t.initCause(exception));
>     } catch (Throwable e) {
>       LOG.trace("Unable to wrap exception of type {}: it has no (String) "
>           + "constructor", clazz, e);
>       throw exception;
>     }
>   }
> {code}
> *exception stack:*
> {code:java}
> 19/07/12 11:23:45 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind:
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:azorprc-xiaomi, Ident: (token for
> sql_prc: HDFS_DELEGATION_TOKEN owner=sql_prc/[email protected],
> renewer=yarn_prc, realUser=, issueDate=1562901814007, maxDate=1594437814007,
> sequenceNumber=3349939, masterKeyId=1400)]
> 19/07/12 11:23:46 TRACE net.NetUtils: Unable to wrap exception of type class
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: java.nio.channels.ClosedByInterruptException.<init>(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1559)
> at org.apache.hadoop.ipc.Client.call(Client.java:1501)
> at org.apache.hadoop.ipc.Client.call(Client.java:1411)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:949)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:143)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 WARN ipc.Client: Exception encountered while connecting to
> the server : java.io.InterruptedIOException: Interrupted while waiting for IO
> on channel java.nio.channels.SocketChannel[connected
> local=/10.118.30.48:34324 remote=/10.69.11.137:11200]. 60000 millis timeout
> left.
> 19/07/12 11:23:48 INFO conf.Configuration: resource-types.xml not found
> 19/07/12 11:23:48 INFO resource.ResourceUtils: Unable to find
> 'resource-types.xml'.
> 19/07/12 11:23:49 INFO impl.YarnClientImpl: Submitted application
> application_1562843952012_2236
> {code}
>
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]