[
https://issues.apache.org/jira/browse/HDFS-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907825#comment-14907825
]
jiangyu commented on HDFS-9143:
-------------------------------
Here is the log from a DataNode:
java.net.SocketTimeoutException: Call From snn/10.39.5.22 to 10.39.5.42:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.22:57698 remote=10.39.5.42/10.39.5.42:8020]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:749)
    at org.apache.hadoop.ipc.Client.call(Client.java:1414)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy14.blockReceivedAndDeleted(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy14.blockReceivedAndDeleted(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReceivedAndDeleted(DatanodeProtocolClientSideTranslatorPB.java:263)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reportReceivedDeletedBlocks(BPServiceActor.java:307)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:839)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.39.5.22:57698 remote=10.39.5.42/10.39.5.42:8020]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:510)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1054)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:949)
> updateCountForQuota method during EditlogTailer loadEdit can make SNN timeout
> very often
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-9143
> URL: https://issues.apache.org/jira/browse/HDFS-9143
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.4.0, 2.6.0
> Reporter: jiangyu
> Priority: Minor
>
> I have seen many logs from DataNodes in our cluster reporting socket timeouts
> when sending heartbeats or blockReceivedAndDeleted to the Standby NameNode, but
> this never happens with the Active NameNode.
> At first I thought it might be caused by the EditLogTailer fetching too many
> edits and triggering a full GC, but the GC log showed no such pause. So I
> investigated the code path and the logs and found that the SNN takes only a few
> seconds to fetch the journal and merge it. However, while the merge is in
> progress the SNN web page does not respond, just as during a stop-the-world
> full GC, even though no GC is happening at that time. I then took jstack dumps
> of the SNN over a period of time and found that all the time is consumed by the
> updateCountForQuota method in FSImage.
> updateCountForQuota is called every time loadEdits runs. It recomputes the
> count of every directory with a quota in the namespace, starting from the root,
> and it holds the write lock the whole time, so every time the SNN merges edits
> from the JournalNodes it effectively stops the world.
> I don't think it is necessary for the SNN to run updateCountForQuota every time
> it tails the edits; when it transitions to Active it calls updateCountForQuota
> anyway, so no quota data would ever be missed (a rough sketch of this idea is
> shown below).
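
To make the proposal concrete, here is a minimal, hypothetical sketch in Java. It is not the real FSImage/EditLogTailer code; the class, field, and parameter names (QuotaRecountSketch, tailingOnStandby, and so on) are made up for illustration. It only shows the shape of the idea: apply the tailed edits under the write lock, skip the full quota recount while tailing on the standby, and run the recount once on the transition to active.

import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class QuotaRecountSketch {

  /** Stand-in for a single edit-log operation. */
  public interface Edit {
    void applyTo(Object namespaceRoot);
  }

  private final Object namespaceRoot = new Object();  // stand-in for the INode tree root
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  /** Tailing path: called after each batch of edits fetched from the JournalNodes. */
  public void loadEdits(List<Edit> edits, boolean tailingOnStandby) {
    lock.writeLock().lock();
    try {
      for (Edit e : edits) {
        e.applyTo(namespaceRoot);  // apply the edit to the namespace
      }
      if (!tailingOnStandby) {
        // The full recount walks every directory with a quota from the root and is
        // what keeps the write lock held long enough to stall DataNode RPCs.
        updateCountForQuota();
      }
    } finally {
      lock.writeLock().unlock();
    }
  }

  /** Failover path: recount once here, so quota counts are correct on the new Active. */
  public void transitionToActive() {
    lock.writeLock().lock();
    try {
      updateCountForQuota();
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void updateCountForQuota() {
    // Placeholder for the recursive quota recount over the whole namespace.
  }
}

In the real code the same decision would have to be made at the point where loadEdits currently calls updateCountForQuota.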