[
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089226#comment-15089226
]
Kai Zheng commented on HDFS-9617:
---------------------------------
Looking at your attached code, you are trying to use *10000* threads to
write to the same HDFS file, which is certainly not going to work. What behavior
and output would you expect? As Kihwal said, this can cause all sorts of problems.
You need to be clear about what you want to achieve, and then ask in the user
mailing list how to do it, as Mingliang suggested.
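For context, the fix the comment implies is to give each writer its own target path, since HDFS grants only one lease (one writer) per file. The sketch below is a hypothetical stand-in using plain java.nio instead of the Hadoop FileSystem API, so it runs without a cluster; class and file names are illustrative only. With HDFS, each thread would call fs.create() on its own distinct Path the same way.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: HDFS allows only one lease holder per file, so
// concurrent uploads must each target a distinct path rather than racing
// on one name like /Tmp2/43.bmp.tmp.
public class UniquePathUpload {
    public static List<Path> uploadConcurrently(Path dir, byte[] data, int threads)
            throws InterruptedException {
        List<Path> written = new ArrayList<>();
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            // Per-thread unique name avoids two writers on the same file.
            Path target = dir.resolve("upload." + i + ".tmp");
            written.add(target);
            Thread t = new Thread(() -> {
                try {
                    Files.write(target, data);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) t.join(); // wait for all writers to finish
        return written;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("upload-sketch");
        byte[] data = "example payload".getBytes(StandardCharsets.UTF_8);
        List<Path> files = uploadConcurrently(dir, data, 8);
        System.out.println(files.size() + " files written");
    }
}
```

If the files must end up under one HDFS name, the usual pattern is to write per-thread temporary files and rename (or merge) them afterwards, serializing only the cheap rename step.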
> my Java client uses multiple threads to put the same file to the same HDFS
> URI; after a no-lease error, the client hits OutOfMemoryError
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: zuotingbing
> Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> at org.apache.hadoop.ipc.Client.call(Client.java:1411)
> at org.apache.hadoop.ipc.Client.call(Client.java:1364)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
> at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my Java client (JVM -Xmx2G):
> jmap histogram, top 15:
> num   #instances      #bytes  class name
> ----------------------------------------------
>   1:       48072  2053976792  [B
>   2:       45852     5987568  <constMethodKlass>
>   3:       45852     5878944  <methodKlass>
>   4:        3363     4193112  <constantPoolKlass>
>   5:        3363     2548168  <instanceKlassKlass>
>   6:        2733     2299008  <constantPoolCacheKlass>
>   7:         533     2191696  [Ljava.nio.ByteBuffer;
>   8:       24733     2026600  [C
>   9:       31287     2002368  org.apache.hadoop.hdfs.DFSOutputStream$Packet
>  10:       31972      767328  java.util.LinkedList$Node
>  11:       22845      548280  java.lang.String
>  12:       20372      488928  java.util.concurrent.atomic.AtomicLong
>  13:        3700      452984  java.lang.Class
>  14:         981      439576  <methodDataKlass>
>  15:        5583      376344  [S
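A quick arithmetic check on the histogram explains the OutOfMemoryError: the top entry alone, 48072 byte[] instances totaling 2,053,976,792 bytes (largely the queued DFSOutputStream$Packet buffers listed at rank 9), already fills nearly the entire 2 GiB heap. A minimal sketch of the calculation, with the figures taken from the jmap output above:

```java
// Verify that the byte[] total from the jmap histogram accounts for
// nearly all of the client's -Xmx2G heap.
public class HeapMath {
    public static void main(String[] args) {
        long byteArrayBytes = 2_053_976_792L;          // "[B" total from jmap
        long heapLimit = 2L * 1024 * 1024 * 1024;      // -Xmx2G in bytes
        double fraction = (double) byteArrayBytes / heapLimit;
        // Packet buffers pinned by 10000 stalled writers fill ~95.6% of the heap.
        System.out.printf("byte[] fills %.1f%% of the 2 GiB heap%n", fraction * 100);
    }
}
```

Once the lease errors stall the streamers, each thread's unacknowledged packets stay referenced, so the heap cannot be reclaimed and the client dies with OOM rather than a clean failure.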
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)