Hi Cheny,

When creating a file, the client must first talk to the NameNode (NN). Because you gave the destination path as a full URI with the DataNode's IP and port, the client treats that IP:port as the NameNode address and tries to connect to it, which is why the call fails.
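To see why, note that the client takes the authority component (host:port) of a fully qualified HDFS path as the filesystem, i.e. NameNode, endpoint. A quick sketch of how such a URI decomposes (plain Python, no Hadoop needed; the IP below is just a placeholder):

```python
from urllib.parse import urlparse

# A fully qualified HDFS path has the form scheme://authority/path.
# The client uses the authority (host:port) as the NameNode RPC
# endpoint -- it is NOT a hint about DataNode block placement.
uri = urlparse("hdfs://10.0.0.5:50010/myfile")
print(uri.scheme)  # hdfs
print(uri.netloc)  # 10.0.0.5:50010  <- treated as the NameNode address
print(uri.path)    # /myfile
```

So pointing the URI at a DataNode's port 50010 makes the client speak the NameNode RPC protocol to the DataNode's data-transfer port, and the connection fails with an EOFException.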
Absolute paths in DFS have the form hdfs://NN_IP:NN_Port/filename; a URI with a different authority is treated as a file in a separate DFS.

Regards,
Uma
HUAWEI TECHNOLOGIES CO., LTD.

-----Original Message-----
From: Cheny [mailto:coconuttree9...@gmail.com]
Sent: Thursday, July 21, 2011 7:04 AM
To: core-u...@hadoop.apache.org
Subject: Error when Using URI in -put command

Not considering replication: if I use the following command from a Hadoop client outside the cluster (the client is not a DataNode),

hadoop dfs -put <localfilename> hdfs://<datanode ip>:50010/<filename>

can I make HDFS locate the first block of the file on that specific DataNode? I tried this and got the error:

put: Call to /xxx.xxx.xxx.xxx (IP of my DataNode):50010 failed on local exception: java.io.EOFException

Any help is greatly appreciated.

--
View this message in context: http://old.nabble.com/Error-when-Using-URI-in--put-command-tp32104146p32104146.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.