[ https://issues.apache.org/jira/browse/HADOOP-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13071491#comment-13071491 ]

XieXianshan commented on HADOOP-7478:
-------------------------------------

The exception is as follows (and I think it was caused not by opening the source 
file but by creating the block on the DataNode):

[root@localhost logs]# hadoop dfs -put /work/xg1 /user/hadoop
11/07/26 10:05:19 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
11/07/26 10:05:19 INFO hdfs.DFSClient: Abandoning block blk_7177429611982124425_1004
11/07/26 10:05:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
11/07/26 10:05:25 INFO hdfs.DFSClient: Abandoning block blk_9193825693957629504_1004
11/07/26 10:05:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
11/07/26 10:05:31 INFO hdfs.DFSClient: Abandoning block blk_-7406363576735183504_1004
11/07/26 10:05:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused
11/07/26 10:05:37 INFO hdfs.DFSClient: Abandoning block blk_1387010999141076236_1004
11/07/26 10:05:43 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

11/07/26 10:05:43 WARN hdfs.DFSClient: Error Recovery for block blk_1387010999141076236_1004 bad datanode[0] nodes == null
11/07/26 10:05:43 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/xg1" - Aborting...
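
For context, a minimal sketch of the client-side sequence behind -put (the class name is hypothetical, the paths are the ones from the log above; this is an illustration, not the FsShell code). The destination entry is created in the NameNode's namespace first, and the ConnectException above is raised later, while the write pipeline tries to place a block on a DataNode, which is consistent with the failure being in block creation rather than in reading the source file:

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PutSketch {                     // hypothetical class, for illustration only
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Step 1: the destination entry is created in the NameNode's namespace.
    // This succeeds even when no DataNode is reachable, leaving a 0-byte file.
    FSDataOutputStream out = fs.create(new Path("/user/hadoop/xg1"));

    // Step 2: copying bytes allocates blocks and opens the DataNode write
    // pipeline; createBlockOutputStream fails here with ConnectException when
    // the DataNode cannot be contacted, and the client abandons the block.
    InputStream in = new FileInputStream("/work/xg1");
    IOUtils.copyBytes(in, out, conf);        // closes both streams when it finishes
  }
}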

> 0-byte files retained in the dfs while the FSshell -put is unsuccessful
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-7478
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7478
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.23.0
>            Reporter: XieXianshan
>            Assignee: XieXianshan
>            Priority: Trivial
>         Attachments: HADOOP-7478.patch
>
>
> The process of putting a file into the dfs is approximately as follows:
> 1) create a file in the dfs
> 2) copy from one stream to the file
> But the problem is that the file is still retained in the dfs when process 2) 
> is terminated abnormally with unexpected exceptions, such as when there is no 
> DataNode alive.
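
One way to avoid leaving the 0-byte file behind (a minimal sketch with a hypothetical helper name, assuming the cleanup is done on the shell/client side; not necessarily what HADOOP-7478.patch does) is to delete the partially created destination when step 2) fails:

import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

class CopyWithCleanup {                      // hypothetical helper, for illustration only
  static void copyToDfs(FileSystem fs, InputStream in, Path dst, Configuration conf)
      throws IOException {
    FSDataOutputStream out = fs.create(dst); // step 1: entry appears in the namespace
    try {
      // step 2: may fail, e.g. when no DataNode is alive
      IOUtils.copyBytes(in, out, conf.getInt("io.file.buffer.size", 4096), false);
      out.close();
    } catch (IOException e) {
      IOUtils.closeStream(out);
      IOUtils.closeStream(in);
      fs.delete(dst, false);                 // remove the retained 0-byte file
      throw e;
    }
  }
}

Whether such cleanup belongs in FsShell or lower down in the client is a design choice for the actual patch; the sketch only shows the intent of removing the destination on failure.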

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
