[ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643380#action_12643380 ]

Konstantin Shvachko commented on HADOOP-4533:
---------------------------------------------

+1
This looks reasonable for 0.18. It fixes the semaphore contention problem and 
keeps the data transfer protocol compatible across the 0.18 releases.
We need to run tests with this patch.
For 0.19 and 0.20 it is better to open another JIRA. 
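
For context, here is a minimal sketch of why an on-the-wire version bump in 
the data transfer protocol breaks older clients in exactly the way the log 
below shows. This is an illustration only: EXPECTED_VERSION and readOpHeader 
are made-up names, not the actual 0.18 identifiers.

import java.io.DataInputStream;
import java.io.IOException;

public class DataTransferHandshake {

  // Hypothetical wire version; bumping this on the server while old
  // clients still send the previous value is the incompatibility here.
  static final short EXPECTED_VERSION = 14;

  static void readOpHeader(DataInputStream in) throws IOException {
    short version = in.readShort();  // first two bytes of every request
    if (version != EXPECTED_VERSION) {
      // The server rejects the request and closes the socket; the old
      // client only sees "Could not read from stream" on its side.
      throw new IOException("Version mismatch: expected "
          + EXPECTED_VERSION + ", got " + version);
    }
    byte op = in.readByte();  // the opcode follows the version
    // ... dispatch on op (write block, read block, ...)
  }
}

Because the server simply drops the connection on a mismatch, the 0.18.1 
client never receives a readable error, only the generic stream failure 
quoted below.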

> HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not 
> compatible
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-4533
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4533
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Runping Qi
>            Assignee: Hairong Kuang
>         Attachments: balancerRM_br18.patch
>
>
> Not sure whether this is considered a bug or expected behavior.
> But here are the details.
> I have a cluster running a build from the hadoop 0.18 branch.
> When I tried to use the hadoop 0.18.1 dfs client to load files to it, I got 
> the following exceptions:
> hadoop --config ~/test dfs -copyFromLocal gridmix-env /tmp/.
> 08/10/28 16:23:00 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/10/28 16:23:00 INFO dfs.DFSClient: Abandoning block 
> blk_-439926292663595928_1002
> 08/10/28 16:23:06 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/10/28 16:23:06 INFO dfs.DFSClient: Abandoning block 
> blk_5160335053668168134_1002
> 08/10/28 16:23:12 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/10/28 16:23:12 INFO dfs.DFSClient: Abandoning block 
> blk_4168253465442802441_1002
> 08/10/28 16:23:18 INFO dfs.DFSClient: Exception in createBlockOutputStream 
> java.io.IOException: Could not read from stream
> 08/10/28 16:23:18 INFO dfs.DFSClient: Abandoning block 
> blk_-2631672044886706846_1002
> 08/10/28 16:23:24 WARN dfs.DFSClient: DataStreamer Exception: 
> java.io.IOException: Unable to create new block.
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2349)
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1735)
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1912)
> 08/10/28 16:23:24 WARN dfs.DFSClient: Error Recovery for block 
> blk_-2631672044886706846_1002 bad datanode[0]
> copyFromLocal: Could not get block locations. Aborting...
> Exception closing file /tmp/gridmix-env
> java.io.IOException: Could not get block locations. Aborting...
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
>       at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
> This problem has a severe impact on Pig 2.0, since it is pre-packaged with 
> hadoop 0.18.1 and will use the hadoop 0.18.1 dfs client when interacting 
> with a hadoop cluster.
> That means that Pig 2.0 will not work with the to-be-released hadoop 0.18.2.
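
For illustration, the retry behavior visible in the log above roughly follows 
this shape. The method names, the Namenode interface, and the retry count are 
assumptions inferred from the log, not the actual DFSClient internals.

import java.io.IOException;

public class BlockWriteRetry {

  // Assumption: four attempts, matching the four abandoned blocks above.
  static final int MAX_ATTEMPTS = 4;

  interface Namenode {
    String allocateBlock() throws IOException;      // hypothetical helper
    void abandonBlock(String blockId) throws IOException;
  }

  static void writeBlock(Namenode nn) throws IOException {
    for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
      String blockId = nn.allocateBlock();
      try {
        openStreamToDatanode(blockId);  // fails on the version mismatch
        return;                         // success: stream is open
      } catch (IOException e) {
        // Matches the "Abandoning block blk_..." lines in the log.
        System.err.println("Exception in createBlockOutputStream " + e);
        System.err.println("Abandoning block " + blockId);
        nn.abandonBlock(blockId);       // retry with a fresh block/datanode
      }
    }
    // All retries exhausted -- matches the final DataStreamer exception.
    throw new IOException("Unable to create new block.");
  }

  static void openStreamToDatanode(String blockId) throws IOException {
    // Placeholder for the real header write + ack read, which is where
    // the incompatible client fails.
    throw new IOException("Could not read from stream");
  }
}

Since every datanode on the upgraded cluster rejects the old header, retrying 
against different blocks and datanodes cannot succeed, and the copy aborts.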

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
