[ https://issues.apache.org/jira/browse/HDFS-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778713#action_12778713 ]

Eli Collins commented on HDFS-727:
----------------------------------

Hey Dhruba,

Now that I can run the libhdfs test on trunk (HDFS-756), I ran it without the 
patch in this JIRA and confirmed that on an Ubuntu 9.10 64-bit host the test 
fails due to this bug. Adding 
{{fprintf(stderr, "jBlockSize=%lld\n", jBlockSize);}} in hdfsOpenFile shows the 
corrupt value in the test output, and the failure ("could only be replicated 
to 0 nodes") is the same one I saw before: no node will accept such a large 
block size.

{quote}
     [exec] jBlockSize 47403621154816
     ...
     [exec] 09/11/16 20:08:06 WARN hdfs.DFSClient: DataStreamer Exception: 
org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
     File /tmp/testfile.txt could only be replicated to 0 nodes, instead of 1
{quote}
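
For anyone who wants to poke at this outside of libhdfs, here's a minimal 
standalone sketch of the mismatch. The variadic {{invoke}} below is a 
hypothetical stand-in for libhdfs's {{invokeMethod}}, not the real code: the 
Java side declares the block size as a long, so the glue pulls a full 8-byte 
jlong off the va_list.

{code}
#include <stdarg.h>
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for libhdfs's variadic invokeMethod(). The Java
 * create() method takes the block size as a long, so the JNI glue reads
 * a full 64-bit jlong off the va_list. */
static void invoke(const char *sig, ...)
{
    va_list args;
    va_start(args, sig);
    /* Consumes 8 bytes. If the caller only pushed a 4-byte int, this is
     * undefined behavior; on a 64-bit host the upper half can be garbage. */
    long long jBlockSize = va_arg(args, long long);
    va_end(args);
    fprintf(stderr, "jBlockSize=%lld\n", jBlockSize);
}

int main(void)
{
    int32_t blockSize = 67108864;          /* e.g. 64 MB, fits in 32 bits */

    invoke("(J)V", blockSize);             /* bug: 4 defined bytes + garbage */
    invoke("(J)V", (long long)blockSize);  /* fix: full 8-byte value */
    return 0;
}
{code}

Whether the first call actually prints a bogus value depends on the compiler 
and ABI (it's undefined behavior either way), but it's the same width 
mismatch that produces the 47403621154816 above.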

The patch still applies cleanly against trunk, 0.20.1, and 0.20.2.

Thanks,
Eli

> bug setting block size hdfsOpenFile 
> ------------------------------------
>
>                 Key: HDFS-727
>                 URL: https://issues.apache.org/jira/browse/HDFS-727
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Eli Collins
>            Assignee: Eli Collins
>             Fix For: 0.20.2, 0.21.0
>
>         Attachments: hdfs727.patch
>
>
> In hdfsOpenFile in libhdfs, the block size argument passed to invokeMethod 
> needs to be cast to a jlong so that the full 8 bytes are passed (rather than 
> 4 bytes plus garbage, which causes writes to fail due to a bogus block size).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
