[ https://issues.apache.org/jira/browse/HADOOP-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648476#action_12648476 ]

Brian Bockelman commented on HADOOP-4619:
-----------------------------------------

Hey Pete,

I think Hudson will kill us if we post another patch, but...

There are two issues:
1) The size of the file to create: this should be off_t (look at the struct defined 
in sys/stat.h: 
http://www.opengroup.org/onlinepubs/000095399/basedefs/sys/stat.h.html).
2) The possible size of the writes reported by hdfsWrite: libhdfs defines this as 
tSize.  Currently, it's an unsigned int32 (I think), but it really should be 
ssize_t.

We can't fix (2) without fixing libhdfs.  We can fix (1), though; I can't 
think of a system where off_t would still be a 32-bit int...

> hdfs_write infinite loop when dfs fails and cannot write files > 2 GB
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-4619
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4619
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 0.19.0, 0.20.0
>            Reporter: Pete Wyckoff
>            Assignee: Pete Wyckoff
>             Fix For: 0.19.1, 0.20.0
>
>         Attachments: HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt, 
> HADOOP-4619.txt, HADOOP-4619.txt, HADOOP-4619.txt
>
>
> 1. hdfs_write  does not check hdfsWrite return code so -1 return code is 
> ignored.
> 2. hdfs_write uses int for overall file length

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.