These "bugs" seem to be unrelated to what you're seeing. The issues reported
below are from people calling transferTo() with a size greater than 64 MB.
Hadoop will never do this, as it only calls transferTo() for "packet"-sized chunks.

It's still unclear to me what the original issue is. The preallocate()
function only allocates a few MB at a time, if I recall correctly.
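
[Editor's note: for readers following along, a preallocation loop of the shape
described above can be sketched as below. This is an illustration only, not the
actual Hadoop FSEditLog code; the class name, chunk size, and file handling are
assumptions.]

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PreallocateSketch {
    // Hypothetical chunk size; the value used by FSEditLog may differ.
    static final int CHUNK = 1024 * 1024; // 1 MB

    // Extend the file in small fixed-size writes, so no single NIO
    // call ever touches a large region of the file at once.
    static void preallocate(FileChannel fc, long total) throws IOException {
        ByteBuffer fill = ByteBuffer.allocate(CHUNK); // zero-filled by default
        long written = 0;
        while (written < total) {
            fill.clear();
            int toWrite = (int) Math.min(CHUNK, total - written);
            fill.limit(toWrite);
            // Write at the current end of the file.
            written += fc.write(fill, fc.size());
        }
    }

    public static void main(String[] args) throws IOException {
        java.io.File f = java.io.File.createTempFile("editlog", ".tmp");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel fc = raf.getChannel()) {
            preallocate(fc, 3L * CHUNK + 123);
            if (fc.size() != 3L * CHUNK + 123) {
                throw new AssertionError("unexpected file size: " + fc.size());
            }
        }
    }
}
```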

-Todd

On Thu, Mar 18, 2010 at 12:15 AM, Gokulakannan M <gok...@huawei.com> wrote:

>  Hi Todd,
>
>
>
>             What I meant is that Sun NIO has issues when it comes to large
> volumes of data.
>
>
>
>             These are the references related to this.
>
>             http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4938442
>             http://forums.sun.com/thread.jspa?threadID=607775
>
>
>
>             Coming back to this issue, I suspected that NIO might have a
> similar problem with large disk volumes as well.
>
>
>
>  Thanks,
>
>   Gokul
>
>
>   ------------------------------
>
> *From:* Todd Lipcon [mailto:t...@cloudera.com]
> *Sent:* Wednesday, March 17, 2010 9:26 PM
> *To:* hdfs-user@hadoop.apache.org; gok...@huawei.com
> *Subject:* Re: Problem formatting namenode
>
>
>
> Hi Gokul,
>
> Do you have a reference to a Java bug ID that discusses this? I wasn't
> aware of problems with large disks and Java NIO.
>
> -Todd
>
> On Tue, Mar 16, 2010 at 11:49 PM, Gokulakannan M <gok...@huawei.com>
> wrote:
>
> Hi,
>
>
>
>       The problem here is the *FileDispatcher.pwrite* of Sun NIO.
>
>       FileDispatcher.java has many issues when it comes to large volumes
> of data.
>
>
>
>       Can anyone suggest an *alternative* that can be used instead of Sun
>       NIO's FileDispatcher.java in
>
>       *org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.preallocate(FSEditLog.java:228)*?
>
>
>
>
>
>  Thanks,
>
>   Gokul
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>



-- 
Todd Lipcon
Software Engineer, Cloudera
