ant test fails for TestCrcCorruption with OutOfMemoryError.
-----------------------------------------------------------

                 Key: HADOOP-2955
                 URL: https://issues.apache.org/jira/browse/HADOOP-2955
             Project: Hadoop Core
          Issue Type: Bug
            Reporter: Mahadev konar
            Assignee: Raghu Angadi
            Priority: Blocker


TestCrcCorruption sometimes corrupts the CRC metadata in a way that damages 
the bytes-per-checksum field (the second field in the metadata). This does 
not happen on every run, since the corruption in the test is random.
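
To illustrate the failure mode, here is a standalone sketch (not Hadoop 
code): the bytes-per-checksum field is a 4-byte int, so a single random bit 
flip in its high-order byte turns a sane default like 512 into a value over 
a billion, which the allocation shown below then tries to honor. The 0x49 
mask is an arbitrary illustrative choice.

  import java.nio.ByteBuffer;

  public class CorruptedBpcDemo {
    public static void main(String[] args) {
      // A healthy bytes-per-checksum value (512 is the usual default).
      byte[] field = ByteBuffer.allocate(4).putInt(512).array();

      // Flip bits in the most significant byte, as the test's random
      // corruption can (0x49 is an arbitrary illustrative mask).
      field[0] ^= 0x49;

      int corrupted = ByteBuffer.wrap(field).getInt();
      System.out.println("bytesPerChecksum after corruption: " + corrupted);
      // Prints 1224737280 -- on the order of the ~1.2 GB allocation
      // seen in the log excerpt below.
    }
  }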

I put a debug statement in the allocation to see how many bytes were being 
allocated and ran the test a few times. This is one of the allocations in 
BlockSender.sendBlock():

  int maxChunksPerPacket = Math.max(1,
      (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);
  int sizeofPacket = PKT_HEADER_LEN +
      (bytesPerChecksum + checksumSize) * maxChunksPerPacket;
  LOG.info("Comment: bytes to allocate " + sizeofPacket);
  ByteBuffer pktBuf = ByteBuffer.allocate(sizeofPacket);


The output from one of the allocations:

 dfs.DataNode (DataNode.java:sendBlock(1766)) - Comment: bytes to allocate 1232596786
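
To see why the number is that large: once the corrupted bytesPerChecksum 
dwarfs BUFFER_SIZE, the division collapses maxChunksPerPacket to 1, so 
sizeofPacket degenerates to roughly PKT_HEADER_LEN + bytesPerChecksum + 
checksumSize, i.e. the corrupted field itself. A minimal sketch; the 
constants here are assumed, and the corrupted value is picked purely so the 
result matches the log line above:

  public class PacketSizeDemo {
    public static void main(String[] args) {
      final int BUFFER_SIZE = 4096;    // assumed buffer size
      final int PKT_HEADER_LEN = 21;   // assumed packet header length
      final int checksumSize = 4;      // a CRC32 checksum is 4 bytes

      // Corrupted value read from the damaged meta file (illustrative;
      // chosen so the result reproduces the 1232596786 from the log).
      int bytesPerChecksum = 1232596761;

      // Same arithmetic as BlockSender.sendBlock(): the division
      // yields 1 because bytesPerChecksum >> BUFFER_SIZE.
      int maxChunksPerPacket = Math.max(1,
          (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);
      int sizeofPacket = PKT_HEADER_LEN +
          (bytesPerChecksum + checksumSize) * maxChunksPerPacket;

      System.out.println("maxChunksPerPacket = " + maxChunksPerPacket); // 1
      System.out.println("sizeofPacket = " + sizeofPacket); // 1232596786
    }
  }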

So we should sanity-check the number of bytes being allocated in sendBlock 
(it should be less than the block size? -- that seems like a reasonable 
default).
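
A rough sketch of what such a check could look like (a placeholder, not a 
patch; blockLength stands in for the block size sendBlock() already has in 
scope, and the exact bound may need slack for the header and checksum 
bytes):

  import java.io.IOException;

  class PacketSizeSanityCheck {
    // Refuse to allocate a packet buffer larger than the block itself.
    // A corrupted bytesPerChecksum field would trip this check instead
    // of triggering an OutOfMemoryError.
    static void checkPacketSize(int sizeofPacket, long blockLength)
        throws IOException {
      if (sizeofPacket <= 0 || sizeofPacket > blockLength) {
        throw new IOException("Corrupted checksum metadata? Packet size "
            + sizeofPacket + " exceeds block length " + blockLength);
      }
    }
  }

The idea is to call this just before the ByteBuffer.allocate() call shown 
above, so a corrupted meta file fails fast with a checksum-corruption error 
rather than an OOM.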



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
