[ https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uma Maheswara Rao G updated HADOOP-9505:
----------------------------------------

    Description: 
I have created a file with the checksum disabled option, and I am seeing an
ArrayIndexOutOfBoundsException.
{code}
out = fs.create(fileName, FsPermission.getDefault(), flags,
    fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
    fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
{code}
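
For reference, a self-contained version of the above might look like the
following (the path, create flags, and replication factor here are
illustrative assumptions, and {{fs}} needs to point at HDFS, e.g. a
MiniDFSCluster, since the checksummed local filesystem takes a different
write path):
{code}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NullChecksumRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: fs.defaultFS points at a running HDFS instance.
    FileSystem fs = FileSystem.get(conf);
    Path fileName = new Path("/tmp/nullChecksumFile"); // illustrative path
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
    short replFactor = 1;
    FSDataOutputStream out = fs.create(fileName, FsPermission.getDefault(), flags,
        fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
        fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
    out.write(new byte[] {1, 2, 3}); // throws ArrayIndexOutOfBoundsException: 0
    out.close();
  }
}
{code}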

See the trace here:
{noformat}
java.lang.ArrayIndexOutOfBoundsException: 0
        at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
        at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
{noformat}
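
My understanding of the failure (an assumption from reading the code, not
stated in the trace itself): {{ChecksumOpt.createDisabled()}} selects
{{DataChecksum.Type.NULL}}, whose checksum size is 0, so the summer's
checksum buffer is allocated with length 0 and the very first {{bytes[0]}}
store in int2byte overflows it. Minimal illustration:
{code}
import org.apache.hadoop.util.DataChecksum;

// Assumption: this mirrors how the write path sizes its checksum buffer
// from DataChecksum#getChecksumSize().
DataChecksum nullSum =
    DataChecksum.newDataChecksum(DataChecksum.Type.NULL, 512);
byte[] checksumBuf = new byte[nullSum.getChecksumSize()]; // length == 0
checksumBuf[0] = 0; // ArrayIndexOutOfBoundsException: 0, as in the trace
{code}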

FSOutputSummer#int2byte does not check the length of the bytes array at all.
Do you think we should check the length and skip the call in the NULL-CRC
case, since there will not be any checksum bytes to write?
{code}
static byte[] int2byte(int integer, byte[] bytes) {
  bytes[0] = (byte)((integer >>> 24) & 0xFF);
  bytes[1] = (byte)((integer >>> 16) & 0xFF);
  bytes[2] = (byte)((integer >>>  8) & 0xFF);
  bytes[3] = (byte)((integer >>>  0) & 0xFF);
  return bytes;
}
{code}
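
One possible guard (just a sketch of the idea, not a tested patch) is to
make int2byte a no-op when the buffer is empty, so the NULL-CRC path never
indexes into it:
{code}
// Sketch: skip the conversion when there are no checksum bytes, as is
// the case for DataChecksum.Type.NULL.
static byte[] int2byte(int integer, byte[] bytes) {
  if (bytes.length != 0) {
    bytes[0] = (byte)((integer >>> 24) & 0xFF);
    bytes[1] = (byte)((integer >>> 16) & 0xFF);
    bytes[2] = (byte)((integer >>>  8) & 0xFF);
    bytes[3] = (byte)((integer >>>  0) & 0xFF);
  }
  return bytes;
}
{code}
Alternatively, writeChecksumChunk could check checksum.length before calling
int2byte; either way the write path would tolerate a zero-length checksum
buffer.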


> Specifying checksum type to NULL can cause write failures with AIOBE
> --------------------------------------------------------------------
>
>                 Key: HADOOP-9505
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9505
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.0.5-beta
>            Reporter: Uma Maheswara Rao G
>            Priority: Minor
