[ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11938:
-------------------------------
    Description: 
While investigating a test failure in {{TestRecoverStripedFile}}, an issue was 
found in the raw erasure coder, caused by an optimization in the code below. It 
assumes that the backing byte array of a heap buffer available for reading or 
writing always starts at index zero and occupies the whole array.
{code}
  protected static byte[][] toArrays(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];

    ByteBuffer buffer;
    for (int i = 0; i < buffers.length; i++) {
      buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }

      if (buffer.hasArray()) {
        bytesArr[i] = buffer.array();
      } else {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
    }

    return bytesArr;
  }
{code} 
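
To illustrate (a minimal sketch, not from the Hadoop codebase): the valid region 
of a heap buffer does not necessarily begin at index 0 of the array returned by 
{{array()}}; it begins at {{arrayOffset() + position()}} and spans 
{{remaining()}} bytes, so indexing the raw array from zero can touch the wrong 
region.
{code}
import java.nio.ByteBuffer;

public class HeapBufferOffsetDemo {
  public static void main(String[] args) {
    byte[] backing = new byte[]{0, 1, 2, 3, 4, 5, 6, 7};
    // A heap buffer covering only bytes 2..5 of the backing array.
    ByteBuffer buffer = ByteBuffer.wrap(backing, 2, 4);

    byte[] arr = buffer.array();                           // the whole 8-byte array
    int start = buffer.arrayOffset() + buffer.position();  // 2, not 0
    int len = buffer.remaining();                          // 4, not arr.length

    System.out.println(arr.length);                        // 8
    System.out.println(start + ", " + len);                // 2, 4

    // A coder that reads arr[0..len) here would process bytes 0..3
    // instead of the intended bytes 2..5.
  }
}
{code}
So the conversion either has to copy out the {{remaining()}} bytes, or the 
encode/decode path has to carry the offset and length alongside the raw arrays.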

  was:
While investigating a test failure in {{TestRecoverStripedFile}}, an issue was 
found in the raw erasure coder, a bad optimization in the code below. It 
assumes that the backing byte array of a heap buffer available for reading or 
writing always starts at index zero and occupies the whole array.
{code}
  protected static byte[][] toArrays(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];

    ByteBuffer buffer;
    for (int i = 0; i < buffers.length; i++) {
      buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }

      if (buffer.hasArray()) {
        bytesArr[i] = buffer.array();
      } else {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
    }

    return bytesArr;
  }
{code} 


> Fix ByteBuffer version encode/decode API of raw erasure coder
> -------------------------------------------------------------
>
>                 Key: HADOOP-11938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11938
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: io
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HADOOP-11938-HDFS-7285-workaround.patch
>
>
> While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
> was found in the raw erasure coder, caused by an optimization in the code 
> below. It assumes that the backing byte array of a heap buffer available for 
> reading or writing always starts at index zero and occupies the whole array.
> {code}
>   protected static byte[][] toArrays(ByteBuffer[] buffers) {
>     byte[][] bytesArr = new byte[buffers.length][];
>     ByteBuffer buffer;
>     for (int i = 0; i < buffers.length; i++) {
>       buffer = buffers[i];
>       if (buffer == null) {
>         bytesArr[i] = null;
>         continue;
>       }
>       if (buffer.hasArray()) {
>         bytesArr[i] = buffer.array();
>       } else {
>         throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
>             "expecting heap buffer");
>       }
>     }
>     return bytesArr;
>   }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
