Yiqun Lin created HDFS-12565:
--------------------------------

             Summary: Ozone: Concurrent put key operations fail on Windows
                 Key: HDFS-12565
                 URL: https://issues.apache.org/jira/browse/HDFS-12565
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
    Affects Versions: HDFS-7240
            Reporter: Yiqun Lin


When creating a batch of keys under a specified bucket, the following error occurs on Windows. It was found by executing {{TestOzoneShell#testListKey()}}.
{noformat}
org.apache.hadoop.scm.container.common.helpers.StorageContainerException: org.apache.hadoop.scm.container.common.helpers.StorageContainerException: Invalid write size found. Size: 1768160 Expected: 10
        at org.apache.hadoop.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:373)
        at org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:175)
        at org.apache.hadoop.scm.storage.ChunkOutputStream.writeChunkToContainer(ChunkOutputStream.java:224)
        at org.apache.hadoop.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:154)
        at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:265)
        at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:174)
        at org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:58)
        at org.apache.hadoop.ozone.web.storage.DistributedStorageHandler.commitKey(DistributedStorageHandler.java:405)
        at org.apache.hadoop.ozone.web.handlers.KeyHandler$2.doProcess(KeyHandler.java:196)
        at org.apache.hadoop.ozone.web.handlers.KeyProcessTemplate.handleCall(KeyProcessTemplate.java:91)
        at org.apache.hadoop.ozone.web.handlers.KeyHandler.putKey(KeyHandler.java:199)
{noformat}
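
For reference, a rough, self-contained sketch of the failing scenario: several put-key calls issued for the same bucket in parallel. This is not the actual test code; {{putKey()}} below is a hypothetical stand-in for whatever client call writes one key.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchPutKeySketch {

  // Hypothetical stand-in for the real client call that writes one key.
  static void putKey(String bucket, String key, byte[] data) {
    // e.g. open an output stream for the key, write data, close it
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> results = new ArrayList<>();
    byte[] data = "0123456789".getBytes();           // small 10-byte payload
    for (int i = 0; i < 10; i++) {
      final String key = "key-" + i;
      results.add(pool.submit(() -> putKey("bucket1", key, data)));
    }
    for (Future<?> f : results) {
      f.get();                                       // surfaces any failure
    }
    pool.shutdown();
  }
}
{code}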

The related code ({{ChunkUtils#writeData}}):
{code}
 public static void writeData(File chunkFile, ChunkInfo chunkInfo,
      byte[] data) throws
      StorageContainerException, ExecutionException, InterruptedException,
      NoSuchAlgorithmException {
    ...

    try {
      file =
          AsynchronousFileChannel.open(chunkFile.toPath(),
              StandardOpenOption.CREATE,
              StandardOpenOption.WRITE,
              StandardOpenOption.SPARSE,
              StandardOpenOption.SYNC);
      lock = file.lock().get();
      if (chunkInfo.getChecksum() != null &&
          !chunkInfo.getChecksum().isEmpty()) {
        verifyChecksum(chunkInfo, data, log);
      }
      int size = file.write(ByteBuffer.wrap(data), chunkInfo.getOffset()).get();
      if (size != data.length) { // <== the exception above is thrown here
        log.error("Invalid write size found. Size:{}  Expected: {} ", size,   
            data.length);
        throw new StorageContainerException("Invalid write size found. " +
            "Size: " + size + " Expected: " + data.length, INVALID_WRITE_SIZE);
      }
...
{code}
However, if we put only a single key, it runs fine.
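
For debugging this on Windows in isolation, here is a minimal standalone sketch (my own, not Ozone code) that mirrors the open/lock/write/size-check steps of {{ChunkUtils#writeData}}, using a hypothetical {{chunk.tmp}} file and a 10-byte payload like the test keys:
{code}
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class WriteSizeCheckSketch {
  public static void main(String[] args) throws Exception {
    byte[] data = "0123456789".getBytes();           // 10 bytes, like the test keys
    try (AsynchronousFileChannel file = AsynchronousFileChannel.open(
        Paths.get("chunk.tmp"),                      // hypothetical chunk file
        StandardOpenOption.CREATE,
        StandardOpenOption.WRITE,
        StandardOpenOption.SPARSE,
        StandardOpenOption.SYNC)) {
      FileLock lock = file.lock().get();             // exclusive lock, as in writeData
      try {
        // Single write at offset 0, then compare the reported byte count.
        int size = file.write(ByteBuffer.wrap(data), 0).get();
        if (size != data.length) {
          System.err.println("Invalid write size found. Size: " + size
              + " Expected: " + data.length);
        }
      } finally {
        lock.release();
      }
    }
  }
}
{code}
Note that {{AsynchronousFileChannel.write()}} reports the byte count of that single write only, so {{writeData}}'s check compares one write's return value against the full data length.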


