bshashikant commented on a change in pull request #1685:
URL: https://github.com/apache/ozone/pull/1685#discussion_r553719956



##########
File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
##########
@@ -301,36 +348,38 @@ private synchronized void readChunkFromContainer(int len) 
throws IOException {
       startByteIndex = chunkPosition;
     } else {
       // Start reading the chunk from the last chunkPosition onwards.
-      startByteIndex = bufferOffset + bufferLength;
+      startByteIndex = bufferOffsetWrtChunkData + bufferLength;
     }
 
-    // bufferOffset and bufferLength are updated below, but if read fails
+    // bufferOffsetWrtChunkData and bufferLength are updated after the data
+    // is read from Container and put into the buffers, but if read fails
     // and is retried, we need the previous position.  Position is reset after
     // successful read in adjustBufferPosition()
     storePosition();
 
+    long adjustedBuffersOffset, adjustedBuffersLen;
     if (verifyChecksum) {
-      // Update the bufferOffset and bufferLength as per the checksum
-      // boundary requirement.
-      computeChecksumBoundaries(startByteIndex, len);
+      // Adjust the chunk offset and length to include required checksum
+      // boundaries
+      Pair<Long, Long> adjustedOffsetAndLength =
+          computeChecksumBoundaries(startByteIndex, len);
+      adjustedBuffersOffset = adjustedOffsetAndLength.getLeft();
+      adjustedBuffersLen = adjustedOffsetAndLength.getRight();
     } else {
       // Read from the startByteIndex
-      bufferOffset = startByteIndex;
-      bufferLength = len;
+      adjustedBuffersOffset = startByteIndex;
+      adjustedBuffersLen = len;
     }
 
     // Adjust the chunkInfo so that only the required bytes are read from
     // the chunk.
     final ChunkInfo adjustedChunkInfo = ChunkInfo.newBuilder(chunkInfo)
-        .setOffset(bufferOffset + chunkInfo.getOffset())
-        .setLen(bufferLength)
+        .setOffset(adjustedBuffersOffset + chunkInfo.getOffset())
+        .setLen(adjustedBuffersLen)
         .build();
 
-    ByteString byteString = readChunk(adjustedChunkInfo);
-
-    buffers = byteString.asReadOnlyByteBufferList();
-    bufferIndex = 0;
-    allocated = true;
+    byte[] chunkData = readChunk(adjustedChunkInfo).toByteArray();

Review comment:
       The basic problem we are trying to solve here is to minimize the memory 
overhead in the client. Adding an extra buffer copy (as this patch does) does 
not seem like a reasonable way to achieve that. Let's discuss in more detail 
how to address this.
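
For context, a minimal sketch of the copy behaviour under discussion, assuming
the protobuf `ByteString` returned by `readChunk` (the class and sample data
below are only illustrative): `toByteArray()` allocates a fresh `byte[]` and
copies the chunk bytes into it, whereas the previous
`asReadOnlyByteBufferList()` exposes the same backing data as read-only
buffers without that extra copy.

```java
import com.google.protobuf.ByteString;
import java.nio.ByteBuffer;
import java.util.List;

public class ByteStringCopyDemo {
  public static void main(String[] args) {
    // Stand-in for the ByteString returned by readChunk(adjustedChunkInfo).
    ByteString chunk = ByteString.copyFromUtf8("chunk data read from the datanode");

    // Previous approach: view the existing data as read-only ByteBuffers.
    // No additional copy of the chunk bytes is made here.
    List<ByteBuffer> buffers = chunk.asReadOnlyByteBufferList();

    // Patched approach: materialize a new byte[] holding a copy of the data,
    // roughly doubling the transient memory needed for the chunk.
    byte[] chunkData = chunk.toByteArray();

    System.out.println("buffer views: " + buffers.size()
        + ", copied bytes: " + chunkData.length);
  }
}
```

With large chunk sizes, that second allocation is exactly the kind of client
memory overhead the change set out to reduce, which is why the copy is worth
discussing before merging.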




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


