bshashikant commented on a change in pull request #2883:
URL: https://github.com/apache/ozone/pull/2883#discussion_r765410691



##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/StreamBuffer.java
##########
@@ -43,4 +43,20 @@ public int length() {
     return buffer.limit() - buffer.position();
   }
 
+  public ByteBuffer getBuffer() {
+    return buffer;
+  }
+
+  public void put(StreamBuffer sb) {
+    buffer.put(sb.getBuffer());
+  }
+
+  public static StreamBuffer allocate(int size) {

Review comment:
       It's better to add position() and the other calls needed here, instead of handing out an instance of the underlying buffer directly.
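
   Something like the sketch below, for illustration (the added accessors are assumptions, not the PR's API):

```java
import java.nio.ByteBuffer;

// Sketch: StreamBuffer delegates the calls callers need, so the backing
// ByteBuffer never escapes the class.
public final class StreamBuffer {
  private final ByteBuffer buffer;

  private StreamBuffer(ByteBuffer buffer) {
    this.buffer = buffer;
  }

  public static StreamBuffer allocate(int size) {
    return new StreamBuffer(ByteBuffer.allocate(size));
  }

  /** Delegated accessor instead of exposing the buffer. */
  public int position() {
    return buffer.position();
  }

  public int length() {
    return buffer.limit() - buffer.position();
  }

  /** Copies from another StreamBuffer without handing out the buffer. */
  public void put(StreamBuffer sb) {
    buffer.put(sb.buffer);
  }
}
```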

##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockDataStreamOutput.java
##########
@@ -125,7 +127,9 @@
   private List<CompletableFuture<DataStreamReply>> futures = new ArrayList<>();
   private final long syncSize = 0; // TODO: disk sync is disabled for now
   private long syncPosition = 0;
-
+  private StreamBuffer currentBuffer;
+  private int currentBufferRemaining;
+  private XceiverClientMetrics metrics;

Review comment:
       Do we really need to track currentBufferRemaining? It can always be deduced from the StreamBuffer itself.
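
   For example (a sketch; it assumes StreamBuffer delegates remaining() to its backing ByteBuffer, per the comment above):

```java
// Derive the free space on demand instead of tracking a separate field.
// ByteBuffer.remaining() == limit - position, i.e. the space left to fill.
private int currentBufferRemaining() {
  return currentBuffer.remaining();
}
```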

##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/OzoneClientConfig.java
##########
@@ -69,6 +69,14 @@
       tags = ConfigTag.CLIENT)
   private long dataStreamBufferFlushSize = 16 * 1024 * 1024;
 
+  @Config(key = "datastream.min.packet.size",
+      defaultValue = "64KB",
+      type = ConfigType.SIZE,
+      description = "The maximum size of the ByteBuffer "

Review comment:
       Let's set this size to 1 MB by default, to match the default bytesPerChecksum.
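
   Roughly (a sketch; the field name and the elided description are placeholders, not the PR's code):

```java
// Hypothetical: bump the default so one packet covers a full checksum chunk.
@Config(key = "datastream.min.packet.size",
    defaultValue = "1MB",  // 1 MB == default bytesPerChecksum
    type = ConfigType.SIZE,
    description = "...",   // keep the PR's description text here
    tags = ConfigTag.CLIENT)
private int dataStreamMinPacketSize = 1024 * 1024;
```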

##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/StreamBuffer.java
##########
@@ -43,4 +43,20 @@ public int length() {
     return buffer.limit() - buffer.position();
   }
 
+  public ByteBuffer getBuffer() {

Review comment:
       It's safe to always return a read-only copy of the buffer if one is required. I would prefer to remove this method.
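
   If a read accessor must stay, a safer sketch (the method name is illustrative):

```java
// Writes through a read-only view throw ReadOnlyBufferException, so the
// internal buffer cannot be mutated by callers.
public ByteBuffer readOnlyBuffer() {
  return buffer.asReadOnlyBuffer();
}
```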

##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockDataStreamOutput.java
##########
@@ -308,11 +335,10 @@ public void writeOnRetry(long len) throws IOException {
     int count = 0;
     while (len > 0) {
       final StreamBuffer buf = bufferList.get(count);
-      final long writeLen = Math.min(buf.length(), len);
+      final long writeLen = Math.min(buf.getBuffer().position(), len);
       final ByteBuffer duplicated = buf.duplicate();
-      if (writeLen != buf.length()) {
-        duplicated.limit(Math.toIntExact(len));
-      }
+      duplicated.position(0);

Review comment:
       Will this position always be set to 0? The starting offset should be deduced from the last acked length in case of retry.
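
   Roughly along these lines (a sketch; bufferStartOffset and lastAckedLength are placeholder names, not fields from the PR):

```java
// Hypothetical: resume the retried write where the last ack left off,
// instead of always rewinding the duplicated buffer to position 0.
private static int retryStartPosition(long bufferStartOffset,
    long lastAckedLength) {
  return Math.toIntExact(Math.max(0, lastAckedLength - bufferStartOffset));
}
// usage: duplicated.position(
//     retryStartPosition(bufferStartOffset, lastAckedLength));
```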
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


