xiaoyuyao commented on a change in pull request #830: HDDS-1530. Freon support 
big files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r288799644
 
 

 ##########
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##########
 @@ -622,7 +642,11 @@ public void run() {
                 try (Scope writeScope = GlobalTracer.get()
                     .buildSpan("writeKeyData")
                     .startActive(true)) {
-                  os.write(keyValue);
+                  for (long nrRemaining = keySize - randomValue.length;
+                        nrRemaining > 0; nrRemaining -= bufferSize) {
+                    int curSize = (int)Math.min(bufferSize, nrRemaining);
+                    os.write(keyValueBuffer, 0, curSize);
 
 Review comment:
   You are right, there is no issue at the socket layer. I was thinking of the 
DN side: with this scheme, the chunk files written for the same key could 
contain identical data. That might improve write performance compared with 2GB 
of fully random chunks. As long as we use it consistently, it should be fine. 
Later on, we can add an option to write zeros only by default and write random 
data up to bufferSize when a parameter is specified.
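   A minimal sketch of that follow-up idea (not code from this PR; the class, 
method, and flag names below are hypothetical): fill the reusable buffer with 
zeros by default, randomize it only when the new parameter is set, and keep the 
same chunked write loop shown in the diff above.
   ```java
   import java.io.IOException;
   import java.io.OutputStream;
   import java.util.concurrent.ThreadLocalRandom;

   // Hypothetical sketch, not the PR's actual code.
   public final class PayloadWriterSketch {

     /** Builds the reusable payload buffer: zero-filled by default, random only if requested. */
     static byte[] buildPayloadBuffer(int bufferSize, boolean randomData) {
       byte[] buffer = new byte[bufferSize];            // Java arrays are zero-initialized
       if (randomData) {
         ThreadLocalRandom.current().nextBytes(buffer); // random data up to bufferSize
       }
       return buffer;
     }

     /** Writes keySize bytes by repeating the buffer, mirroring the chunked loop in the diff. */
     static void writeKey(OutputStream os, long keySize, byte[] buffer)
         throws IOException {
       for (long remaining = keySize; remaining > 0; remaining -= buffer.length) {
         int curSize = (int) Math.min(buffer.length, remaining);
         os.write(buffer, 0, curSize);
       }
     }
   }
   ```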


