yutaoChina commented on a change in pull request #3892:
URL: https://github.com/apache/carbondata/pull/3892#discussion_r472681680



##########
File path: integration/flink/src/main/java/org/apache/carbon/core/metadata/StageManager.java
##########
@@ -81,7 +81,7 @@ public static void writeStageInput(final String stageInputPath, final StageInput
   private static void writeSuccessFile(final String successFilePath) throws IOException {
     final DataOutputStream segmentStatusSuccessOutputStream =
         FileFactory.getDataOutputStream(successFilePath,
-            CarbonCommonConstants.BYTEBUFFER_SIZE, 1024);
+            CarbonCommonConstants.BYTEBUFFER_SIZE, 1024 * 1024 * 2);

Review comment:
   I set it to 2M because the minimum HDFS block size configured by `dfs.namenode.fs-limits.min-block-size` is 1M, and the `getMaxOfBlockAndFileSize(long blockSize, long fileSize)` method in `CarbonUtil.java` uses:
   ```java
   long maxSize = blockSize;
   if (fileSize > blockSize) {
     maxSize = fileSize;
   }
   ```
   If both the block size we pass and the file size are below 1M, the resulting value is below the HDFS minimum and the program will fail.
   Why 2M? The default minimum is 1M, so default * 2 is safely above it.
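   To make the argument above concrete, here is a small standalone sketch (not CarbonData source; the class name and constants are illustrative) that mirrors the max-of-two logic of `getMaxOfBlockAndFileSize` and shows why the old hard-coded `1024` could fall below HDFS's 1M floor while `1024 * 1024 * 2` cannot:

   ```java
   public class BlockSizeSketch {

     // Default minimum enforced by dfs.namenode.fs-limits.min-block-size: 1 MB.
     static final long HDFS_MIN_BLOCK_SIZE = 1024 * 1024;

     // Same logic as CarbonUtil.getMaxOfBlockAndFileSize(blockSize, fileSize):
     // return whichever of the two sizes is larger.
     static long getMaxOfBlockAndFileSize(long blockSize, long fileSize) {
       long maxSize = blockSize;
       if (fileSize > blockSize) {
         maxSize = fileSize;
       }
       return maxSize;
     }

     public static void main(String[] args) {
       long oldBlockSize = 1024;             // previous hard-coded value
       long newBlockSize = 1024 * 1024 * 2;  // 2 MB, the value in this patch
       long successFileSize = 512;           // a success file is tiny

       // Old value: the result stays below the 1 MB minimum, so the
       // NameNode rejects the write.
       System.out.println(
           getMaxOfBlockAndFileSize(oldBlockSize, successFileSize) < HDFS_MIN_BLOCK_SIZE);

       // New value: the result always clears the 1 MB floor, regardless
       // of how small the success file is.
       System.out.println(
           getMaxOfBlockAndFileSize(newBlockSize, successFileSize) >= HDFS_MIN_BLOCK_SIZE);
     }
   }
   ```

   Since the file written here is always tiny, the block-size argument alone determines the outcome, which is why bumping it past the 1M floor fixes the error.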




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

