[ 
https://issues.apache.org/jira/browse/CARBONDATA-306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15570706#comment-15570706
 ] 

ASF GitHub Bot commented on CARBONDATA-306:
-------------------------------------------

Github user Zhangshunyu commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/230#discussion_r83139950
  
    --- Diff: processing/src/main/java/org/apache/carbondata/processing/store/writer/AbstractFactDataWriter.java ---
    @@ -252,6 +252,9 @@ private static long getMaxOfBlockAndFileSize(long blockSize, long fileSize) {
         if (remainder > 0) {
           maxSize = maxSize + HDFS_CHECKSUM_LENGTH - remainder;
         }
    +    LOGGER.info("The configured block size is " + blockSize + " byte, " +
    --- End diff ---
    
    @Jay357089 I think it is a good idea to extract ConvertByteToReadable as 
a method, since it can be reused in many log messages, especially when 
analyzing performance.
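    The helper proposed above might look like the following sketch. The 
method name comes from the discussion; the class name, unit thresholds, 
and output format are assumptions for illustration, not CarbonData code.

```java
// Hypothetical sketch of the ConvertByteToReadable helper discussed above;
// the class name, thresholds, and formatting are assumptions.
public final class ByteUtil {
  private static final long KB = 1024L;
  private static final long MB = KB * 1024L;
  private static final long GB = MB * 1024L;

  private ByteUtil() { }

  // Converts a raw byte count into a human-readable string for log output.
  public static String convertByteToReadable(long byteCount) {
    if (byteCount >= GB) {
      return byteCount / GB + " GB";
    } else if (byteCount >= MB) {
      return byteCount / MB + " MB";
    } else if (byteCount >= KB) {
      return byteCount / KB + " KB";
    }
    return byteCount + " Byte";
  }

  public static void main(String[] args) {
    // Example: a 1 GB table block size logged in readable form
    System.out.println("The configured block size is "
        + convertByteToReadable(1073741824L));
  }
}
```

    A shared helper like this keeps the byte-to-readable conversion 
consistent across the load-path and writer logs instead of duplicating 
the arithmetic at each call site.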


> block size info should be shown in Desc Formatted and executor log
> ------------------------------------------------------------------
>
>                 Key: CARBONDATA-306
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-306
>             Project: CarbonData
>          Issue Type: Improvement
>            Reporter: Jay
>            Priority: Minor
>
> when running the desc formatted command, the table block size should be 
> shown, as well as in the executor log when running the load command



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
