[ 
https://issues.apache.org/jira/browse/KYLIN-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968205#comment-16968205
 ] 

ZhouKang commented on KYLIN-4185:
---------------------------------

In our production environment, the estimated sizes and real sizes are as follows:

cube_sample, segment: 20191102000000_20191103000000

estimated size: 1736731 MB   HTable size: 190701 MB

cube_sample, segment: 20191001000000_20191008000000 (this is a merged segment)
estimated size: 49251 MB   HTable size: 1153314 MB
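For reference, the mismatch implied by the numbers above can be quantified as an estimate/actual ratio (a minimal sketch using only the sizes reported in this comment):

```python
# Estimated size vs. actual HTable size (MB), as reported above.
segments = {
    "20191102000000_20191103000000": (1736731, 190701),   # over-estimated
    "20191001000000_20191008000000": (49251, 1153314),    # merged segment, under-estimated
}

for seg, (estimated, actual) in segments.items():
    ratio = estimated / actual
    print(f"{seg}: estimate/actual = {ratio:.2f}x")
```

The first segment is over-estimated by roughly 9x, while the merged segment is under-estimated by more than 20x, which matches both failure modes described in the issue below.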

> CubeStatsReader estimate wrong cube size
> ----------------------------------------
>
>                 Key: KYLIN-4185
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4185
>             Project: Kylin
>          Issue Type: Improvement
>            Reporter: ZhouKang
>            Priority: Major
>
> CubeStatsReader estimates the wrong cube size, which causes a number of problems.
> When the estimated size is much larger than the real size, the Spark 
> application's executor count is small and the cube build step takes a long 
> time; sometimes the step fails due to the large dataset.
> When the estimated size is much smaller than the real size, the cuboid files 
> in HDFS are small, and there are many cuboid files.
>  
> In our production environment, both situations have happened.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)