First, try to give your reducers more memory (by updating the Hadoop
configuration; you can look up the settings for your distribution) and then
resume the job.
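
For example, a rough sketch of the standard MapReduce memory settings you
would raise (the values are only illustrative, and the right place to put
them depends on your setup, e.g. Kylin's job configuration file or your
cluster's mapred-site.xml):

    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>8192</value>   <!-- reducer container size, illustrative -->
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx6g</value> <!-- reducer JVM heap, keep below container size -->
    </property>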

If the error is still there, it indicates there may be one or more
high-cardinality dimension columns in your cube. I'm working on an
enhancement related to this issue, planned for release in v1.2:
https://issues.apache.org/jira/browse/KYLIN-980

"kylin.job.mapreduce.default.reduce.input.mb" is not related to your
issue; it is only used to estimate the number of reducers when building the
N-D cuboids.
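
Roughly speaking (this is a simplification, not the exact build-engine
logic), the setting only controls reducer parallelism, something like:

    estimated reducers ~= estimated cuboid data size in MB
                          / kylin.job.mapreduce.default.reduce.input.mb

so raising it to 4096 just gives you fewer, larger reduce tasks; it does
not give each reducer more memory.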

On 11/26/15, 10:30 AM, "诸葛亮" <[email protected]> wrote:

>As the title describes, it seems the cube generates a lot of data; the
>error appeared in the reduce stage. I added
>kylin.job.mapreduce.default.reduce.input.mb=4096 but it doesn't work.
>What should I do?
>Thanks for the help,
>xingchen
