Hi,

Thanks for the answer. My wording may have been misleading: when I said
"small", "medium" and "large" cubes, I was not talking about the advanced
setting that can be chosen when creating the cube, but about the cube's
actual size in terms of data (a few days, some weeks, or several months
of data).

Anyway, I tried building the cube with that property set to the "small
cube" profile, and I still see the exact same issue.
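
For reference, the three profiles you describe below should correspond to
region-cut entries in kylin.properties along these lines. I am going from
memory of the v0.7.x conf file, so treat the property names as an
assumption and verify them against the actual installation:

    # kylin.properties -- target region size (GB) per cube capacity profile
    # (property names assumed from v0.7.x; values taken from the reply quoted below)
    kylin.hbase.region.cut.small=10
    kylin.hbase.region.cut.medium=20
    kylin.hbase.region.cut.large=100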

On Tue, Jun 23, 2015 at 3:14 AM, hongbin ma <[email protected]> wrote:

> hi alex,
>
> "small", "medium" and "large" are three profiles which differ from each
> other in terms of hbase region split size.
>
> if I'm remembering correctly, the small profile will generate an htable
> with 10G per region, 20G for the medium profile, and 100G for the large
> profile (in v0.7.1).
>
> if your cube is not very big, you should stay with the small profile; if
> your cube is very large, you should consider the other two. In your case,
> when you used the medium or large profile, your region servers seemed
> unprepared for such large regions (maybe short of memory). please take a
> second look at:
>
> [LoadIncrementalHFiles-1] mapreduce.LoadIncrementalHFiles: HFile at
> hdfs://xxx:8020/tmp/kylin-b68bac23-ea82-4471-bc70-c144991fbbe0/smallCube/hfile/F1/e6ed300e4d9c41938d1ee474536c4fbf
> no longer fits inside a single region. Splitting...
>
