A very big cube (for example, one larger than 1TB) can block HBase's normal
operation during the BulkLoad step, because the job writes too much data to
HDFS. This interferes with Kylin metadata operations and queries, especially
since a cube merge job may write several TB to HBase in a single MR job.

Has anyone run into this problem?
