Yes, HBase is running on a different HDFS, and during a very big BulkLoad
that HDFS gets saturated (network or disk I/O), which in turn blocks HBase.
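
To make that concrete, here is a minimal sketch (not HBase's or Kylin's
actual code; the cluster URIs and paths are hypothetical) of why the
same-HDFS vs. cross-HDFS distinction matters for a bulk load:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class BulkLoadFsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical paths: the MR job's HFile output vs. HBase's root dir.
        Path hfileDir = new Path("hdfs://cluster-a/kylin/hfiles/cube1");
        Path hbaseRoot = new Path("hdfs://cluster-b/hbase");

        FileSystem srcFs = hfileDir.getFileSystem(conf);
        FileSystem dstFs = hbaseRoot.getFileSystem(conf);

        if (srcFs.getUri().equals(dstFs.getUri())) {
            // Same HDFS: a rename is a NameNode metadata-only operation and
            // finishes in seconds, no matter how many TB the HFiles hold.
            srcFs.rename(hfileDir, new Path(hbaseRoot, "staging/cube1"));
        } else {
            // Different HDFS: every byte must cross the wire; for a 1 TB+
            // cube this saturates network/disk I/O on the HBase side.
            FileUtil.copy(srcFs, hfileDir, dstFs,
                    new Path(hbaseRoot, "staging/cube1"),
                    false /* deleteSource */, conf);
        }
    }
}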

2017-10-15 9:38 GMT+08:00 ShaoFeng Shi <shaofeng...@apache.org>:

> The HFiles are generated in the "Convert to HFile" step, which is an MR
> job, so it won't block HBase's normal operations.
>
> The HBase BulkLoad on HDFS should be very fast (on the order of seconds),
> as it is just a move operation.
>
> For your case, is your HBase running on an HDFS other than the
> default one?
>
>
> 2017-10-13 16:16 GMT+08:00 yu feng <olaptes...@gmail.com>:
>
> > A very big cube, e.g. one whose size is bigger than 1 TB, will block
> > HBase's normal operations, such as Kylin metadata operations and
> > queries, during the BulkLoad job, because the job writes too much data
> > to HDFS. This is especially bad when the cube's merge job writes N TB
> > to HBase in a single MR job.
> >
> > Has anyone encountered this problem?
> >
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>
