I see; I don't have a good idea for this right now.
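
One rough workaround I can think of (unverified, and not a real bandwidth
limit): cap how many reduce tasks of the "Convert to HFile" MR job run at
once, so the job writes to HDFS more slowly. On Hadoop 2.7+ there is
"mapreduce.job.running.reduce.limit", which you could try passing through a
cube-level MR config override (the override prefix below is from my memory
of Kylin's docs, please double-check before using):

    kylin.engine.mr.config-override.mapreduce.job.running.reduce.limit=10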

2017-10-16 11:43 GMT+08:00 yu feng <[email protected]>:

> Yes, we have configured it. The problem is that the BulkLoad job (an MR
> job) writes too much data to HBase's HDFS, which affects HBase's normal
> use. It would be great if there were some way to limit the bandwidth of
> the BulkLoad job. Do you have any good ideas?
>
> 2017-10-16 11:32 GMT+08:00 ShaoFeng Shi <[email protected]>:
>
> > Did you configure "kylin.hbase.cluster.fs", pointing to your HBase HDFS?
> >
> > Check this blog for more:
> > https://kylin.apache.org/blog/2016/06/10/standalone-hbase-cluster/
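> >
> > For example, in kylin.properties (the NameNode address below is just a
> > placeholder for your HBase cluster's HDFS):
> >
> >   kylin.hbase.cluster.fs=hdfs://hbase-cluster-nn:8020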
> >
> > 2017-10-16 9:51 GMT+08:00 yu feng <[email protected]>:
> >
> > > Yes, HBase is running on another HDFS, and during a very big BulkLoad
> > > that HDFS gets saturated (network or disk I/O), which blocks HBase.
> > >
> > > 2017-10-15 9:38 GMT+08:00 ShaoFeng Shi <[email protected]>:
> > >
> > > > The generation of HFiles happens in the "Convert to HFile" step,
> > > > which is an MR job and won't block HBase's normal tasks.
> > > >
> > > > The HBase BulkLoad on HDFS should be very fast (seconds), as it is
> > > > just a move operation.
> > > >
> > > > In your case, is your HBase running on another HDFS, other than the
> > > > default HDFS?
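> > > >
> > > > For reference, the bulk load step essentially runs HBase's
> > > > LoadIncrementalHFiles (completebulkload) tool against the prepared
> > > > HFiles, roughly like this (path and table name are placeholders):
> > > >
> > > >   hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
> > > >     /kylin/hfile-output KYLIN_CUBE_TABLE
> > > >
> > > > When the HFiles are already on the same HDFS as HBase, this boils
> > > > down to a rename, which is why it finishes in seconds.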
> > > >
> > > >
> > > > 2017-10-13 16:16 GMT+08:00 yu feng <[email protected]>:
> > > >
> > > > > A very big cube, e.g., one bigger than 1 TB, will block HBase's
> > > > > normal operations, such as Kylin metadata operations and queries,
> > > > > when doing the BulkLoad job, because the job writes too much data
> > > > > to HDFS. It is especially bad when the cube's merge job may write
> > > > > N TB to HBase in one MR job.
> > > > >
> > > > > Has anyone met this problem?
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > >
> > > > Shaofeng Shi 史少锋
> > > >
> > >
> >
> >
> >
> > --
> > Best regards,
> >
> > Shaofeng Shi 史少锋
> >
>



-- 
Best regards,

Shaofeng Shi 史少锋
