Hi Austin,

Can you share your table description? Also, was the table empty? Last, what
does your bulk data look like? I mean, how many files? One per region? Are
you 100% sure? Have you used the HFile tool to validate the splits and keys
of your files?
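
For reference, here is a rough sketch of the kind of check I mean (the path
and table name below are only placeholders for your own):

    # Print the metadata of one bulk-load file, including its firstKey/lastKey
    hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f \
        hdfs:///staging/mytable/cf/hfile-00000

    # Then compare those keys against the table's region start keys,
    # e.g. by scanning hbase:meta from the shell
    hbase shell
    scan 'hbase:meta', {STARTROW => 'mytable', LIMIT => 10}

If every file's first and last key fall inside a single region, the load
should not need to split or rewrite anything.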

JMS

2018-07-17 14:12 GMT-04:00 Austin Heyne <[email protected]>:

> Hi all,
>
> I'm trying to bulk load a large amount of data into HBase. The bulk load
> succeeds, but then HBase starts running compactions. My input files are
> typically ~5-6 GB each and there are over 3k of them. I've used the same
> table splits for the bulk ingest and the bulk load, so there should be no
> reason for HBase to run any compactions. However, I'm seeing it first
> compact the HFiles into 25+ GB files and then into 200+ GB files; I didn't
> let it run any longer than that. Additionally, I've talked with another
> coworker who tried this process in the past, and he experienced the same
> thing, eventually giving up on the feature. My attempts have been on HBase
> 1.4.2. Does anyone know why HBase insists on running these compactions, or
> how I can stop them? They are essentially breaking the feature for us.
>
> Thanks,
>
> --
> Austin L. Heyne
>
>
