Hi,
Yes, in fact I want to avoid minor compactions as much as possible, because during a long ingest any minor compaction significantly slows the ingest rate. But since memory is limited, compaction is unavoidable, so my goal is to control it as much as possible and structure the code accordingly.

Thanks,
Hai

________________________________
From: [email protected] <[email protected]>
Sent: Thursday, July 30, 2015 7:12 PM
To: [email protected]
Subject: RE: How to control Minor Compaction by programming

It sounds like you want to try not to minor compact during your data ingest. Is that correct?

From: William Slacum [mailto:[email protected]]
Sent: Thursday, July 30, 2015 8:10 PM
To: [email protected]
Subject: Re: How to control Minor Compaction by programming

See http://accumulo.apache.org/1.5/apidocs/org/apache/accumulo/core/client/admin/TableOperations.html#flush%28java.lang.String,%20org.apache.hadoop.io.Text,%20org.apache.hadoop.io.Text,%20boolean%29 for minor compacting (aka "flushing") a table via the API.

On Thu, Jul 30, 2015 at 5:52 PM, Hai Pham <[email protected]> wrote:

Hi,

Please tell me whether there is any way to initiate or control minor compaction programmatically (not from the shell). My situation is that when I ingest a large dataset using the BatchWriter, minor compactions are triggered uncontrollably. The flush() method on BatchWriter does not seem to serve this purpose. I also tried tuning the parameters described in the documentation, but that did not help much.

Also, can you please explain the numbers 0, 1.0, 2.0, ... shown in the web monitoring charts denoting the level of Minor Compaction and Major Compaction?

Thank you!
Hai Pham
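
For reference, a minimal sketch of calling the TableOperations.flush API linked above, assuming an Accumulo 1.5-style Java client; the instance name ("myInstance"), ZooKeeper host, credentials, and table name ("mytable") are placeholders, not values from this thread:

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.hadoop.io.Text;

public class FlushExample {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details for illustration only.
    ZooKeeperInstance instance = new ZooKeeperInstance("myInstance", "zkhost1:2181");
    Connector conn = instance.getConnector("user", new PasswordToken("secret"));

    // Flush (minor compact) everything currently held in memory for the table.
    // Passing null for the start and end rows covers the whole table; the
    // final 'true' blocks until the flush has completed.
    conn.tableOperations().flush("mytable", null, null, true);

    // A row range can also be given to flush only part of the table.
    conn.tableOperations().flush("mytable", new Text("a"), new Text("m"), true);
  }
}

Note this flushes data already written to the tablet servers, which is different from BatchWriter.flush(), which only pushes the client-side buffer over to the servers.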
