[
https://issues.apache.org/jira/browse/HBASE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Vladimir Rodionov updated HBASE-13884:
--------------------------------------
Description:
http://hbase.apache.org/book.html#_compaction
{quote}
Being Stuck
When the MemStore gets too large, it needs to flush its contents to a
StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles
files, so the MemStore needs to wait for the number of StoreFiles to be reduced
by one or more compactions. However, if the MemStore grows larger than
hbase.hregion.memstore.flush.size, it is not able to flush its contents to a
StoreFile. If the MemStore is too large and the number of StoreFiles is also
too high, the algorithm is said to be "stuck". The compaction algorithm checks
for this "stuck" situation and provides mechanisms to alleviate it.
{quote}
According to the source code, this "stuck" situation has nothing to do with
MemStore size.
{code}
// Stuck and not compacting enough (estimate). It is not guaranteed that we will be
// able to compact more if stuck and compacting, because ratio policy excludes some
// non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
    >= storeConfigInfo.getBlockingFileCount();
{code}
If the number of store files that are not yet being compacted, plus (potentially) one
more for the file the in-flight compaction will produce, is greater than or equal to
the blocking file count, we say that compaction may be stuck.
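For illustration, here is a minimal, self-contained sketch of the same check with
made-up numbers; the variable names and values below are hypothetical, and only the
arithmetic mirrors the snippet above:
{code}
// Hypothetical, self-contained sketch of the mayBeStuck estimate (made-up values).
public class MayBeStuckExample {
  public static void main(String[] args) {
    int candidateFiles = 12;     // store files eligible for compaction
    int filesCompacting = 3;     // files already selected by an in-flight compaction
    long blockingFileCount = 10; // e.g. hbase.hstore.blockingStoreFiles

    // At most one new file is expected out of the in-flight compaction.
    int futureFiles = (filesCompacting == 0) ? 0 : 1;

    // 12 - 3 + 1 = 10 >= 10, so this store would be flagged as possibly stuck.
    boolean mayBeStuck = (candidateFiles - filesCompacting + futureFiles)
        >= blockingFileCount;
    System.out.println("mayBeStuck = " + mayBeStuck);
  }
}
{code}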
was:
http://hbase.apache.org/book.html#_compaction
{quote}
Being Stuck
When the MemStore gets too large, it needs to flush its contents to a
StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles
files, so the MemStore needs to wait for the number of StoreFiles to be reduced
by one or more compactions. However, if the MemStore grows larger than
hbase.hregion.memstore.flush.size, it is not able to flush its contents to a
StoreFile. If the MemStore is too large and the number of StoreFiles is also
too high, the algorithm is said to be "stuck". The compaction algorithm checks
for this "stuck" situation and provides mechanisms to alleviate it.
{quote}
According to the source code, this "stuck" situation has nothing to do with
MemStore size.
{code}
// Stuck and not compacting enough (estimate). It is not guaranteed that we will be
// able to compact more if stuck and compacting, because ratio policy excludes some
// non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
    >= storeConfigInfo.getBlockingFileCount();
{code}
> Fix Compactions section in HBase book
> -------------------------------------
>
> Key: HBASE-13884
> URL: https://issues.apache.org/jira/browse/HBASE-13884
> Project: HBase
> Issue Type: Bug
> Components: documentation
> Reporter: Vladimir Rodionov
> Priority: Trivial
>
> http://hbase.apache.org/book.html#_compaction
> {quote}
> Being Stuck
> When the MemStore gets too large, it needs to flush its contents to a
> StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles
> files, so the MemStore needs to wait for the number of StoreFiles to be
> reduced by one or more compactions. However, if the MemStore grows larger
> than hbase.hregion.memstore.flush.size, it is not able to flush its contents
> to a StoreFile. If the MemStore is too large and the number of StoreFiles is
> also too high, the algorithm is said to be "stuck". The compaction algorithm
> checks for this "stuck" situation and provides mechanisms to alleviate it.
> {quote}
> According to the source code, this "stuck" situation has nothing to do with
> MemStore size.
> {code}
> // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
> // able to compact more if stuck and compacting, because ratio policy excludes some
> // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
> int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
> boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
>     >= storeConfigInfo.getBlockingFileCount();
> {code}
> If the number of store files that are not yet being compacted, plus (potentially) one
> more for the file the in-flight compaction will produce, is greater than or equal to
> the blocking file count, we say that compaction may be stuck.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)