[
https://issues.apache.org/jira/browse/HBASE-15314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HBASE-15314:
--------------------------
Release Note:
The following patch adds a couple of features without making any big changes to
the existing BucketCache allocator. This approach introduces the notion of
'segmentation' (in other words, the underlying IOEngine can be made up of
non-contiguous segments). Two methods are added to expose this information to
the BucketCache allocator:
boolean IOEngine#isSegmented()
boolean IOEngine#doesAllocationCrossSegments(long offset, long len)
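As an illustration of what these two methods convey, here is a minimal sketch
of a file-backed engine split across several equally sized files. This is not
the committed implementation; the class and field names (MultiFileIOEngine,
numFiles, segmentSize) are assumptions made for the example.

public class MultiFileIOEngine {

  private final int numFiles;      // number of backing files (segments)
  private final long segmentSize;  // size of each equally sized segment

  public MultiFileIOEngine(int numFiles, long segmentSize) {
    this.numFiles = numFiles;
    this.segmentSize = segmentSize;
  }

  // The engine is segmented whenever it spans more than one backing file.
  public boolean isSegmented() {
    return numFiles > 1;
  }

  // An allocation crosses segments when its first and last bytes fall into
  // different backing files.
  public boolean doesAllocationCrossSegments(long offset, long len) {
    return (offset / segmentSize) != ((offset + len - 1) / segmentSize);
  }
}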
BucketCache calls these methods to determine whether a 'contiguous' allocation
of a particular block size can occur. It does this by checking whether
doesAllocationCrossSegments(offset, len) returns true. If an allocation would
cross a segment boundary, another call to allocate is made for the same block;
the first allocation is wasted (it remains marked as allocated). The worst case
is one wasted 'largest block' per backing file. If an allocation fails for any
reason, all allocated blocks (including wasted ones) are freed again and become
available for subsequent allocation requests. This is very similar to a 'JBOD'
configuration (there is no striping of any kind).
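A minimal sketch of the retry-and-cleanup behaviour described above, reusing
the MultiFileIOEngine sketch from earlier. The allocateBlock() and free()
helpers are hypothetical stand-ins for the allocator's internals, not the real
BucketCache/BucketAllocator API.

import java.util.ArrayList;
import java.util.List;

abstract class BucketAllocatorSketch {

  // Hypothetical primitives: grab one block of the given size, or release the
  // block at the given offset.
  abstract long allocateBlock(long blockSize) throws Exception;
  abstract void free(long offset);

  long allocateContiguous(MultiFileIOEngine engine, long blockSize)
      throws Exception {
    List<Long> wasted = new ArrayList<>();
    try {
      long offset = allocateBlock(blockSize);
      // Retry while the block straddles a segment boundary; each straddling
      // block stays marked as allocated (wasted) until cleanup.
      while (engine.isSegmented()
          && engine.doesAllocationCrossSegments(offset, blockSize)) {
        wasted.add(offset);
        offset = allocateBlock(blockSize);
      }
      return offset;
    } catch (Exception e) {
      // If any allocation fails, free everything grabbed so far, including
      // the wasted blocks, so later requests can reuse the space.
      for (long o : wasted) {
        free(o);
      }
      throw e;
    }
  }
}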
There are also some additional fixes:
1) The 'total size' is aligned with the 'total aggregate file size'. This is
done by a ceiling division, rounding 'totalSize' up so that each segment is
equally sized (a worked example follows after this list):
segmentSize = ceil(totalSize / numFiles)
totalSize = segmentSize * numFiles
2) All failed allocations, including the extra ones made due to crossing
segments, are cleaned up.
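As a worked example of the rounding above (the numbers are purely
illustrative):

// Requesting 10 GB spread over 3 backing files.
long totalSize = 10L * 1024 * 1024 * 1024;                // 10737418240 bytes
int numFiles = 3;
long segmentSize = (totalSize + numFiles - 1) / numFiles; // ceil -> 3579139414
totalSize = segmentSize * numFiles;                       // 10737418242 bytes

The adjusted total is 2 bytes larger than requested, so that all three backing
files end up equally sized.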
> Allow more than one backing file in bucketcache
> -----------------------------------------------
>
> Key: HBASE-15314
> URL: https://issues.apache.org/jira/browse/HBASE-15314
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: stack
> Assignee: Aaron Tokhy
> Attachments: HBASE-15314-v2.patch, HBASE-15314-v3.patch,
> HBASE-15314.patch
>
>
> Allow bucketcache to use more than just one backing file: e.g. the chassis
> has more than one SSD in it.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)