[
https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Zheng Hu updated HBASE-21937:
-----------------------------
Status: Patch Available (was: Open)
> Make the Compression#decompress can accept ByteBuff as input
> -------------------------------------------------------------
>
> Key: HBASE-21937
> URL: https://issues.apache.org/jira/browse/HBASE-21937
> Project: HBase
> Issue Type: Sub-task
> Reporter: Zheng Hu
> Assignee: Zheng Hu
> Priority: Major
> Attachments: HBASE-21937.HBASE-21879.v1.patch
>
>
> When decompressing a compressed block, we currently also allocate a
> HeapByteBuffer for the unpacked block. Instead, we should allocate a
> ByteBuff from the global ByteBuffAllocator. Skimming the code, the key
> point is that we need a decompress interface that accepts a ByteBuff,
> not the current one:
> {code}
> # Compression.java
> public static void decompress(byte[] dest, int destOffset,
>     InputStream bufferedBoundedStream, int compressedSize,
>     int uncompressedSize, Compression.Algorithm compressAlgo)
>     throws IOException {
>   //...
> }
> {code}
> This is not very high priority; let me make the uncompressed blocks
> off-heap first.
> In HBASE-22005, I ignored these unit tests:
> 1. TestLoadAndSwitchEncodeOnDisk;
> 2. TestHFileBlock#testPreviousOffset;
> We need to resolve this issue and make those UTs pass again.
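To illustrate the direction (this is not HBase's actual Compression API), here is a minimal sketch of decompressing straight into an off-heap destination buffer rather than a heap `byte[]`, using plain `java.util.zip`; the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class OffHeapDecompress {
  // Hypothetical shape of a ByteBuff-style decompress: write the
  // uncompressed bytes into a caller-supplied (possibly direct/off-heap)
  // destination buffer instead of allocating a new byte[] internally.
  static int decompress(ByteBuffer dest, byte[] src, int srcLen)
      throws DataFormatException {
    Inflater inflater = new Inflater();
    try {
      inflater.setInput(src, 0, srcLen);
      return inflater.inflate(dest); // Inflater#inflate(ByteBuffer), JDK 11+
    } finally {
      inflater.end();
    }
  }

  public static void main(String[] args) throws Exception {
    byte[] original = "hbase block payload".getBytes("UTF-8");

    // Compress on heap for the demo.
    Deflater deflater = new Deflater();
    deflater.setInput(original);
    deflater.finish();
    byte[] compressed = new byte[original.length + 64];
    int compressedLen = deflater.deflate(compressed);
    deflater.end();

    // Decompress straight into an off-heap buffer: no intermediate
    // HeapByteBuffer allocation for the unpacked block.
    ByteBuffer dest = ByteBuffer.allocateDirect(original.length);
    int n = decompress(dest, compressed, compressedLen);
    dest.flip();
    byte[] roundTrip = new byte[n];
    dest.get(roundTrip);
    System.out.println(new String(roundTrip, "UTF-8")); // prints "hbase block payload"
  }
}
```

In the real patch the destination would be a ByteBuff handed out by the ByteBuffAllocator; the point of the sketch is only that the decompressor writes into a caller-owned buffer.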
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)