[
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15860875#comment-15860875
]
Hadoop QA commented on HBASE-17623:
-----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s {color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s {color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 105m 7s {color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 4s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12851989/HBASE-17623.v0.patch |
| JIRA Issue | HBASE-17623 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux acf7d0e24815 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 1b041a4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javadoc | https://builds.apache.org/job/PreCommit-HBASE-Build/5661/artifact/patchprocess/diff-javadoc-javadoc-hbase-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5661/testReport/ |
| modules | C: hbase-common hbase-server U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5661/console |
| Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |
This message was automatically generated.
> Reuse the bytes array when building the hfile block
> ---------------------------------------------------
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
> Issue Type: Improvement
> Reporter: ChiaPing Tsai
> Assignee: ChiaPing Tsai
> Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should each maintain a byte array that can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
>     this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream,
>         baosInMemory.getBuffer(), blockType);
>     blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
>     onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
>     onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>       onDiskBlockBytesWithHeader.length,
>       fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>       onDiskBlockBytesWithHeader.length + numBytes,
>       uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
>     putHeader(uncompressedBlockBytesWithHeader, 0,
>         onDiskBlockBytesWithHeader.length + numBytes,
>         uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
>     onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>       onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>       onDiskChecksum, 0, fileContext.getChecksumType(), fileContext.getBytesPerChecksum());
> }{code}
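As an illustration of the reuse pattern described in the two improvements above (not the actual patch; the class and member names below, such as ReusableBlockBuffer and copyForCache, are hypothetical), a minimal standalone sketch would keep one growable byte array that is reused for every block and only pay for a defensive copy on the cache-on-write path:
{code:title=ReusableBlockBuffer.java (illustrative sketch)|borderStyle=solid}
import java.util.Arrays;

/**
 * Sketch of the buffer-reuse idea: one backing array is reused across blocks;
 * a fresh copy is made only when the block must be handed to the block cache.
 */
public class ReusableBlockBuffer {
  private byte[] buf = new byte[64 * 1024]; // shared buffer, reused across blocks
  private int len;                          // number of valid bytes in buf

  /** Start a new block; the backing array is kept and reused. */
  public void reset() {
    len = 0;
  }

  /** Append bytes, growing the shared buffer only when it is too small. */
  public void write(byte[] src, int off, int count) {
    if (len + count > buf.length) {
      buf = Arrays.copyOf(buf, Math.max(buf.length * 2, len + count));
    }
    System.arraycopy(src, off, buf, len, count);
    len += count;
  }

  /** Disk-only writers can work directly against the shared buffer. */
  public byte[] sharedBuffer() {
    return buf;
  }

  public int length() {
    return len;
  }

  /** Only the cache-on-write path pays for a defensive copy. */
  public byte[] copyForCache() {
    return Arrays.copyOf(buf, len);
  }
}
{code}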
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)