[
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063828#comment-15063828
]
Hadoop QA commented on HBASE-14940:
-----------------------------------
{color:red}-1 overall{color}.
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/16922//testReport/
Release Findbugs (version 2.0.3) warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/16922//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors:
https://builds.apache.org/job/PreCommit-HBASE-Build/16922//artifact/patchprocess/checkstyle-aggregate.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/16922//console
This message is automatically generated.
> Make our unsafe based ops more safe
> -----------------------------------
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
> Issue Type: Bug
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch
>
>
> Thanks for the nice findings [~ikeda]
> This JIRA solves three issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun.misc.Unsafe based reads and writes only if the Unsafe package is
> available and the underlying platform supports unaligned access, but we were
> missing the second check (see the first sketch below).
> 2. Java NIO does a chunk-based copy when using Unsafe copyMemory, with a max
> chunk size of 1 MB. As the comments in Bits.java explain, "A limit is imposed
> to allow for safepoint polling during a large copy". We will do the same
> (see the second sketch below).
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this and
> do better (see the third sketch below).
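A minimal sketch of the availability/unaligned check in point 1. This is not the actual patch; it only illustrates one common approach of probing java.nio.Bits.unaligned() reflectively and falling back to an os.arch whitelist. The class and method names here are illustrative.
{code:java}
import java.lang.reflect.Method;

public final class UnalignedCheck {
  /** Best-effort check for unaligned-access support (illustrative only). */
  static boolean unaligned() {
    try {
      // Ask the JDK's own internal check, if reflection allows it.
      Class<?> bits = Class.forName("java.nio.Bits");
      Method m = bits.getDeclaredMethod("unaligned");
      m.setAccessible(true);
      return (boolean) m.invoke(null);
    } catch (Throwable t) {
      // Conservative fallback: architectures widely known to allow unaligned access.
      String arch = System.getProperty("os.arch", "");
      return arch.equals("i386") || arch.equals("x86")
          || arch.equals("amd64") || arch.equals("x86_64");
    }
  }

  public static void main(String[] args) {
    System.out.println("unaligned access supported: " + unaligned());
  }
}
{code}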
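A minimal sketch of the chunked copy in point 2, assuming sun.misc.Unsafe can be obtained via the "theUnsafe" field. The 1 MB threshold mirrors the limit described in Bits.java; the helper and constant names are illustrative, not the ones in the attached patch.
{code:java}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public final class ChunkedCopy {
  private static final Unsafe UNSAFE = getUnsafe();
  // Copy at most 1 MB per copyMemory call so the JVM can reach a safepoint
  // between chunks during a large copy (same rationale as java.nio.Bits).
  private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;

  private static Unsafe getUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (Exception e) {
      throw new RuntimeException("sun.misc.Unsafe not available", e);
    }
  }

  /** Copy 'length' bytes from src[srcOffset] to dest[destOffset] in 1 MB chunks. */
  static void copy(byte[] src, int srcOffset, byte[] dest, int destOffset, long length) {
    long base = UNSAFE.arrayBaseOffset(byte[].class);
    long srcAddr = base + srcOffset;
    long destAddr = base + destOffset;
    while (length > 0) {
      long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
      UNSAFE.copyMemory(src, srcAddr, dest, destAddr, size);
      length -= size;
      srcAddr += size;
      destAddr += size;
    }
  }

  public static void main(String[] args) {
    byte[] src = new byte[4 * 1024 * 1024];
    byte[] dest = new byte[src.length];
    for (int i = 0; i < src.length; i++) src[i] = (byte) i;
    copy(src, 0, dest, 0, src.length);
    System.out.println("copied ok: " + (dest[src.length - 1] == src[src.length - 1]));
  }
}
{code}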
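A minimal sketch of point 3: copying between ByteBuffers (on-heap or direct) with a single bulk put on duplicates instead of a per-byte loop, so neither buffer's position or limit is disturbed. Again, this is illustrative and not the ByteBufferUtils code itself.
{code:java}
import java.nio.ByteBuffer;

public final class BufferCopy {
  /**
   * Copy 'length' bytes from 'src' at 'srcOffset' into 'dest' at 'destOffset'
   * without touching either buffer's position/limit and without a byte loop.
   */
  static void copy(ByteBuffer src, int srcOffset, ByteBuffer dest, int destOffset, int length) {
    ByteBuffer srcDup = src.duplicate();
    srcDup.position(srcOffset).limit(srcOffset + length);
    ByteBuffer destDup = dest.duplicate();
    destDup.position(destOffset);
    destDup.put(srcDup);  // single bulk put instead of copying byte by byte
  }

  public static void main(String[] args) {
    ByteBuffer src = ByteBuffer.allocateDirect(16);
    for (int i = 0; i < 16; i++) src.put(i, (byte) i);
    ByteBuffer dest = ByteBuffer.allocateDirect(16);
    copy(src, 4, dest, 0, 8);
    System.out.println("dest[0]=" + dest.get(0) + ", dest[7]=" + dest.get(7)); // 4 and 11
  }
}
{code}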
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)