[ https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069143#comment-15069143 ]

Hadoop QA commented on HBASE-14940:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779175/HBASE-14940_v2.patch
  against master branch at commit 1af98f255132ef6716a1f6ba1d8d71a36ea38840.
  ATTACHMENT ID: 12779175

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

    {color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

    {color:green}+1 core tests{color}.  The patch passed unit tests in .

    {color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16987//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16987//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16987//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/16987//console

This message is automatically generated.

> Make our unsafe based ops more safe
> -----------------------------------
>
>                 Key: HBASE-14940
>                 URL: https://issues.apache.org/jira/browse/HBASE-14940
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>         Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This JIRA solves three issues with Unsafe operations and ByteBufferUtils 
> (illustrative sketches of each point follow the quoted description below):
> 1. We can do sun.misc.Unsafe based reads and writes only if the Unsafe 
> package is available and the underlying platform has unaligned-access 
> capability, but we were missing the second check.
> 2. Java NIO does a chunk-based copy when using Unsafe copyMemory; the max 
> chunk size is 1 MB. As the comment in Bits.java explains, "A limit is imposed 
> to allow for safepoint polling during a large copy". We are going to do it 
> the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are 
> off heap, we were doing byte-by-byte operations (read/copy). We can avoid 
> this and do it in a better way.
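
For point 1, the following is a minimal, hypothetical sketch (not the actual HBASE-14940 patch) of gating Unsafe usage on both conditions. The unaligned probe reflects on the JDK-internal java.nio.Bits.unaligned() method, which is an assumption about JDK 8-era internals; class and method names here are illustrative only.

{code:java}
import java.lang.reflect.Method;

// Hypothetical helper, not from the patch: enable unsafe read/write paths only
// when sun.misc.Unsafe is loadable AND the platform reports unaligned access.
public final class UnsafeAccessCheck {

  /** True only if sun.misc.Unsafe can be loaded on this JVM. */
  static boolean unsafeAvailable() {
    try {
      Class.forName("sun.misc.Unsafe");
      return true;
    } catch (Throwable t) {
      return false;
    }
  }

  /** True only if the platform reports unaligned-access capability (JDK-internal probe). */
  static boolean unalignedSupported() {
    try {
      Class<?> bits = Class.forName("java.nio.Bits");
      Method unaligned = bits.getDeclaredMethod("unaligned");
      unaligned.setAccessible(true);
      return (Boolean) unaligned.invoke(null);
    } catch (Throwable t) {
      return false; // be conservative if the probe fails
    }
  }

  /** Both conditions must hold; the second one was the missing check. */
  static final boolean USE_UNSAFE = unsafeAvailable() && unalignedSupported();

  private UnsafeAccessCheck() {}
}
{code}

The idea is that on platforms without unaligned-access support, callers fall back to the pure-Java paths even though the Unsafe class itself is present.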
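
For point 2, here is a sketch of a chunked Unsafe.copyMemory loop that caps each native call at 1 MB, in the spirit of the Bits.java comment about safepoint polling. It assumes sun.misc.Unsafe is accessible; the class and method names are hypothetical and not taken from the patch.

{code:java}
import java.lang.reflect.Field;

import sun.misc.Unsafe;

// Hypothetical helper, not from the patch: copy in chunks of at most 1 MB so
// the JVM can reach a safepoint between Unsafe.copyMemory calls.
public final class ChunkedCopy {

  // Same 1 MB threshold that the Bits.java comment describes.
  private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;

  private static final Unsafe UNSAFE = loadUnsafe();

  private static Unsafe loadUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (Exception e) {
      throw new RuntimeException("sun.misc.Unsafe not accessible", e);
    }
  }

  /** Copies length bytes from src to dst, at most 1 MB per copyMemory call. */
  static void copyMemory(Object src, long srcOffset, Object dst, long dstOffset,
      long length) {
    while (length > 0) {
      long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
      UNSAFE.copyMemory(src, srcOffset, dst, dstOffset, size);
      length -= size;
      srcOffset += size;
      dstOffset += size;
    }
  }

  private ChunkedCopy() {}
}
{code}

The chunking does not change the result of the copy; it only bounds how long any single native copy can keep the thread from reaching a safepoint.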
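
For point 3, one way to avoid a byte-by-byte loop when Unsafe is unavailable and the buffers are off-heap is a positioned duplicate plus a single bulk put(). This is a hypothetical sketch, not the patch's ByteBufferUtils code; it assumes the destination has enough room for the requested range.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical helper, not from the patch: bulk-copy a range between two
// ByteBuffers (heap or direct) without touching one byte at a time.
public final class BufferCopy {

  /** Copies length bytes from src[srcPos..) to dst[dstPos..) in one bulk transfer. */
  static void copy(ByteBuffer src, int srcPos, ByteBuffer dst, int dstPos, int length) {
    // Duplicates share the same content but have independent position/limit,
    // so the caller's buffers are left untouched.
    ByteBuffer srcDup = src.duplicate();
    ByteBuffer dstDup = dst.duplicate();
    srcDup.position(srcPos).limit(srcPos + length);
    dstDup.position(dstPos);
    dstDup.put(srcDup); // one bulk transfer instead of a byte-by-byte loop
  }

  private BufferCopy() {}
}
{code}

Because only duplicates are repositioned, utility methods written this way can take absolute positions without disturbing the state of the buffers their callers hold.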



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
