[ https://issues.apache.org/jira/browse/HADOOP-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13129281#comment-13129281 ]

Hadoop QA commented on HADOOP-7651:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12499462/7651-22.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/300//console

This message is automatically generated.
                
> Hadoop Record compiler generates Java files with erroneous byte-array lengths 
> for fields trailing a 'ustring' field
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-7651
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7651
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: record
>    Affects Versions: 0.20.203.0, 0.21.0, 0.22.0, 0.23.0, 0.24.0
>            Reporter: Hung-chih Yang
>            Assignee: Milind Bhandarkar
>              Labels: hadoop
>             Fix For: 0.20.204.1, 0.21.1, 0.22.0, 0.23.0, 0.24.0
>
>         Attachments: 7651-22.patch, 7651-trunk.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Hadoop Record compiler produces Java files from a DDL file. If a DDL file has 
> a class that contains a 'ustring' field, then the generated 'compareRaw()' 
> function for this record computes the wrong length of the remaining bytes 
> after handling the buffer segment for a 'ustring' field.
> Below is a line in a generated 'compareRaw()' function for a record class 
> with a 'ustring' field:
>           s1+=i1; s2+=i2; l1-=i1; l1-=i2;
> This line should be corrected by changing the last 'l1' to 'l2':
>           s1+=i1; s2+=i2; l1-=i1; l2-=i2;
> To fix this bug, one should correct the 'genCompareBytes()' function in the 
> 'JString.java' file of the package 'org.apache.hadoop.record.compiler' by 
> changing the line below to the ensuing line. There is only one digit 
> difference:
>       cb.append("s1+=i1; s2+=i2; l1-=i1; l1-=i2;\n");
>       cb.append("s1+=i1; s2+=i2; l1-=i1; l2-=i2;\n");
> This bug is serious: it will always crash deserialization of a record with a 
> simple definition like the one below:
> class PairStringDouble {
>   ustring first;
>   double  second;
> }
> Deserializing a record of this class will throw an exception because the 
> 'second' field does not have 8 bytes left for a double value, due to the 
> erroneous length computation for the remaining buffer.
> Both Hadoop 0.20 and 0.21 have this bug.
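
The length bookkeeping described above can be illustrated outside Hadoop. The sketch below is a hypothetical, self-contained simulation (the names s1/s2, l1/l2, i1/i2 follow the generated code, but the buffer sizes are made up for the example); it shows how the buggy line leaves l2 too large and over-shrinks l1:

```java
// Hypothetical sketch, not Hadoop code: simulates the offset/length
// bookkeeping in a generated compareRaw() for a record with a
// 'ustring' field followed by a 'double' field.
public class CompareRawSketch {
    public static void main(String[] args) {
        int i1 = 5, i2 = 7;            // serialized ustring segment lengths in each buffer
        int l1 = i1 + 8, l2 = i2 + 8;  // remaining bytes: ustring segment + 8-byte double
        int s1 = 0, s2 = 0;            // current offsets into the two buffers

        // Buggy generated line: s1+=i1; s2+=i2; l1-=i1; l1-=i2;
        int buggyL1 = l1 - i1 - i2;    // l1 is decremented twice ...
        int buggyL2 = l2;              // ... while l2 still counts the consumed ustring

        // Corrected line: s1+=i1; s2+=i2; l1-=i1; l2-=i2;
        s1 += i1; s2 += i2;
        l1 -= i1; l2 -= i2;

        System.out.println("correct remaining: l1=" + l1 + " l2=" + l2);
        System.out.println("buggy remaining:   l1=" + buggyL1 + " l2=" + buggyL2);
        // Correct bookkeeping leaves exactly 8 bytes in each buffer for the
        // double; the buggy version reports l1=1 and l2=15, so the double
        // comparison mis-sizes the remaining buffer (l1 can even go negative
        // when the other buffer's ustring is long enough).
    }
}
```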

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira