[ https://issues.apache.org/jira/browse/PHOENIX-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162363#comment-17162363 ]

Hadoop QA commented on PHOENIX-6013:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/13008105/PHOENIX-6013-4.x.patch
  against the 4.x branch at commit 218a71c07259a6ac4660335bf3636325e1ee138d.
  ATTACHMENT ID: 13008105

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +    public void testIndexMultiColumnsMultiIndexesVariableLengthNullLiteralsRVCOffset() throws SQLException {
+        String createIndex1 = "CREATE INDEX IF NOT EXISTS " + longKeyIndex1Name + " ON " + longKeyTableName + " (k2 ,v1, k4)";
+        String sql0 = "SELECT  v1,v3 FROM " + longKeyTableName + " LIMIT 3 OFFSET (k1 ,k2, k3, k4, k5, k6)=('0','1',null,null,null,'2')";
+        try(Statement statement = conn.createStatement(); ResultSet rs = statement.executeQuery(sql0)) {
+            byte[] startRow = phoenixResultSet.getStatement().getQueryPlan().getScans().get(0).get(0).getStartRow();
+            byte[] expectedRow = new byte[] {'0',0,'1',0,0,0,0,'2',1}; //note trailing 1 not 0 due to phoenix internal inconsistency
+        String sql = "SELECT  k2,v1,k4 FROM " + longKeyTableName + " LIMIT 3 OFFSET (k2,v1,k4,k1,k3,k5,k6)=('2',null,'4','1','3','5','6')";
+        try(Statement statement = conn.createStatement(); ResultSet rs = statement.executeQuery(sql)) {
+            byte[] startRow = phoenixResultSet.getStatement().getQueryPlan().getScans().get(0).get(0).getStartRow();
+            byte[] expectedRow = new byte[] {'2',0,0,'4',0,'1',0,'3',0,'5',0,'6',1}; //note trailing 1 not 0 due to phoenix internal inconsistency
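
For context, a minimal sketch of how such a scan-boundary check could be asserted in a test (illustrative only, not part of the patch; assertArrayEquals is JUnit, and conn/phoenixResultSet mirror the identifiers in the quoted patch lines):

{code:java}
// Illustrative sketch: verify that the RVC offset produces the expected scan
// start row. The expected bytes are the first example quoted above
// (trailing 1, per the test's own comment).
byte[] expectedRow = new byte[] {'0', 0, '1', 0, 0, 0, 0, '2', 1};
byte[] startRow = phoenixResultSet.getStatement().getQueryPlan()
        .getScans().get(0).get(0).getStartRow();
org.junit.Assert.assertArrayEquals(expectedRow, startRow);
{code}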

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DynamicColumnIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/4066//testReport/
Code Coverage results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/4066//artifact/phoenix-core/target/site/jacoco/index.html
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/4066//console

This message is automatically generated.

> RVC Offset does not handle coerced literal nulls properly.
> ----------------------------------------------------------
>
>                 Key: PHOENIX-6013
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6013
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.15.0
>            Reporter: Daniel Wong
>            Assignee: Daniel Wong
>            Priority: Major
>             Fix For: 4.16.0
>
>         Attachments: PHOENIX-6013-4.x.patch
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As part of the query rewrite paths, an offset literal of the form A = null may be
> rewritten to A IS NULL; the offset code sanity-checks against an equality comparison
> op, which no longer holds after this rewrite.
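
For illustration, a query shape that exercises this path (a hedged sketch adapted from the test SQL quoted in the QA report above; conn and longKeyTableName are assumed to be set up elsewhere, and the column names follow the patch):

{code:java}
// Illustrative only: the null literals inside the RVC offset are what get
// rewritten from "= null" to "IS NULL" during compilation, tripping the
// equality sanity check described in the issue.
String sql = "SELECT v1, v3 FROM " + longKeyTableName
        + " LIMIT 3 OFFSET (k1, k2, k3, k4, k5, k6) = ('0','1',null,null,null,'2')";
try (Statement statement = conn.createStatement();
     ResultSet rs = statement.executeQuery(sql)) {
    while (rs.next()) {
        // consume rows; before the fix, compiling the offset fails
    }
}
{code}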



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
