[
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780496#comment-16780496
]
Hadoop QA commented on PHOENIX-5171:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12960565/PHOENIX-5171-master.patch
against master branch at commit 52e24fce2d8d335cb261891927e7b0704c0c5256.
ATTACHMENT ID: 12960565
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings (more than the master's current 0 warnings).
{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+ conn.createStatement().execute("upsert into aiolos values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2)");
+ conn.createStatement().execute("upsert into aiolos values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2)");
+ // If we can't found data from ptr for the rest of the slots, then skip current key.
+ // The reason that we don't store NULL at the end of the row key for the variable data
{color:red}-1 core tests{color}. The patch failed these unit tests:
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TableDDLPermissionsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ChangePermissionsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
Test results:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/2367//testReport/
Release audit warnings:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/2367//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/2367//console
This message is automatically generated.
> SkipScan incorrectly filters composite primary key whose trailing part is NULL
> ------------------------------------------------------------------------------
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.0.0, 4.14.1
> Reporter: jaanai
> Assignee: jaanai
> Priority: Critical
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND '2019-02-19' AND tab = 'channel_agg' AND channel = 'A004';
> {code}
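> As an aside, since the QA bot flags the lack of new tests, the repro above could be driven through JDBC roughly as sketched below. This is only a hedged illustration: the connection URL ({{jdbc:phoenix:localhost:2181}}), the class name, and the expected single-row result are assumptions inferred from the statements above, not code taken from the attached patch.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class SkipScanTrailingNullRepro {
>     public static void main(String[] args) throws Exception {
>         // Assumed local ZooKeeper quorum; adjust to the actual cluster.
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
>              Statement stmt = conn.createStatement()) {
>             stmt.execute("create table if not exists aiolos(vdate varchar, tab varchar,"
>                 + " dev tinyint not null, app varchar, target varchar, channel varchar,"
>                 + " one varchar, two varchar, count1 integer, count2 integer,"
>                 + " CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two))");
>             stmt.execute("upsert into aiolos values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2)");
>             stmt.execute("upsert into aiolos values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2)");
>             conn.commit(); // Phoenix connections are not auto-commit by default.
>
>             // Only the first upserted row has channel = 'A004', so exactly one row
>             // ('2018-02-14') is expected; without the fix the scan instead fails with
>             // the IllegalStateException from SkipScanFilter shown below.
>             try (ResultSet rs = stmt.executeQuery(
>                     "SELECT vdate FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10'"
>                     + " AND '2019-02-19' AND tab = 'channel_agg' AND channel = 'A004'")) {
>                 while (rs.next()) {
>                     System.out.println(rs.getString(1));
>                 }
>             }
>         }
>     }
> }
> {code}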
> Running the SELECT throws the following exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after previous hint (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0, kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>     at org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>     at org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>     at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>     at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>     at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>     at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>     ... 8 more
> {code}
> The cause is that a row which should have been skipped is added to nextCellHintMap. Because NULL is not stored at the end of the row key for trailing variable-length columns, such keys are shorter than the remaining slot positions expect; they should simply be skipped when filterKeyValue is invoked, since they sort before any key that could satisfy the remaining slots.
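> To illustrate the ordering problem, here is a minimal sketch (not code from the patch or from SkipScanFilter) assuming the row-key layout visible in the exception above: a 0x00 separator follows each variable-length PK column, an interior NULL contributes just its bare separator, and trailing NULLs are not stored at all.
> {code:java}
> import java.nio.charset.StandardCharsets;
>
> public class TrailingNullKeyOrder {
>     private static final byte SEP = 0x00; // assumed separator after variable-length PK columns
>
>     public static void main(String[] args) {
>         // Row with channel = 'A004': app and target are interior NULLs, so their
>         // separators remain; the trailing NULL columns (one, two) are dropped.
>         byte[] keyWithChannel = concat(bytes("2018-02-14"), sep(), bytes("channel_agg"), sep(),
>                 new byte[] {(byte) 0x82},  // dev = 2, as it appears in the exception key
>                 sep(), sep(), bytes("A004"));
>
>         // Row where channel is also NULL: everything after dev is a trailing NULL, so the
>         // key ends right after the tinyint (matching 2018-02-14\x00channel_agg\x00\x82 above).
>         byte[] keyAllTrailingNull = concat(bytes("2018-02-14"), sep(), bytes("channel_agg"), sep(),
>                 new byte[] {(byte) 0x82});
>
>         // The shorter key is a prefix of the longer one and therefore sorts first, so the
>         // scanner reaches it before the matching row. Since it has no bytes left for the
>         // channel slot, no forward-moving hint can be built from it; the filter has to
>         // skip the cell instead of emitting a hint that does not advance.
>         System.out.println(compareUnsigned(keyAllTrailingNull, keyWithChannel)); // prints a value < 0
>     }
>
>     private static byte[] bytes(String s) { return s.getBytes(StandardCharsets.UTF_8); }
>     private static byte[] sep() { return new byte[] {SEP}; }
>
>     private static byte[] concat(byte[]... parts) {
>         int len = 0;
>         for (byte[] p : parts) len += p.length;
>         byte[] out = new byte[len];
>         int off = 0;
>         for (byte[] p : parts) { System.arraycopy(p, 0, out, off, p.length); off += p.length; }
>         return out;
>     }
>
>     // Lexicographic comparison over unsigned bytes, the same order HBase row keys use.
>     private static int compareUnsigned(byte[] a, byte[] b) {
>         for (int i = 0, n = Math.min(a.length, b.length); i < n; i++) {
>             int d = (a[i] & 0xFF) - (b[i] & 0xFF);
>             if (d != 0) return d;
>         }
>         return a.length - b.length;
>     }
> }
> {code}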
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)