[
https://issues.apache.org/jira/browse/PHOENIX-5535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956645#comment-16956645
]
Hadoop QA commented on PHOENIX-5535:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12983675/PHOENIX-5535.master.002.patch
against master branch at commit b4471384bcdfbf421e1625960860361ecfaeaf6b.
ATTACHMENT ID: 12983675
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+ private void setEveryNthRowWithNull(int nrows, int nthRowNull, PreparedStatement stmt) throws Exception {
+ // This tests the cases where a column having a null value is overwritten with a not null value and vice versa;
+ IndexTool indexTool = runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0, new String[0]);
+ assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+ long actualRowCount = IndexScrutiny.scrutinizeIndex(conn, dataTableFullName, indexTableFullName);
+ indexTool = runIndexTool(directApi, useSnapshot, schemaName, dataTableName, indexTableName, null, 0, new String[0]);
+ assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
+ actualRowCount = IndexScrutiny.scrutinizeIndex(conn, dataTableFullName, indexTableFullName);
+ // By default, we'd use a FirstKeyOnly filter as nothing else needs to be projected for count(*).
+ // However, in this case, we need to project all of the data columns that contribute to the index.
{color:red}-1 core tests{color}. The patch failed these unit tests:
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PermissionsCacheIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PermissionNSEnabledIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PermissionNSDisabledIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpgradeIT
Test results:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/3054//testReport/
Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/3054//console
This message is automatically generated.
> Index rebuilds via UngroupedAggregateRegionObserver should replay delete
> markers
> --------------------------------------------------------------------------------
>
> Key: PHOENIX-5535
> URL: https://issues.apache.org/jira/browse/PHOENIX-5535
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.0.0, 4.14.3
> Reporter: Kadir OZDEMIR
> Assignee: Kadir OZDEMIR
> Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5535.4.x-HBase-1.5.001.patch,
> PHOENIX-5535.master.001.patch, PHOENIX-5535.master.002.patch
>
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> Currently, index rebuilds for global index tables are done on the server side.
> The Phoenix client generates an aggregate plan using ServerBuildIndexCompiler to
> scan every data table row on the server side. This compiler sets the scan
> attributes so that the row mutations scanned by
> UngroupedAggregateRegionObserver are replayed on the data table in a way that
> rebuilds the index table rows. During this replay, data table row updates are
> skipped and only index table rows are updated.
> Phoenix allows column entries to have null values. A null value is
> represented by an HBase column delete marker, which means that an index rebuild
> must replay these delete markers along with the put mutations. To do that,
> ServerBuildIndexCompiler should use raw scans, but it currently uses
> regular scans. This leads to incorrect index rebuilds when null values are used.
> A simple test, using a data table with one global index whose covered column
> can take a null value, is sufficient to reproduce the problem.
> # Create a data table with columns a, b, and c, where a is the primary key
> and c can be null
> # Write one row with non-null values
> # Overwrite the covered column with null (i.e., set c to null)
> # Create an index on the table where b is the secondary key and c is the
> covered column
> # Rebuild the index
> # Dump the index table
> The index table row should have a null value for the covered column.
> However, it still has the non-null value written at step 2.
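The replay failure described above can be illustrated with a small, self-contained toy model. This is plain Java, not Phoenix or HBase code; the class, method names, and the DELETE_MARKER sentinel are invented for illustration. It models one column's versioned cells: a rebuild replaying only the put mutations it can see (no delete markers) surfaces the stale non-null value, while a replay that also sees the delete marker correctly produces null.

```java
import java.util.Collections;
import java.util.TreeMap;

// Toy model (not actual Phoenix/HBase code) of one column's versioned cells,
// showing why an index rebuild must replay delete markers.
public class DeleteMarkerReplay {
    // Sentinel standing in for an HBase column delete marker.
    static final String DELETE_MARKER = "<DELETE>";

    // timestamp -> cell value, ordered newest-first.
    static TreeMap<Long, String> cells = new TreeMap<>(Collections.reverseOrder());

    // Replay that never sees delete markers (a regular, non-raw scan):
    // the newest visible *put* wins, so the stale non-null value survives.
    static String replayWithoutDeleteMarkers() {
        for (var e : cells.entrySet()) {
            if (!DELETE_MARKER.equals(e.getValue())) {
                return e.getValue();
            }
        }
        return null;
    }

    // Replay that sees delete markers (a raw scan): the newest mutation wins,
    // so the delete marker correctly yields a null covered-column value.
    static String replayWithDeleteMarkers() {
        var newest = cells.firstEntry();
        return DELETE_MARKER.equals(newest.getValue()) ? null : newest.getValue();
    }

    public static void main(String[] args) {
        cells.put(1L, "v1");          // step 2: write a non-null value
        cells.put(2L, DELETE_MARKER); // step 3: overwrite with null (delete marker)
        System.out.println("without delete markers: " + replayWithoutDeleteMarkers()); // stale "v1"
        System.out.println("with delete markers: " + replayWithDeleteMarkers());       // null
    }
}
```

In real HBase, delete markers become visible to a scanner via a raw scan (Scan.setRaw(true)), which is why the description argues ServerBuildIndexCompiler should use raw scans for rebuilds.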
--
This message was sent by Atlassian Jira
(v8.3.4#803005)