[
https://issues.apache.org/jira/browse/PHOENIX-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134826#comment-16134826
]
Hadoop QA commented on PHOENIX-4109:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12882818/PHOENIX-4109.patch
against master branch at commit a0f47c2bec568ebcfca6be2fe8e0fd3af9c01a60.
ATTACHMENT ID: 12882818
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include
any new or modified tests.
Please justify why no new tests are needed for this
patch.
Also please list what manual steps were performed to
verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated
56 warning messages.
{color:red}-1 release audit{color}. The applied patch generated 3 release
audit warnings (more than the master's current 0 warnings).
{color:red}-1 lineLengths{color}. The patch introduces the following lines
longer than 100:
+ private static boolean hasDisabledIndex(PMetaData metaCache, PTableKey key) throws TableNotFoundException {
+ conn.createStatement().execute("CREATE TABLE " + fullTableName + "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR) COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+ conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + fullTableName + " (v1, v2)");
+ HTableInterface metaTable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
+ conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('b','bb', '11')");
+ conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('a','ccc','0')");
+ conn.createStatement().execute("CREATE TABLE " + fullTableName + "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+ conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + fullTableName + " (v1, v2) INCLUDE (v3)");
+ conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES('a','a','0','x')");
+ try (HTableInterface metaTable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES)) {
{color:red}-1 core tests{color}. The patch failed these unit tests:
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT
Test results:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1280//testReport/
Release audit warnings:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1280//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1280//artifact/patchprocess/patchJavadocWarnings.txt
Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1280//console
This message is automatically generated.
> Ensure mutations are processed in batches with same time stamp during partial rebuild
> -------------------------------------------------------------------------------------
>
> Key: PHOENIX-4109
> URL: https://issues.apache.org/jira/browse/PHOENIX-4109
> Project: Phoenix
> Issue Type: Bug
> Reporter: James Taylor
> Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4109_4.x-HBase-0.98.patch, PHOENIX-4109.patch
>
>
> Our check for needing to sort cells by timestamp in secondary index code was
> incomplete. We need to sort and run each mutation with a unique timestamp in
> a separate batch whenever we're rebuilding the index.
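The batching idea described above can be sketched as follows. This is a minimal, self-contained illustration only, assuming simplified stand-in types: the `Mutation` class and `batchByTimestamp` method below are hypothetical and are not Phoenix's actual index-rebuild classes. The point it shows is grouping mutations by timestamp so that each distinct timestamp is replayed as its own batch, in ascending time order.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

public class TimestampBatcher {
    // Hypothetical stand-in for a mutation with a cell timestamp;
    // not Phoenix's actual Mutation type.
    static class Mutation {
        final String row;
        final long timestamp;
        Mutation(String row, long timestamp) {
            this.row = row;
            this.timestamp = timestamp;
        }
    }

    // Sort mutations by timestamp and split them into one batch per
    // distinct timestamp. TreeMap keeps the keys in ascending order,
    // so batches come out oldest-first.
    static List<List<Mutation>> batchByTimestamp(List<Mutation> mutations) {
        TreeMap<Long, List<Mutation>> byTs = new TreeMap<>();
        for (Mutation m : mutations) {
            byTs.computeIfAbsent(m.timestamp, ts -> new ArrayList<>()).add(m);
        }
        return new ArrayList<>(byTs.values());
    }

    public static void main(String[] args) {
        List<Mutation> input = Arrays.asList(
                new Mutation("a", 200L),
                new Mutation("b", 100L),
                new Mutation("c", 100L));
        List<List<Mutation>> batches = batchByTimestamp(input);
        // Two distinct timestamps -> two batches; ts=100 batch first.
        System.out.println(batches.size());
        System.out.println(batches.get(0).get(0).timestamp);
    }
}
```

Each resulting batch would then be applied separately, so index rows written during the rebuild carry the correct per-timestamp ordering rather than being collapsed into one mixed-timestamp write.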
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)