[ https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026394#comment-17026394 ]

Hadoop QA commented on PHOENIX-5140:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12985815/PHOENIX-5140-master-v2.patch
  against master branch at commit b15f0196bb8d139caa1a93ac4ac8dca37c04c024.
  ATTACHMENT ID: 12985815

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +        TableName physicalTableName = SchemaUtil.getPhysicalTableName(dataTableFullName.getBytes(), true);
    +                    "CREATE LOCAL INDEX %s ON %s (VAL1, VAL2) ASYNC ", indexTableName, dataTableFullName));
    +            ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + dataTableFullName + " WHERE VAL1 = 3 AND VAL2 = 4");
    +            IndexTool indexTool = IndexToolIT.runIndexTool(true, false, schemaName, dataTableName, indexTableName, null, 0, new String[0]);
    +            assertEquals(NROWS, indexTool.getJob().getCounters().findCounter(INPUT_RECORDS).getValue());
    +            long actualRowCount = IndexScrutiny.scrutinizeIndex(conn, dataTableFullName, indexTableFullName);
    +            ResultSet rs1 = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + dataTableFullName + " WHERE VAL1 = 3 AND VAL2 = 4");
    +        TableName physicalTableName = SchemaUtil.getPhysicalTableName(dataTableFullName.getBytes(), true);
    +                    "CREATE LOCAL INDEX %s ON %s (VAL1) INCLUDE (VAL2) ASYNC ", indexTableName, dataTableFullName));
    +            ResultSet rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + dataTableFullName + " WHERE VAL1 = 3 AND VAL2 = 4");

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexScrutinyToolIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexExtendedIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.util.IndexScrutinyIT

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3359//testReport/
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3359//console

This message is automatically generated.

> TableNotFoundException occurs when we create local asynchronous index
> ---------------------------------------------------------------------
>
>                 Key: PHOENIX-5140
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5140
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0
>         Environment: HDP 3.0.0.0, HBase 2.0.0, Phoenix 5.0.0, Hadoop 3.1.0
>            Reporter: MariaCarrie
>            Assignee: dan zheng
>            Priority: Major
>              Labels: IndexTool, localIndex, tableUndefined
>         Attachments: PHOENIX-5140-master-v1.patch, PHOENIX-5140-master-v2.patch
>
>   Original Estimate: 48h
>          Time Spent: 20m
>  Remaining Estimate: 47h 40m
>
> First, I create the table and insert the data (a JDBC sketch of these setup steps follows the quoted report):
> create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key, name varchar, age varchar);
> upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');
> Then I create the asynchronous local index:
> create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 (name) ASYNC;
> Because Kerberos is enabled, I need to kinit with the HBase principal first, then execute the following command (a programmatic equivalent is sketched below the quoted report):
> HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar /usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2
> But I got the following error:
> Error: java.lang.RuntimeException: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
> at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
> at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
> at org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
> at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
> at org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
> at org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
> at org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
> ... 9 more
> I can query the table and have access to it; these statements all work fine:
> select * from DMP.DMP_INDEX_TEST2;
> select * from DMP.TMP_INDEX_DMP_TEST2;
> drop table DMP.DMP_INDEX_TEST2;
> But why does my MR job fail with this error? Any suggestions?
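For readers who want to reproduce the quoted setup outside sqlline, here is a minimal JDBC sketch of those steps. The connection URL (jdbc:phoenix:localhost:2181) and the explicit commit are illustrative assumptions; the SQL statements themselves are taken from the report.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical reproduction driver; the ZooKeeper quorum in the JDBC URL is an assumption.
public class LocalAsyncIndexRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Data table and one sample row, as in the report
            stmt.execute("CREATE TABLE DMP.DMP_INDEX_TEST2 "
                    + "(ID VARCHAR NOT NULL PRIMARY KEY, NAME VARCHAR, AGE VARCHAR)");
            stmt.executeUpdate("UPSERT INTO DMP.DMP_INDEX_TEST2 VALUES ('id01', 'name01', 'age01')");
            conn.commit(); // Phoenix connections do not auto-commit by default

            // ASYNC only registers the local index; IndexTool must populate it afterwards
            stmt.execute("CREATE LOCAL INDEX IF NOT EXISTS TMP_INDEX_DMP_TEST2 "
                    + "ON DMP.DMP_INDEX_TEST2 (NAME) ASYNC");
        }
    }
}
{code}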
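The hadoop jar step can also be driven programmatically. The sketch below is an illustrative equivalent, not the reporter's code: it runs org.apache.phoenix.mapreduce.index.IndexTool through ToolRunner with the same arguments as the quoted command. Loading hbase-site.xml via HBaseConfiguration.create() stands in for HADOOP_CLASSPATH="/etc/hbase/conf", and a prior kinit is still assumed for the Kerberos login.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.index.IndexTool;

// Hypothetical driver; the argument values mirror the reporter's command line.
public class RunIndexToolRepro {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath (the CLI run achieves the
        // same thing with HADOOP_CLASSPATH="/etc/hbase/conf").
        Configuration conf = HBaseConfiguration.create();
        int exitCode = ToolRunner.run(conf, new IndexTool(), new String[] {
                "--schema", "DMP",
                "--data-table", "DMP_INDEX_TEST2",
                "--index-table", "TMP_INDEX_DMP_TEST2",
                "--output-path", "/hbase-backup2"
        });
        System.exit(exitCode);
    }
}
{code}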



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
