[
https://issues.apache.org/jira/browse/HIVE-17696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218228#comment-16218228
]
Hive QA commented on HIVE-17696:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893749/HIVE-17696.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11322 tests
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
(batchId=156)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi]
(batchId=110)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut
(batchId=205)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints
(batchId=222)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
(batchId=270)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes
(batchId=229)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7464/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7464/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7464/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12893749 - PreCommit-HIVE-Build
> Vectorized reader does not seem to be pushing down projection columns in
> certain code paths
> -------------------------------------------------------------------------------------------
>
> Key: HIVE-17696
> URL: https://issues.apache.org/jira/browse/HIVE-17696
> Project: Hive
> Issue Type: Sub-task
> Reporter: Vihang Karajgaonkar
> Assignee: Ferdinand Xu
> Attachments: HIVE-17696.2.patch, HIVE-17696.patch
>
>
> This is the code snippet from {{VectorizedParquetRecordReader.java}}
> {noformat}
>   MessageType tableSchema;
>   if (indexAccess) {
>     List<Integer> indexSequence = new ArrayList<>();
>     // Generates a sequence list of indexes
>     for (int i = 0; i < columnNamesList.size(); i++) {
>       indexSequence.add(i);
>     }
>     tableSchema = DataWritableReadSupport.getSchemaByIndex(fileSchema, columnNamesList,
>         indexSequence);
>   } else {
>     tableSchema = DataWritableReadSupport.getSchemaByName(fileSchema, columnNamesList,
>         columnTypesList);
>   }
>   indexColumnsWanted = ColumnProjectionUtils.getReadColumnIDs(configuration);
>   if (!ColumnProjectionUtils.isReadAllColumns(configuration) && !indexColumnsWanted.isEmpty()) {
>     requestedSchema = DataWritableReadSupport.getSchemaByIndex(tableSchema, columnNamesList,
>         indexColumnsWanted);
>   } else {
>     requestedSchema = fileSchema;
>   }
>   this.reader = new ParquetFileReader(
>       configuration, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
> {noformat}
> A couple of things to notice here:
> # Most of this code is duplicated from the {{DataWritableReadSupport.init()}} method.
> # The else branch passes in {{fileSchema}} instead of using {{tableSchema}} as we do in {{DataWritableReadSupport.init()}}. Does this cause projection columns to be missed when we read Parquet files? We should probably just reuse the {{ReadContext}} returned from {{DataWritableReadSupport.init()}} here.
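[Editorial note: a minimal, self-contained sketch of why the schema choice in the snippet above matters. All names here ({{ProjectionSketch}}, {{schemaByIndex}}, the example column lists) are hypothetical simplifications and not Hive APIs; {{schemaByIndex}} only mirrors the shape of {{DataWritableReadSupport.getSchemaByIndex}}. The point illustrated is that the wanted column indexes are positions in the *table* schema, so resolving them against a file schema whose column order differs can select the wrong columns.]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ProjectionSketch {

  // Hypothetical stand-in for schema-by-index projection: a schema is modeled
  // as an ordered list of column names, and projection picks the columns at
  // the requested positions.
  static List<String> schemaByIndex(List<String> schema, List<Integer> wanted) {
    List<String> projected = new ArrayList<>();
    for (int i : wanted) {
      projected.add(schema.get(i));
    }
    return projected;
  }

  public static void main(String[] args) {
    // Table schema (metastore column order) vs. file schema (order the
    // columns were actually written in this particular Parquet file).
    List<String> tableSchema = Arrays.asList("id", "name", "price");
    List<String> fileSchema  = Arrays.asList("name", "price", "id");

    // Suppose the reader asked for columns 0 and 2 of the table schema.
    List<Integer> indexColumnsWanted = Arrays.asList(0, 2);

    // Projecting against the table schema yields the intended columns...
    System.out.println(schemaByIndex(tableSchema, indexColumnsWanted)); // [id, price]

    // ...while the same indexes resolved against the file schema pick
    // different columns entirely.
    System.out.println(schemaByIndex(fileSchema, indexColumnsWanted)); // [name, id]
  }
}
```

This is only a toy model of index-based projection; the real code additionally carries column types and nested Parquet {{MessageType}} structure.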
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)