[
https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16773273#comment-16773273
]
Hive QA commented on HIVE-20079:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12959444/HIVE-20079.3.patch
{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 15790 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=191)
[infer_bucket_sort_num_buckets.q,gen_udf_example_add10.q,spark_explainuser_1.q,spark_use_ts_stats_for_mapjoin.q,orc_merge6.q,orc_merge5.q,bucketmapjoin6.q,spark_opt_shuffle_serde.q,temp_table_external.q,spark_dynamic_partition_pruning_6.q,dynamic_rdd_cache.q,auto_sortmerge_join_16.q,vector_outer_join3.q,spark_dynamic_partition_pruning_7.q,schemeAuthority.q,parallel_orderby.q,vector_outer_join1.q,load_hdfs_file_with_space_in_the_name.q,spark_dynamic_partition_pruning_recursive_mapjoin.q,spark_dynamic_partition_pruning_mapjoin_only.q]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_stats]
(batchId=48)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty
(batchId=230)
{noformat}
Test results:
https://builds.apache.org/job/PreCommit-HIVE-Build/16164/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16164/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16164/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12959444 - PreCommit-HIVE-Build
> Populate more accurate rawDataSize for parquet format
> -----------------------------------------------------
>
> Key: HIVE-20079
> URL: https://issues.apache.org/jira/browse/HIVE-20079
> Project: Hive
> Issue Type: Improvement
> Components: File Formats
> Affects Versions: 2.0.0
> Reporter: Aihua Xu
> Assignee: Antal Sinkovits
> Priority: Major
> Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch,
> HIVE-20079.3.patch
>
>
> Run the following queries and you will see that rawDataSize for the table is
> incorrectly reported as 4 (the number of fields rather than a byte size). We
> need to populate the correct data size so the data can be split properly.
> {noformat}
> SET hive.stats.autogather=true;
> CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET;
> INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1');
> DESC FORMATTED parquet_stats;
> {noformat}
> {noformat}
> Table Parameters:
> COLUMN_STATS_ACCURATE true
> numFiles 1
> numRows 2
> rawDataSize 4
> totalSize 373
> transient_lastDdlTime 1530660523
> {noformat}
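The bad stat in the quoted output matches the field count of the inserted data: 2 rows times 2 columns gives the reported rawDataSize of 4. A minimal Python sketch of that arithmetic, contrasted with a rough byte-based estimate (the per-type sizes of 4 bytes per int and 1 byte per string character are illustrative assumptions, not Hive's actual accounting):

```python
# Illustrative sketch (not Hive's actual accounting) of why rawDataSize == 4:
# the recorded stat is the field count (numRows x numColumns), not a byte size.
rows = [(0, "this is string 0"), (1, "string 1")]  # data from the repro INSERT
num_columns = 2  # schema: id int, str string

field_count = len(rows) * num_columns  # 2 rows x 2 columns = 4, the bad stat

# A byte-based estimate (assumed sizes: 4 bytes per int, 1 byte per string
# character) shows how far the field count is from a plausible raw data size:
byte_estimate = sum(4 + len(s) for _, s in rows)

print(field_count)    # 4
print(byte_estimate)  # 32
```

Under these assumptions the table's raw data is roughly 32 bytes, an order of magnitude above the reported 4, which is why splits computed from rawDataSize come out wrong.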
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)