[
https://issues.apache.org/jira/browse/HIVE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16628616#comment-16628616
]
Hive QA commented on HIVE-20604:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12941313/HIVE-20604.01.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14996 tests executed
*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=195)
[druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_data_using_job] (batchId=161)
org.apache.hive.jdbc.miniHS2.TestHs2ConnectionMetricsBinary.testOpenConnectionMetrics (batchId=256)
{noformat}
Test results:
https://builds.apache.org/job/PreCommit-HIVE-Build/14056/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14056/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14056/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12941313 - PreCommit-HIVE-Build
> Minor compaction disables ORC column stats
> ------------------------------------------
>
> Key: HIVE-20604
> URL: https://issues.apache.org/jira/browse/HIVE-20604
> Project: Hive
> Issue Type: Improvement
> Components: Transactions
> Affects Versions: 1.0.0
> Reporter: Eugene Koifman
> Assignee: Eugene Koifman
> Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20604.01.patch
>
>
> {noformat}
> @Override
> public org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter
>     getRawRecordWriter(Path path, Options options) throws IOException {
>   final Path filename = AcidUtils.createFilename(path, options);
>   final OrcFile.WriterOptions opts =
>       OrcFile.writerOptions(options.getTableProperties(), options.getConfiguration());
>   if (!options.isWritingBase()) {
>     opts.bufferSize(OrcRecordUpdater.DELTA_BUFFER_SIZE)
>         .stripeSize(OrcRecordUpdater.DELTA_STRIPE_SIZE)
>         .blockPadding(false)
>         .compress(CompressionKind.NONE)
>         .rowIndexStride(0);
>   }
> {noformat}
> {{rowIndexStride(0)}} makes {{StripeStatistics.getColumnStatistics()}} return
> objects, but with meaningless values: for example, the min/max of
> {{IntegerColumnStatistics}} are set to MIN_LONG/MAX_LONG.
> This interferes with the ability to infer the minimum ROW_ID for a split, and
> also produces inefficient files.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)