[ https://issues.apache.org/jira/browse/HIVE-17935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256625#comment-16256625 ]
Hive QA commented on HIVE-17935:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12898098/HIVE-17935.7.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7876/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7876/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7876/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-11-17 08:13:54.847
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7876/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-11-17 08:13:54.850
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 987d130 HIVE-16756 : Vectorization: LongColModuloLongColumn throws java.lang.ArithmeticException: / by zero (Vihang Karajgaonkar, reviewed by Matt McCline)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 987d130 HIVE-16756 : Vectorization: LongColModuloLongColumn throws java.lang.ArithmeticException: / by zero (Vihang Karajgaonkar, reviewed by Matt McCline)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-11-17 08:13:59.272
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: patch failed: ql/src/test/results/clientpositive/llap/ppd_union_view.q.out:258
error: ql/src/test/results/clientpositive/llap/ppd_union_view.q.out: patch does not apply
error: patch failed: ql/src/test/results/clientpositive/llap/sysdb.q.out:2190
error: ql/src/test/results/clientpositive/llap/sysdb.q.out: patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12898098 - PreCommit-HIVE-Build
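For clarity, the -1 above is a patch-application failure rather than a test failure: the golden-file updates in HIVE-17935.7.patch (ppd_union_view.q.out and sysdb.q.out) no longer match current master, so the precommit script gives up after trying strip levels p0, p1, and p2. A minimal way to reproduce the same check locally before re-uploading a patch is sketched below; the patch location and the exact behaviour of smart-apply-patch.sh are assumptions, and git apply --check is used here only as a stand-in for whatever the script does internally.
{noformat}
# Sketch only: verify whether a JIRA attachment still applies to current master.
# The patch path (../HIVE-17935.7.patch) is illustrative; download it from the
# attachment URL shown above and adjust the path to match.
cd apache-github-source-source
git fetch origin
git reset --hard origin/master

# Mirror the precommit check by trying strip levels p0, p1 and p2.
# --check only tests applicability; it does not modify the working tree.
applied=""
for p in 0 1 2; do
  if git apply --check -p"$p" ../HIVE-17935.7.patch; then
    echo "patch applies cleanly with -p$p"
    applied="yes"
    break
  fi
done

# If no level works, regenerate the conflicting .q.out files against current
# master and produce a fresh diff before attaching a new patch revision.
[[ -z "$applied" ]] && echo "patch needs to be rebased" >&2
{noformat}
The usual remediation is to rebase the patch, regenerate the two conflicting query-output files, and attach a new patch revision so the precommit job can actually run the tests.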
> Turn on hive.optimize.sort.dynamic.partition by default
> -------------------------------------------------------
>
> Key: HIVE-17935
> URL: https://issues.apache.org/jira/browse/HIVE-17935
> Project: Hive
> Issue Type: Bug
> Reporter: Andrew Sherman
> Assignee: Andrew Sherman
> Attachments: HIVE-17935.1.patch, HIVE-17935.2.patch,
> HIVE-17935.3.patch, HIVE-17935.4.patch, HIVE-17935.5.patch,
> HIVE-17935.6.patch, HIVE-17935.7.patch
>
>
> The config option hive.optimize.sort.dynamic.partition is an optimization for
> Hive’s dynamic partitioning feature. It was originally implemented in
> [HIVE-6455|https://issues.apache.org/jira/browse/HIVE-6455]. With this
> optimization, the dynamic partition columns and bucketing columns (in case of
> bucketed tables) are sorted before being fed to the reducers. Since the
> partitioning and bucketing columns are sorted, each reducer can keep only one
> record writer open at any time, thereby reducing the memory pressure on the
> reducers. There were some early problems with this optimization and it was
> disabled by default in HiveConf in
> [HIVE-8151|https://issues.apache.org/jira/browse/HIVE-8151]. Since then
> setting hive.optimize.sort.dynamic.partition=true has been used to solve
> problems where dynamic partitioning produces (1) too many small files on
> HDFS, which is bad for the cluster and can increase overhead for future Hive
> queries over those partitions, and (2) OOM issues in the map tasks because
> each task is trying to simultaneously write to 100 different files.
> It now seems that the feature is probably mature enough that it can be
> enabled by default.
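For readers unfamiliar with the flag, the description above boils down to one session-level setting that changes how dynamic-partition inserts are planned. A minimal sketch of exercising it from the command line follows; the table and column names (sales, sales_staging, amount, customer_id, sale_date) are invented for illustration and are not from the issue, and the nonstrict mode setting is only needed because every partition column in the example is dynamic.
{noformat}
# Illustrative session only; table/column names are assumptions.
hive -e "
SET hive.optimize.sort.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Dynamic-partition insert: with the optimization enabled, rows are sorted on
-- the partition (and bucket) columns before reaching the reducers, so each
-- reducer holds at most one open record writer instead of one per partition.
INSERT OVERWRITE TABLE sales PARTITION (sale_date)
SELECT amount, customer_id, sale_date
FROM sales_staging;
"
{noformat}
The trade-off implied by the description is an extra sort in exchange for fewer open writers and fewer small output files, which is the motivation for proposing it as the default.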
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)