[
https://issues.apache.org/jira/browse/HIVE-14995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15854899#comment-15854899
]
Hive QA commented on HIVE-14995:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834077/HIVE-14995.1.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3401/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3401/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3401/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-02-06 22:53:21.921
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-3401/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-02-06 22:53:21.924
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at def0cde HIVE-15801. Some logging improvements in LlapTaskScheduler. (Siddharth Seth, reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at def0cde HIVE-15801. Some logging improvements in LlapTaskScheduler. (Siddharth Seth, reviewed by Sergey Shelukhin)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-02-06 22:53:22.953
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: No such file or directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}
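The "does not appear to apply with p0, p1, or p2" failure above comes from patch-strip probing: the harness tries the patch with path prefixes stripped at -p0, -p1, and -p2, and gives up when none of the resulting paths exist in the checkout. A minimal sketch of that probing logic (the helper names here are illustrative, not taken from smart-apply-patch.sh):

```python
def strip_components(path, p):
    """Drop the first p path components, like patch -p<p> does."""
    return "/".join(path.split("/")[p:])

def find_strip_level(header_path, existing_files, max_p=2):
    """Return the first -p level whose stripped path exists, else None."""
    for p in range(max_p + 1):
        if strip_components(header_path, p) in existing_files:
            return p
    return None  # mirrors "does not appear to apply with p0, p1, or p2"
```

The patch header names a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java, which -p1 would normally resolve against a git-format diff; the error suggests the attachment's paths no longer line up with the checked-out master tree.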
This message is automatically generated.
ATTACHMENT ID: 12834077 - PreCommit-HIVE-Build
> double conversion can corrupt partition column values for insert with dynamic partitions
> ----------------------------------------------------------------------------------------
>
> Key: HIVE-14995
> URL: https://issues.apache.org/jira/browse/HIVE-14995
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Assignee: Jason Dere
> Priority: Critical
> Attachments: HIVE-14995.1.patch
>
>
> {noformat}
> set hive.mapred.mode=nonstrict;
> set hive.explain.user=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.fetch.task.conversion=none;
> drop table iow1;
> create table iow1(key int) partitioned by (key2 int);
> select key, key + 1 as k1, key + 1 as k2 from src where key >= 0 order by k1 desc limit 1;
> explain
> insert overwrite table iow1 partition (key2)
> select key + 1 as k1, key + 1 as k2 from src where key >= 0 order by k1 desc limit 1;
> insert overwrite table iow1 partition (key2)
> select key + 1 as k1, key + 1 as k2 from src where key >= 0 order by k1 desc limit 1;
> {noformat}
> The result of the select query has the column converted to double (because src.key is a string).
> {noformat}
> 498 499.0 499.0
> {noformat}
> When inserting that into the table, the value is converted correctly to integer for the regular column, but not for the partition column.
> Explain for insert (extracted):
> {noformat}
>   Map Operator Tree:
>     ...
>     Select Operator
>       expressions: (UDFToDouble(key) + 1.0) (type: double)
>       ...
>     Reduce Output Operator
>       key expressions: _col0 (type: double)
>       ...
>   Reduce Operator Tree:
>     Select Operator
>       expressions: KEY.reducesinkkey0 (type: double), KEY.reducesinkkey0 (type: double)
>       ...
>     Select Operator
>       expressions: UDFToInteger(_col0) (type: int), _col1 (type: double)
>       ... followed by FSOP and load into table
> {noformat}
> The result of the select from the resulting table is:
> {noformat}
> POSTHOOK: query: select key, key2 from iow1
> ...
> POSTHOOK: Input: default@iow1@key2=499.0
> ...
> 499 NULL
> {noformat}
> Woops!
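The NULL in that output can be reproduced outside Hive: dynamic partition values are rendered into the partition directory name as strings, so the unconverted double becomes the literal text key2=499.0, which then fails to parse back as the declared int partition type. A rough sketch of that failure mode (the helper names are illustrative, not Hive's actual API):

```python
def partition_spec(col, value):
    # Dynamic partition values end up in the directory name as strings,
    # so an unconverted double leaks its decimal formatting.
    return f"{col}={value}"

def read_int_partition(spec):
    # Reading the spec back as the declared int partition type: an
    # unparseable value surfaces as NULL (None here), as in the output above.
    raw = spec.split("=", 1)[1]
    try:
        return int(raw)
    except ValueError:
        return None

spec = partition_spec("key2", 499.0)  # "key2=499.0"
```

A fix along the lines of the attached patch would presumably apply the same integer conversion to the partition column that the regular column already gets, producing key2=499 and a successful read-back.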
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)