[ https://issues.apache.org/jira/browse/HIVE-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213701#comment-14213701 ]

Hive QA commented on HIVE-8844:
-------------------------------



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12681751/HIVE-8844.3-spark.patch

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/366/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/366/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-366/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-SPARK-Build-366/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-spark-source ]]
+ [[ ! -d apache-svn-spark-source/.svn ]]
+ [[ ! -d apache-svn-spark-source ]]
+ cd apache-svn-spark-source
+ svn revert -R .
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobStatus.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/JobStateListener.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/SimpleSparkJobStatus.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobMonitor.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkStageProgress.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkClient.java'
++ svn status --no-ignore
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
+ rm -rf target datanucleus.log ant/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target shims/scheduler/target metastore/target common/target common/src/gen serde/target ql/target ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/JobMetricsListener.java
+ svn update
U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ShuffleTran.java
U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapInput.java
U    ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1639907.

Updated to revision 1639907.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12681751 - PreCommit-HIVE-SPARK-Build

> Choose a persistent policy for RDD caching [Spark Branch]
> ---------------------------------------------------------
>
>                 Key: HIVE-8844
>                 URL: https://issues.apache.org/jira/browse/HIVE-8844
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>
>         Attachments: HIVE-8844.1-spark.patch, HIVE-8844.2-spark.patch, HIVE-8844.3-spark.patch
>
>
> RDD caching is used for performance reasons in some multi-insert queries. 
> Currently, we call RDD.cache(), which implies a persistence policy of using 
> memory only. We should choose a better policy. I think memory+disk will be 
> good enough. Refer to RDD.persist() for more information.
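For illustration only, a minimal standalone sketch of the difference between the two persistence policies discussed in the description; this is not the actual Hive call site (which lives in the Spark client/plan-generation code), and the class name and local master are assumptions made for the example:

{code:java}
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

// Hypothetical example class, not part of Hive.
public class PersistPolicySketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("persist-policy-sketch").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4));

    // Current behavior described in the issue: cache() is shorthand for
    // persist(StorageLevel.MEMORY_ONLY()), so partitions that do not fit in
    // memory are dropped and recomputed the next time they are needed.
    // rdd.cache();

    // Proposed policy: MEMORY_AND_DISK keeps what fits in memory and spills
    // the remaining partitions to local disk instead of recomputing them.
    rdd.persist(StorageLevel.MEMORY_AND_DISK());

    System.out.println(rdd.count());
    sc.stop();
  }
}
{code}

The memory+disk level trades some disk I/O for avoiding recomputation of the cached branch, which is the trade-off the description above argues for in multi-insert queries.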



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
