[
https://issues.apache.org/jira/browse/HIVE-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942105#comment-15942105
]
Hive QA commented on HIVE-12316:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12776619/HIVE-12316.5.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4380/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4380/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4380/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-03-26 02:13:05.191
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-4380/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-03-26 02:13:05.194
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at b4a8af9 HIVE-16274: Support tuning of NDV of columns using lower/upper bounds (Pengcheng Xiong, reviewed by Jason Dere)
+ git clean -f -d
Removing hbase-handler/src/java/org/apache/hadoop/hive/hbase/phoenix/
Removing serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBaseWrapper.java
Removing serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java.orig
Removing serde/src/java/org/apache/hadoop/hive/serde2/lazydio/LazyDioDate.java
Removing serde/src/java/org/apache/hadoop/hive/serde2/lazydio/LazyDioTimestamp.java
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at b4a8af9 HIVE-16274: Support tuning of NDV of columns using lower/upper bounds (Pengcheng Xiong, reviewed by Jason Dere)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-03-26 02:13:06.294
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: patch failed: hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java:387
error: hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java: patch does not apply
error: itests/hive-unit/src/main/java/org/apache/hive/jdbc/miniHS2/MiniHS2.java: No such file or directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12776619 - PreCommit-HIVE-Build
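The failure above means the attached patch no longer matches current master: the hunks for HiveEndPoint.java do not apply, and MiniHS2.java has moved or been removed. A quick way to probe locally which strip level (-p0/-p1/-p2) a unified diff applies at, the same levels the precommit tooling tries before giving up, is a dry-run loop like the following. This is a minimal self-contained sketch with throwaway fixture paths, not the actual smart-apply-patch.sh logic:

```shell
# Minimal sketch: find which strip level (-p0/-p1/-p2) a unified diff
# applies at. All paths here are a throwaway fixture, not the Hive tree.
work=$(mktemp -d)
mkdir -p "$work/a/src" "$work/b/src" "$work/tree/src"
printf 'old line\n' > "$work/a/src/File.java"
printf 'new line\n' > "$work/b/src/File.java"
printf 'old line\n' > "$work/tree/src/File.java"
# The diff headers read "--- a/src/File.java" / "+++ b/src/File.java",
# so inside tree/ one leading path component must be stripped (-p1).
(cd "$work" && diff -u a/src/File.java b/src/File.java > change.patch)
level=none
for p in 0 1 2; do
  # --dry-run checks applicability without touching files; -t avoids prompts.
  if (cd "$work/tree" && patch -p"$p" --dry-run -s -t < ../change.patch >/dev/null 2>&1); then
    level=$p
    break
  fi
done
echo "patch applies with -p$level"
```

If no level succeeds, the patch needs to be rebased against current master and reattached.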
> Improved integration test for Hive
> ----------------------------------
>
> Key: HIVE-12316
> URL: https://issues.apache.org/jira/browse/HIVE-12316
> Project: Hive
> Issue Type: New Feature
> Components: Testing Infrastructure
> Affects Versions: 2.0.0
> Reporter: Alan Gates
> Assignee: Alan Gates
> Attachments: HIVE-12316.2.patch, HIVE-12316.5.patch, HIVE-12316.patch
>
>
> In working with Hive testing I have found there are several issues that are
> causing problems for developers, testers, and users:
> * Because Hive has many tunable knobs (file format, security, etc.) we end up
> with tests that cover the same functionality with different permutations of
> these features.
> * The Hive integration tests (i.e. qfiles) cannot be run on a cluster. This
> means we cannot run any of those tests at scale. The HBase community by
> contrast uses the same test suite locally and on a cluster, and has found
> that this helps them greatly in testing.
> * Golden files are a grievous evil. Test writers are forced to eyeball
> results the first time they run a test and decide whether they look
> reasonable, which is error prone and makes testing at scale impossible. And
> changes to one part of Hive often end up changing the plan (and the output of
> explain) thus breaking many tests that are not related. This is particularly
> an issue for people working on the optimizer.
> * The lack of ability to run on a cluster means that when people test Hive at
> scale, they are forced to develop custom frameworks which can't then benefit
> the community.
> * There is no easy mechanism to bring user queries into the test suite.
> I propose we build a new testing capability with the following requirements:
> * One test should be able to run all reasonable permutations (mr/tez/spark,
> orc/parquet/text/rcfile, secure/non-secure etc.) This doesn't mean it would
> run every permutation every time, but that the tester could choose which
> permutation to run.
> * The same tests should run locally and on a cluster. The tests should
> support scaling of input data from Ks to Ts.
> * Expected results should be auto-generated whenever possible, and this
> should work with the scaling of inputs. The dev should be able to provide
> expected results or custom expected result generation in cases where
> auto-generation doesn't make sense.
> * Access to the query plan should be available as an API in the tests so that
> golden files of explain output are not required.
> * This should run in maven, junit, and java so that developers do not need to
> manage yet another framework.
> * It should be possible to simulate user data (based on schema and
> statistics) and to bring in user queries, so that tests from user
> scenarios can be incorporated quickly.
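As a rough illustration of the first requirement above, one logical test would be fanned out over the engine/format matrix rather than duplicated per configuration. The loop below only enumerates the permutations named in the proposal; the commented mvn invocation and its property names are placeholders, since the proposal does not fix an interface:

```shell
# Sketch only: enumerate the permutation matrix from the proposal
# (mr/tez/spark x orc/parquet/text/rcfile).
count=0
for engine in mr tez spark; do
  for format in orc parquet text rcfile; do
    # e.g. mvn test -Dtest=TestJoin \
    #     -Dtest.engine=$engine -Dtest.format=$format   (hypothetical flags)
    count=$((count + 1))
  done
done
echo "one logical test -> $count permutations"
```

The point of the requirement is that the tester picks which of these permutations to run on a given pass, not that every run covers all of them.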
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)