Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-58056000
Sorry about that. I think Jenkins should be catching these kinds of build
failures, though. Jenkins should attempt to build the project against multiple
versions of Hadoop.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-58056879
@mccheah I agree about Jenkins catching these, but at the same time it's sort
of sketchy to rely on transitive dependencies of Hadoop, for exactly that reason.
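One way to guard against accidentally relying on a transitive dependency (not something this PR does, just an illustrative option) is the Maven Enforcer plugin's `bannedDependencies` rule, which by default also inspects transitive dependencies. A sketch, using `commons-io` as the example artifact:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>ban-commons-io</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <!-- Fail the build if commons-io appears on the classpath,
                 e.g. pulled in transitively through Hadoop -->
            <excludes>
              <exclude>commons-io:commons-io</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn dependency:tree -Dincludes=commons-io` is another way to see which module is pulling the artifact in.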
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-58057718
Fair enough. The bottom line is that we could be more explicit about this.
Perhaps something in the documentation?
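Beyond documentation, one way to be explicit: if a module genuinely uses Commons IO, declare it directly in that module's pom rather than inheriting it transitively from Hadoop. A hedged sketch of what such an entry could look like (the version number is illustrative, not taken from Spark's build):

```xml
<!-- Declare the dependency directly instead of relying on Hadoop's transitive copy -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.4</version>
</dependency>
```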
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2662
SPARK-3794 [CORE] Building spark core fails due to inadvertent dependency
on Commons IO
Remove references to Commons IO FileUtils and replace with pure Java
version, which doesn't need to traverse
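The kind of change described above can be sketched as follows. This is an illustrative pure-Java recursive delete (the class and method names are hypothetical, not the actual patch to Utils.scala), showing how a Commons IO call such as `FileUtils.deleteDirectory` can be replaced with plain `java.io` calls:

```java
import java.io.File;
import java.io.IOException;

public class DeleteRecursively {
    // Pure-Java recursive delete, avoiding the Commons IO dependency.
    static void deleteRecursively(File file) throws IOException {
        if (file.isDirectory()) {
            File[] children = file.listFiles();
            if (children != null) {           // listFiles() returns null on I/O error
                for (File child : children) {
                    deleteRecursively(child);
                }
            }
        }
        if (!file.delete() && file.exists()) {
            throw new IOException("Failed to delete: " + file);
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small temp tree, delete it, and confirm it is gone.
        File dir = new File(System.getProperty("java.io.tmpdir"), "del-demo");
        new File(dir, "sub").mkdirs();
        new File(new File(dir, "sub"), "a.txt").createNewFile();
        deleteRecursively(dir);
        System.out.println(dir.exists());
    }
}
```

Directories must be emptied before `File.delete()` succeeds, which is why the traversal recurses before deleting.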
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57937669
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21313/consoleFull)
for PR 2662 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57939900
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57939897
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21313/consoleFull)
for PR 2662 at commit
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57954626
I believe this was introduced in https://github.com/apache/spark/pull/2609
-- any idea why Jenkins didn't catch the build issue?
cc @mccheah
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57962605
@ash211 I'd guess that depends on the version of Hadoop we are
compiling against. It did cause failures on some versions of the master build.
@srowen
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2662
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/2662#discussion_r18439704
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -710,18 +708,20 @@ private[spark] object Utils extends Logging {
* Determines
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2662#discussion_r18440969
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -710,18 +708,20 @@ private[spark] object Utils extends Logging {
* Determines if