Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2711#issuecomment-61049248
Alright thanks, I'm merging this into master!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2711
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61050366
@scwf Hmm, you mean dev/run-tests does not run pyspark? I ran dev/run-tests
locally today and months ago and never hit a pyspark error. How can I
invoke the pyspark tests?
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1031
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2711#issuecomment-61050455
Very good, thanks, @andrewor14 @vanzin
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2956#issuecomment-61051328
[Test build #22522 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22522/consoleFull)
for PR 2956 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2956#issuecomment-61051336
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051413
The problem here is that Hive 0.13 upgrades the Kryo version from 2.21 to
2.22. Spark currently depends on Kryo 2.21 via Chill. In Kryo 2.22 they made a
build change
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051681
@scwf I checked dev/run-tests; it does invoke python/run-tests. Didn't you
also run it locally and succeed, or am I missing something?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051691
Just to make it more intuitive, made a dependency graph to illustrate the
issue:
![dependency-hell](http://tinyurl.com/q5opqe2)
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051870
Based on the most recent failures, it seems like somehow the test classpath
is still using kryo 2.22.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051898
@pwendell, Spark depends on Kryo 2.21, which does not shade Objenesis, while
Hive 0.13 depends on Kryo 2.22, which does shade it. So excluding will not
fix the problem
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051983
Actually, in the most recent failures it is using Kryo 2.21
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052028
```com.esotericsoftware.shaded.org.objenesis.strategy.InstantiatorStrategy```
is in kryo 2.22
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052007
@scwf the hive classes only link against kryo... they don't link against
objenesis directly. As long as kryo did not make a binary-incompatible change
between 2.21 and
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052076
Another thing to notice is that Kryo 2.21 is a really weird release. [Kryo
2.21
POM](https://repo1.maven.org/maven2/com/esotericsoftware/kryo/kryo/2.21/kryo-2.21.pom)
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/2841#issuecomment-61052070
Yes. In the task-side metadata strategy, the tasks are spawned first, and each
task will then read the metadata and drop the row groups. So if I am using
yarn, and data is
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052124
@pwendell com.esotericsoftware is already shaded in hive. Will it work if
we keep it in hive-exec.jar? Please advise.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052281
@pwendell, right, Hive 0.13.1 uses the shaded
```com.esotericsoftware.shaded.org.objenesis.strategy.InstantiatorStrategy```
from Kryo 2.22.
So if we exclude it, we
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2966
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052515
Okay, I think the issue is pretty tough. Unfortunately hive is directly
using the shaded objenesis classes. However, Spark needs Kryo 2.21, which
depends on the original
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052493
@pwendell @scwf What I mean is that com.esotericsoftware is again shaded in
hive as org.apache.hive.com.esotericsoftware. I think that's the reason why the
original hive
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052558
Unfortunately the most recent version of Chill still sticks to Kryo 2.21 :(
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052630
@liancheng yeah - I just noticed that :(
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052641
@pwendell link
https://github.com/twitter/chill/commit/3869b0122660c908e189ff08b615bd7221956224
Chill reverted the Kryo upgrade for an unknown reason
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052655
Please refer to the following hive-exec directory; as we can see, the
esotericsoftware classes are all under org.apache.hive.
HW11188:tmp1 zzhang$ ls
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052650
Actually @scwf found Chill had once tried to upgrade to Kryo 2.22, but
reverted it.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052753
Another idea - what if we upgrade Kryo to 2.22 explicitly in our core pom?
If 2.22 is binary compatible with 2.21 it could work. If chill directly uses
objenesis, we
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052820
Upgrading Kryo in core gives a compile error
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3003#issuecomment-61052816
[Test build #22523 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22523/consoleFull)
for PR 3003 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052784
Hm, since Kryo 2.21 refers to the non-shaded version of Objenesis, while
Kryo 2.22 refers to the shaded version, it should be OK to let them coexist in
Spark, right?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3003#issuecomment-61052819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052933
Yeah, we can try using Kryo 2.22 and the original Objenesis in core
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2996
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052977
@pwendell @liancheng @scwf Folks, why can't the shaded com.esotericsoftware in
hive coexist with com.esotericsoftware in spark?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053147
@zhzhan they can both exist. The issue is that Spark uses a library Chill
that requires Kryo 2.21. If 2.21 and 2.22 are not binary compatible, this will
break it and
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2996#issuecomment-61053360
LGTM. Merged into master. If the performance gain is worth the extra code
complexity, we can switch to the new implementation. Thanks!
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053748
I am testing as follows; is it OK?
```
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -125,10 +125,32 @@
     <dependency>
       <groupId>com.twitter</groupId>
```
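For context, the kind of core/pom.xml change being tested here might be sketched as follows; this is an illustrative fragment based on the thread, not the actual patch, and the chill artifact ID and version properties are assumptions:

```xml
<!-- Hypothetical sketch (not the actual patch): exclude the Kryo 2.21 that
     Chill pulls in, then depend on Kryo 2.22 directly. Coordinates are
     assumptions based on this thread. -->
<dependency>
  <groupId>com.twitter</groupId>
  <artifactId>chill_${scala.binary.version}</artifactId>
  <version>${chill.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.esotericsoftware.kryo</groupId>
      <artifactId>kryo</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.esotericsoftware.kryo</groupId>
  <artifactId>kryo</artifactId>
  <version>2.22</version>
</dependency>
```

Whether this works hinges on the binary-compatibility question raised above: Chill's classes, compiled against 2.21, would have to link cleanly against 2.22 at runtime.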
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053806
Yeah this seems reasonable to try. Whether it will work depends on whether
kryo 2.22 and 2.21 are compatible.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3011#issuecomment-61053918
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3011#issuecomment-61053944
test this please
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054034
@pwendell Kryo 2.22 in hive is already shaded by hive itself. My
understanding is that shading actually makes it private to hive itself, and it
is invisible to other
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054248
@zhzhan we use a special hive-exec jar that doesn't shade any dependencies.
The original hive-exec jar includes a bunch of other stuff that we don't want.
However, it
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3011#issuecomment-61054406
[Test build #22525 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22525/consoleFull)
for PR 3011 at commit
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054467
@pwendell Thanks for the clarification. That's what I meant: shade
com.esotericsoftware in spark-project:hive-exec.
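The relocation being described can be sketched as a maven-shade-plugin configuration; this is an illustrative fragment built from the org.apache.hive.com.esotericsoftware package name mentioned earlier in the thread, not Hive's actual build file:

```xml
<!-- Illustrative only: relocate Kryo (com.esotericsoftware) under Hive's own
     package so the copy inside hive-exec.jar cannot clash with Spark's. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.esotericsoftware</pattern>
        <shadedPattern>org.apache.hive.com.esotericsoftware</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

Relocation rewrites both the class files and every reference to them inside the shaded jar, which is why a relocated Kryo becomes invisible to other entries on the classpath.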
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590597
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590614
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590608
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054698
Actually, before we merged #2241 we did not even test core with hive-0.13,
so that's why this issue shows up here :)
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590602
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590595
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590617
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590630
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590615
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590632
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/evaluation/RegressionMetricsSuite.scala
---
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590610
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590606
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590631
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2978#discussion_r19590635
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/evaluation/RegressionMetricsSuite.scala
---
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055173
@scwf I tested it locally, but with the hive uber jar. Later, it seemed that
the spark-project jar for 0.13.1 was not available
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055622
Still failed...
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055765
@zhzhan, you can use the spark-project jar to test and it will fail, based on
#2241
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61056033
Local test result 1:
```
diff --git a/core/pom.xml b/core/pom.xml
index 5cd21e1..c87f661 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -131,6
```
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61056389
@scwf Yea, same here.
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2931#issuecomment-61056802
@harishreedharan I don't think so. The block location is called only once in
both, and the hdfs location is called only once and only if required. I don't
think there is any
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2931#discussion_r19591383
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/rdd/WriteAheadLogBackedBlockRDD.scala
---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2931#issuecomment-61057132
@tdas you also missed my two other comments
GitHub user chenghao-intel opened a pull request:
https://github.com/apache/spark/pull/3013
[SPARK-4152] [SQL] Avoid data change in CTAS while table already existed
CREATE TABLE t1 (a String);
CREATE TABLE t1 AS SELECT key FROM src; => throws an exception
CREATE TABLE if not
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3013#issuecomment-61058848
[Test build #22526 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22526/consoleFull)
for PR 3013 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3013#issuecomment-61058954
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3013#issuecomment-61058953
[Test build #22526 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22526/consoleFull)
for PR 3013 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2940#issuecomment-61059035
Jenkins, test this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61059072
[Test build #22527 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22527/consoleFull)
for PR 2978 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3011#issuecomment-61059709
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2940#issuecomment-61059725
[Test build #22529 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22529/consoleFull)
for PR 2940 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3011#issuecomment-61059703
[Test build #22525 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22525/consoleFull)
for PR 3011 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61059722
[Test build #22528 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22528/consoleFull)
for PR 2978 at commit
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/3014
[SPARK-4153][WebUI] Update the sort keys for HistoryPage
Sort Started, Completed, Duration and Last Updated by time.
You can merge this pull request into a Git repository by running:
$ git
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3014#issuecomment-61062665
[Test build #22530 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22530/consoleFull)
for PR 3014 at commit
Github user qiaohaijun commented on the pull request:
https://github.com/apache/spark/pull/1610#issuecomment-61062709
I will try it
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/2615
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2940#issuecomment-61065130
[Test build #22529 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22529/consoleFull)
for PR 2940 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2940#issuecomment-61065135
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61066249
[Test build #22527 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22527/consoleFull)
for PR 2978 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61066257
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61067173
[Test build #22531 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22531/consoleFull)
for PR 2607 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61067997
[Test build #22528 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22528/consoleFull)
for PR 2978 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61068006
[Test build #22532 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22532/consoleFull)
for PR 2607 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2978#issuecomment-61068002
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61068865
[Test build #22533 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22533/consoleFull)
for PR 2607 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61069431
[Test build #22534 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22534/consoleFull)
for PR 2607 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61069535
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61069532
[Test build #22534 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22534/consoleFull)
for PR 2607 at commit
Github user manishamde commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61069736
@jkbradley @codedeft I think I have implemented all the suggestions on the
PR except for 1) public API and 2) warning when using non-SquaredError loss
functions. I
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61070581
[Test build #22535 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22535/consoleFull)
for PR 2607 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3014#issuecomment-61070755
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3014#issuecomment-61070748
[Test build #22530 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22530/consoleFull)
for PR 3014 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2931#issuecomment-61071886
@rxin, crap, I missed that.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-61072219
[Test build #22536 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22536/consoleFull)
for PR 2607 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2931#issuecomment-61073837
[Test build #22537 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22537/consoleFull)
for PR 2931 at commit