Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1658#issuecomment-59882709
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21972/consoleFull)
for PR 1658 at commit
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59882835
retest this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19130571
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -227,6 +217,7 @@ private object TorrentBroadcast extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59883025
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21973/consoleFull)
for PR 2520 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2870#discussion_r19130591
--- Diff: python/pyspark/mllib/tests.py ---
@@ -202,6 +204,16 @@ def test_regression(self):
self.assertTrue(dt_model.predict(features[3]) > 0)
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19130611
--- Diff:
core/src/test/scala/org/apache/spark/broadcast/BroadcastSuite.scala ---
@@ -84,6 +89,24 @@ class BroadcastSuite extends FunSuite with
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59883190
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59883187
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21963/consoleFull)
for PR 2520 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-59883237
@rxin that's a fair solution, too, although the bitmap needs to be
losslessly compressed.
I could imagine cases where data is already partitioned but a user
Github user Ishiihara commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-59883834
@JoshRosen I have been looking into the compressed bitmap and already have a
good idea of how to use roaring bitmap to perform the task. If this work is not
urgent, can
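The idea under discussion, recording which map-output blocks are non-empty in a bitmap that compresses losslessly, can be sketched with the JDK's `java.util.BitSet`. This is an illustrative sketch only: `nonEmptyBlocks` is a hypothetical helper, not Spark's API, and a real implementation would use a compressed structure such as a Roaring bitmap, as the comments suggest:

```scala
import java.util.BitSet

// Illustrative sketch only: mark which map-output blocks are non-empty so
// that empty blocks never need to be fetched. A real implementation would
// use a compressed bitmap (e.g. RoaringBitmap) serialized losslessly.
def nonEmptyBlocks(blockSizes: Array[Long]): BitSet = {
  val bits = new BitSet(blockSizes.length)
  var i = 0
  while (i < blockSizes.length) {
    if (blockSizes(i) > 0L) bits.set(i)
    i += 1
  }
  bits
}

val bits = nonEmptyBlocks(Array(0L, 512L, 0L, 1024L))
// bits.get(i) is true exactly for blocks 1 and 3
```

A reducer could consult such a bitmap before issuing fetch requests and skip every index whose bit is unset.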
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-59883993
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21964/consoleFull)
for PR 2866 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-59883998
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19130957
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -227,6 +217,7 @@ private object TorrentBroadcast extends Logging {
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19131009
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -227,6 +217,7 @@ private object TorrentBroadcast extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2868#issuecomment-59884328
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21965/consoleFull)
for PR 2868 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2868#issuecomment-59884334
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59884283
@marmbrus in #2499, I reproduced the golden answers and changed some *.ql
files because of 0.13 changes; the tests passed on my local machine.
@zhzhan I don't get you, why to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2868#issuecomment-59884734
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21966/consoleFull)
for PR 2868 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59884695
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21968/consoleFull)
for PR 2850 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59884702
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2868#issuecomment-59884738
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user mvj101 opened a pull request:
https://github.com/apache/spark/pull/2872
[SPARK-3405] add subnet-id and vpc-id options to spark_ec2.py
Based on this gist:
https://gist.github.com/amar-analytx/0b62543621e1f246c0a2
We use security group ids instead of security
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2844#issuecomment-59884782
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21974/consoleFull)
for PR 2844 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2872#issuecomment-59884935
Can one of the admins verify this patch?
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59885063
retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59885505
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21975/consoleFull)
for PR 2850 at commit
Github user yu-iskw commented on the pull request:
https://github.com/apache/spark/pull/1964#issuecomment-59885635
Because this patch does not fit the Spark design concept, I am closing this PR
without merging.
Github user yu-iskw closed the pull request at:
https://github.com/apache/spark/pull/1964
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59885695
Cool, the updated patch addresses comments. It looks like the failure is caused
by a failure to fetch from git.
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59885707
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59886278
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21976/consoleFull)
for PR 2087 at commit
Github user Shiti commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59886741
The reason for this issue is that the Maven build definition and plugin
configuration of yarn-alpha and yarn-stable are the same as those for yarn
common. So, the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-59887232
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-59887226
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21970/consoleFull)
for PR 2871 at commit
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59887284
@scwf I mean the changes to some *.ql files you already did. The problem is that
we need to add another layer to take care of the compatibility test suite. I have
not found a good way to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1658#issuecomment-59887629
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1658#issuecomment-59887621
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21972/consoleFull)
for PR 1658 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59887766
Makes sense to me; this is the only way to solve this problem. I am okay
with this patch.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2701#issuecomment-59888064
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2701#issuecomment-59888060
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21967/consoleFull)
for PR 2701 at commit
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59888325
@scwf I am wondering how you handle decimal support, since hive-0.13
has new semantics for this type.
Github user chouqin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2868#discussion_r19132675
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/impl/NodeIdCache.scala ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2870#issuecomment-59888903
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21977/consoleFull)
for PR 2870 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59889343
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2850#issuecomment-59889341
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21975/consoleFull)
for PR 2850 at commit
Github user chouqin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2868#discussion_r19132973
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/impl/NodeIdCache.scala ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59889646
@marmbrus FYI: I ran the compatibility test, and so far the major
outstanding issues include 1st: decimal support, 2nd: udf7 and udf_round,
which can be fixed, but I am
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2636#issuecomment-59889793
@mdagost Thanks for working on the SerDe! I tested it locally and it works
correctly, but the unit tests for the added methods are missing. Do you mind
adding them? You
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59889967
You were right @pwendell. I was just imagining things.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2844#issuecomment-59890238
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2844#issuecomment-59890235
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21974/consoleFull)
for PR 2844 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2844#issuecomment-59891101
I've merged this into master.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59891135
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59891129
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21971/consoleFull)**
for PR 2087 at commit
Github user kmader commented on a diff in the pull request:
https://github.com/apache/spark/pull/1658#discussion_r19133684
--- Diff:
core/src/main/scala/org/apache/spark/api/java/JavaSparkContext.scala ---
@@ -220,6 +227,83 @@ class JavaSparkContext(val sc: SparkContext) extends
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2844
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-59891599
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21978/consoleFull)
for PR 2852 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59891774
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-59891918
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21978/consoleFull)
for PR 2852 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-59891923
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59892045
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21976/consoleFull)
for PR 2087 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2087#issuecomment-59892050
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19133971
--- Diff: pom.xml ---
@@ -248,7 +248,19 @@
       </snapshots>
     </pluginRepository>
   </pluginRepositories>
-
+  <!--
+  This
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134021
--- Diff: pom.xml ---
@@ -248,7 +248,19 @@
       </snapshots>
     </pluginRepository>
   </pluginRepositories>
-
+  <!--
+  This
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134051
--- Diff: pom.xml ---
@@ -994,6 +1006,34 @@
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
+
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134094
--- Diff: pom.xml ---
@@ -248,7 +248,19 @@
       </snapshots>
     </pluginRepository>
   </pluginRepositories>
-
+  <!--
+  This
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59892553
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21973/consoleFull)**
for PR 2520 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59892555
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134071
--- Diff: pom.xml ---
@@ -994,6 +1006,34 @@
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
+
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2673#discussion_r19134114
--- Diff: pom.xml ---
@@ -994,6 +1006,34 @@
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
+
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-59892653
Hey @ScrapCodes, I had some small comments. If we can use the shade plug-in
to create these effective poms, that would be great. If you add your Scala 2.11
stuff on top
Github user chouqin commented on the pull request:
https://github.com/apache/spark/pull/2868#issuecomment-59893259
@codedeft Thanks for your nice work. I have added some comments inline.
Here are some high level comments:
1. Have you tested the performance after this
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2764#discussion_r19135372
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -319,10 +315,8 @@ case object ByteType extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2870#issuecomment-59896014
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21977/consoleFull)
for PR 2870 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2870#issuecomment-59896022
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59896106
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21979/consoleFull)
for PR 2860 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59897027
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59897021
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21979/consoleFull)
for PR 2860 at commit
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/2873
[SQL][DOC] Wrong package name scala.math.sql in sql-programming-guide.md
In sql-programming-guide.md, there is a wrong package name scala.math.sql.
You can merge this pull request into a Git
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59900019
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21981/consoleFull)
for PR 2860 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2873#issuecomment-59900020
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21980/consoleFull)
for PR 2873 at commit
GitHub user SaintBacchus opened a pull request:
https://github.com/apache/spark/pull/2874
[SPARK-4033][Examples]Input of the SparkPi too big causes the emption
exception
If the input of the SparkPi args is larger than 25000, the integer 'n'
inside the code will overflow, and
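The overflow can be reproduced in isolation. Assuming SparkPi derives its sample count as roughly `100000 * slices` in `Int` arithmetic (the exact constant is an assumption here, and both function names are made up for illustration), the product wraps negative for large slice counts, while widening to `Long` first does not:

```scala
// Illustrative sketch of the reported bug, not SparkPi's actual code.
// Int multiplication silently wraps around past Int.MaxValue.
def sampleCountInt(slices: Int): Int = 100000 * slices

// Widening one operand to Long before multiplying avoids the wrap-around.
def sampleCountLong(slices: Int): Long = 100000L * slices

sampleCountInt(25000)   // wraps to a negative Int
sampleCountLong(25000)  // 2500000000, well past Int.MaxValue
```

The usual fix in such cases is to do the arithmetic in `Long` and cap the result at `Int.MaxValue` before converting back.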
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2874#issuecomment-59901903
Can one of the admins verify this patch?
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2874#discussion_r19138396
--- Diff: examples/src/main/scala/org/apache/spark/examples/SparkPi.scala
---
@@ -27,7 +27,7 @@ object SparkPi {
val conf = new
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59903991
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2860#issuecomment-59903980
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21981/consoleFull)
for PR 2860 at commit
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19138979
--- Diff: conf/ssl.conf.template ---
@@ -0,0 +1,27 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
--- End
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/2875
Fix a wrong format specifier
Just found a typo. Should not use %f for Long.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/zsxwing/spark
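The typo this PR fixes is easy to demonstrate: `%f` is a floating-point conversion, so handing it a `Long` fails at format time with an `IllegalFormatConversionException`, whereas `%d` is the correct specifier for integral types (the variable name below is made up for illustration):

```scala
import java.util.IllegalFormatConversionException

val elapsedMs: Long = 1234L  // hypothetical value being formatted

// %f expects Float/Double, so formatting a Long with it throws at runtime.
def formatsWithF: Boolean =
  try { "%f".format(elapsedMs); true }
  catch { case _: IllegalFormatConversionException => false }

// %d is the integral conversion and works for Long.
val ok = "%d ms".format(elapsedMs)  // "1234 ms"
```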
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19139237
--- Diff: core/src/main/scala/org/apache/spark/SSLOptions.scala ---
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2875#issuecomment-59905922
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21982/consoleFull)
for PR 2875 at commit
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19139391
--- Diff: core/src/main/scala/org/apache/spark/SSLOptions.scala ---
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2873#issuecomment-59906598
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2873#issuecomment-59906590
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21980/consoleFull)
for PR 2873 at commit
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19139582
--- Diff: core/src/main/scala/org/apache/spark/SSLOptions.scala ---
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation
GitHub user baishuo opened a pull request:
https://github.com/apache/spark/pull/2876
[SPARK-4034] Change the scope of guava to compile
After clicking Maven reimport for the Spark project in IDEA and then starting
sparksqlclidriver in IDEA, we will get an exception:
Exception in
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19139794
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -192,6 +196,44 @@ private[spark] class SecurityManager(sparkConf:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2876#issuecomment-59907547
CC @vanzin This should not be changed as it has to do with how Guava is
shaded, I believe. As I say, this does not seem to be a problem for the Maven
build or IDEA in
Github user jacek-lewandowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/2739#discussion_r19139861
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -18,7 +18,11 @@
package org.apache.spark
import
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2876#issuecomment-59907744
Can one of the admins verify this patch?
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/2876#issuecomment-59907899
I think the root cause is that the scope of guava in the root pom.xml is
provided; every time we do a reimport (right-click the whole project,
click maven-Reimport), the