Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4608#issuecomment-74540115
I added a 'test' for the API issue here; it shows the problem, and that the
change fixes it while still working with the common workaround people use today.
I'd like to push
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-74543897
[Test build #27558 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27558/consoleFull)
for PR 4514 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4622#issuecomment-74544862
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4625#issuecomment-74539927
[Test build #27555 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27555/consoleFull)
for PR 4625 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4625
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4622#issuecomment-74544855
[Test build #27556 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27556/consoleFull)
for PR 4622 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4625#issuecomment-74527208
[Test build #27555 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27555/consoleFull)
for PR 4625 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4608#issuecomment-74540748
[Test build #27557 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27557/consoleFull)
for PR 4608 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74538795
This uses `jps`, but that is only included with the JDK. I think it's
reasonable to expect many production deployments will only use the JRE. How
about simply `ps -p 4391
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4625#issuecomment-74539940
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-74543422
@JoshRosen do you have an opinion on whether to go for this, or just remove
the two easy instances of Clock?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4622#issuecomment-74532945
[Test build #27556 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27556/consoleFull)
for PR 4622 at commit
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4550#issuecomment-74552535
LGTM
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74552512
[Test build #27562 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27562/consoleFull)
for PR 4626 at commit
Github user prabeesh commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-74553655
@srowen are there any more updates here?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-74555104
@prabeesh this looks like it's waiting on a change to address the comment
from @dragos - no need to catch and log the exception - and to the issue of
infinite looping if
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4618#issuecomment-74561173
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user MattWhelan opened a pull request:
https://github.com/apache/spark/pull/4627
SPARK-5841: remove DiskBlockManager shutdown hook on stop
After a call to stop, the shutdown hook is redundant, and causes a
memory leak.
You can merge this pull request into a Git
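The leak this PR describes follows from a general JVM rule: `Runtime` keeps a strong reference to every registered shutdown hook until it is removed or the JVM exits. A minimal sketch of the pattern, using a hypothetical class rather than Spark's actual DiskBlockManager code:

```java
public class StoppableManager {
    // Package-visible for illustration. Runtime holds a strong reference
    // to this thread until JVM exit unless it is explicitly removed.
    final Thread shutdownHook = new Thread(this::cleanup);

    public StoppableManager() {
        Runtime.getRuntime().addShutdownHook(shutdownHook);
    }

    public void stop() {
        cleanup();
        // After stop() the hook is redundant; leaving it registered keeps
        // this object (and everything it references) reachable, a leak
        // when many managers are created and stopped in one JVM.
        Runtime.getRuntime().removeShutdownHook(shutdownHook);
    }

    private void cleanup() {
        // delete temp files, close handles, etc.
    }
}
```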
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4616#discussion_r24771625
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -51,6 +51,11 @@ object Main extends Logging {
def
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4616#discussion_r24771610
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -1064,15 +1064,18 @@ class SparkILoop(
private def
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74565756
[Test build #27568 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27568/consoleFull)
for PR 4626 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4619#issuecomment-74566837
Good catch! Merging to master and 1.3.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4587
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4609
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-74568272
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24766338
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -88,12 +88,12 @@ private[sql] class DataFrameImpl protected[sql](
}
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4521#issuecomment-74550722
@dondrake Can you also create a PR against our master? In our master,
`_type_mappings` is in
https://github.com/apache/spark/blob/master/python/pyspark/sql/types.py and
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4608
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4628#issuecomment-74562708
[Test build #27564 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27564/consoleFull)
for PR 4628 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74563971
[Test build #27567 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27567/consoleFull)
for PR 4616 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4521#issuecomment-74565322
In general you should open all PRs against master, and we will backport
them manually when merging. Please suggest this in the comments (or after the
fact add the
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74549382
@azagrebin thanks for working on this. However, deprecated just means it
will be removed in a future release, not in the current one. If we
log a warning
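The policy under discussion, keep honoring a deprecated variable for a release while logging a warning, typically looks like the sketch below. The variable and config names are hypothetical, not the ones in PR 4616:

```java
import java.util.Map;

public class DeprecatedEnv {
    // Honor a deprecated env variable for one more release, but warn so
    // users migrate to the new config key before the variable is removed.
    static String resolve(Map<String, String> env, Map<String, String> conf) {
        String legacy = env.get("SPARK_LEGACY_OPT");           // hypothetical name
        if (legacy != null) {
            System.err.println(
                "SPARK_LEGACY_OPT is deprecated and will be removed in a " +
                "future release; use spark.new.opt instead.");
            return legacy;
        }
        return conf.getOrDefault("spark.new.opt", "default");  // hypothetical key
    }
}
```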
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74552238
test this please.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4622#issuecomment-74559761
@viirya For new algorithms, we need to discuss whether we want to include
it in MLlib on the JIRA page first (before implementing it). Could you describe
more about this
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4618#issuecomment-74561163
[Test build #27561 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27561/consoleFull)
for PR 4618 at commit
Github user dondrake commented on the pull request:
https://github.com/apache/spark/pull/4521#issuecomment-74564901
So, I should not be updating branch-1.3? Should I just create a branch
off of master with my changes?
Are my changes for 1.2.x okay to go on branch-1.2?
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/4420#issuecomment-74558171
@andrewor14 what do you think about the comments from @mingyukim ?
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/4626#discussion_r24772103
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -163,7 +163,14 @@ private[hive] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74567551
[Test build #27563 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27563/consoleFull)
for PR 4616 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74567563
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4604#issuecomment-74568537
Thanks, merged to master and 1.3
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4604
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4592
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4620#issuecomment-74549716
retest this please
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3781#issuecomment-74550130
This looks good to me. Most of it makes some fields private that look
like they should be, simplifies one synchronization primitive, and fixes a
theoretical
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4620#issuecomment-74558293
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/4628
[SPARK-5840][SQL] HiveContext cannot be serialized due to tuple extraction
TODO: Add a test suite that checks serializability of all SQL classes that
should be serializable.
You can merge this pull
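The serializability check mentioned in the TODO can be done with a plain round-trip write; a sketch of such a helper (not the eventual Spark test suite):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializabilityCheck {
    // Attempts to fully serialize an object graph; returns false if any
    // captured reference (e.g. a tuple extraction that pulls in its
    // enclosing instance) is not serializable.
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```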
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4627#issuecomment-74562181
ok to test
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4627#issuecomment-74562163
LGTM
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/4550#issuecomment-74566293
@rxin would you mind taking a look at this, as someone familiar with the
shuffle? I noted some measurements of the effect this has on Shuffle Write
Time in the
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4619
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-74568258
[Test build #27569 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27569/consoleFull)
for PR 2851 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-74568269
[Test build #27569 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27569/consoleFull)
for PR 2851 at commit
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/4626
[SPARK-5839][SQL] HiveMetastoreCatalog does not recognize table names and
aliases of data source tables.
JIRA: https://issues.apache.org/jira/browse/SPARK-5839
You can merge this pull request into a
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4620#issuecomment-74550403
[Test build #27560 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27560/consoleFull)
for PR 4620 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74551788
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4608#issuecomment-74552316
[Test build #27557 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27557/consoleFull)
for PR 4608 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-74554810
[Test build #27558 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27558/consoleFull)
for PR 4514 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-74554814
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4627#issuecomment-74562692
[Test build #27565 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27565/consoleFull)
for PR 4627 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4617#issuecomment-74562712
[Test build #27566 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27566/consoleFull)
for PR 4617 at commit
Github user dondrake commented on the pull request:
https://github.com/apache/spark/pull/4521#issuecomment-74550054
@yhuai I have commits for branch-1.2 and branch-1.3 for this fix. Is that
not correct?
I have finished the changes for branch-1.2 to resolve the failing test. Please
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4608#issuecomment-74552325
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-74556639
np @pwendell, I understand there are some complicated stability concerns here.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74557556
[Test build #27563 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27563/consoleFull)
for PR 4616 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4627#issuecomment-74561352
Can one of the admins verify this patch?
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3781#discussion_r24771215
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
---
@@ -31,16 +34,16 @@ private[spark] class
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-74562509
@petro-rudenko (Apologies for the slow response; I've been without
Internet for a week.) About feature scaling, I don't think it's a problem in
terms of correctness
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74562965
[Test build #27562 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27562/consoleFull)
for PR 4626 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74562975
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user petro-rudenko commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-74563955
@jkbradley I can call setValidateData in GLM, but not in the LogisticRegression
class from the new API. For my case I found a trick to customize anything I want
(add
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4618#issuecomment-74550414
[Test build #27561 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27561/consoleFull)
for PR 4618 at commit
Github user azagrebin commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74557764
Hi @andrewor14, thanks for the comment. So the variables need to be
supported until some future release, and a warning should be logged that they are
deprecated (may be
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4620#issuecomment-74558284
[Test build #27560 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27560/consoleFull)
for PR 4620 at commit
Github user azagrebin commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74564112
Yeah, that's really better; thanks for the hint. Committed again.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/4628#discussion_r24773167
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -204,22 +204,25 @@ class HiveContext(sc: SparkContext) extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4628#discussion_r24773498
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -204,22 +204,25 @@ class HiveContext(sc: SparkContext) extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-74568860
[Test build #27570 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27570/consoleFull)
for PR 2851 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4627#issuecomment-74571626
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4617#issuecomment-74571613
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4616#issuecomment-74573712
[Test build #27567 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27567/consoleFull)
for PR 4616 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4626#issuecomment-74574708
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4629#issuecomment-74575564
[Test build #27573 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27573/consoleFull)
for PR 4629 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4629#discussion_r24778077
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -961,7 +961,14 @@ class SparkContext(config: SparkConf) extends Logging
with
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/4633
[SPARK-1600] Refactor FileInputStream tests to remove Thread.sleep() calls
and SystemClock usage (branch-1.2 backport)
(This PR backports #3801 into `branch-1.2` (1.2.2))
This patch
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4633#issuecomment-74577468
[Test build #27577 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27577/consoleFull)
for PR 4633 at commit
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/4632#issuecomment-74578411
scala style check should pass now
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4634#issuecomment-74579515
Yes, there is also a `Serializer` argument exposed in the Scala API, and
not yet in the Java API. My general theory is to expose one method in Java with
all parameters,
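The convention described here, a single Java method exposing all parameters plus convenience overloads that fill in defaults, might look like this; the method and parameter names below are hypothetical:

```java
public class JavaApiSketch {
    // The one full method: every parameter the Scala API exposes.
    static String save(String path, String format, boolean overwrite) {
        return path + " as " + format + (overwrite ? " (overwrite)" : "");
    }

    // Convenience overloads delegate with defaults, standing in for
    // Scala's default arguments, which Java callers cannot use directly.
    static String save(String path, String format) {
        return save(path, format, false);
    }

    static String save(String path) {
        return save(path, "parquet", false);
    }
}
```

Keeping the defaults in the delegating overloads means the full method stays the single source of behavior.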
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4626#discussion_r24779968
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -433,7 +441,13 @@ private[hive] class HiveMetastoreCatalog(hive:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4617#issuecomment-74582760
[Test build #27591 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27591/consoleFull)
for PR 4617 at commit
Github user MattWhelan commented on the pull request:
https://github.com/apache/spark/pull/4166#issuecomment-74583994
Hey, sorry I was away for a bit.
After an extended discussion with Vanzin on his
https://github.com/apache/spark/pull/3233, I'm convinced that PR covers the
Github user MattWhelan closed the pull request at:
https://github.com/apache/spark/pull/4166
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/4638
SPARK-5850: Remove experimental label for Scala 2.11 and FlumePollingStream
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/pwendell/spark
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/4618#discussion_r24775345
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -802,7 +809,14 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4630#issuecomment-74576192
[Test build #27575 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27575/consoleFull)
for PR 4630 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4632#issuecomment-74577799
[Test build #27578 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27578/consoleFull)
for PR 4632 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4629#discussion_r24778143
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -330,6 +331,15 @@ private[spark] object PythonRDD extends Logging {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4618#issuecomment-74584699
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4618#issuecomment-74584687
[Test build #27572 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27572/consoleFull)
for PR 4618 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4637#issuecomment-74585553
[Test build #27592 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27592/consoleFull)
for PR 4637 at commit