GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/5136
[SPARK-6468][Block Manager] Fix the race condition of subDirs in
DiskBlockManager
There are two race conditions involving `subDirs` in `DiskBlockManager`:
1. `getAllFiles` does not use correct
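The kind of race this PR fixes can be sketched as lazy creation of sub-directories under a shared lock, so that a concurrent reader never sees a partially published slot. This is an illustrative Java sketch of the general pattern, not Spark's actual `DiskBlockManager` code; all names are hypothetical.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: lazily created sub-directories guarded by one lock.
// Without the synchronization, a reader could observe a slot that is
// created on disk but not yet visible in the array (or vice versa).
class LazySubDirs {
    private final File root;
    private final File[] subDirs;

    LazySubDirs(File root, int numSubDirs) {
        this.root = root;
        this.subDirs = new File[numSubDirs];
    }

    File getSubDir(int i) {
        // Creation and publication of the slot happen atomically.
        synchronized (subDirs) {
            File d = subDirs[i];
            if (d == null) {
                d = new File(root, String.format("%02x", i));
                d.mkdirs();
                subDirs[i] = d;
            }
            return d;
        }
    }

    // Listing must read under the same lock to get a consistent snapshot.
    List<File> getAllDirs() {
        synchronized (subDirs) {
            List<File> out = new ArrayList<>();
            for (File d : subDirs) {
                if (d != null) out.add(d);
            }
            return out;
        }
    }
}
```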
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5080#discussion_r26934440
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -198,6 +203,44 @@ class SqlParser extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5136#issuecomment-84989778
[Test build #28997 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28997/consoleFull)
for PR 5136 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-84990737
[Test build #28996 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28996/consoleFull)
for PR 4419 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-84990748
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5080#issuecomment-84996742
LGTM in general, except some small issues.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5134#issuecomment-84998457
Seems more reasonable to me if we do this in `Optimizer`, what do you think?
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84999785
However, in my experience, if AMRMClient.unregisterApplicationMaster has
not been called, Yarn will restart the AM until exceeding the max attempts. So
if the user
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5080#discussion_r26934690
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -198,6 +203,44 @@ class SqlParser extends
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5080#discussion_r26935051
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -198,6 +203,44 @@ class SqlParser extends
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84998768
@ryan-williams Are you seeing this exception with spark 1.3 then or with
older version? (ie pr4773 didn't fix this particular issue)
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5014#issuecomment-8460
LGTM
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84969583
[Test build #28994 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28994/consoleFull)
for PR 4697 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5127#issuecomment-84942724
[Test build #28995 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28995/consoleFull)
for PR 5127 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4504#issuecomment-84962071
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-84961719
[Test build #28996 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28996/consoleFull)
for PR 4419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4504#issuecomment-84962033
[Test build #28990 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28990/consoleFull)
for PR 4504 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5135#issuecomment-84965019
It seems reasonable, but adds a fair bit of code to the Java example. I'm
not sure if the intent was that it be runnable, or simply illustrate a snippet
of the core API
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5129#issuecomment-84964984
[Test build #28991 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28991/consoleFull)
for PR 5129 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5129#issuecomment-84964991
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/5014#issuecomment-84945003
Not super familiar with this part of the code, but according to the context and discussion, I think this change makes sense. @yhuai Could you help confirm this?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5135#issuecomment-84963828
Can one of the admins verify this patch?
GitHub user petro-rudenko opened a pull request:
https://github.com/apache/spark/pull/5135
[ML][docs][minor] Define LabeledDocument/Document classes in CV example
To make the Cross-Validation example code snippet easier to copy/paste, we need to define
LabeledDocument/Document in it, since they
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5127#issuecomment-84967253
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5127#issuecomment-84967240
[Test build #28995 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28995/consoleFull)
for PR 5127 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84969598
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84971389
[Test build #28992 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28992/consoleFull)
for PR 5042 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84971396
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user ypcat commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84941735
I changed the design to allow more general usage. Users can set
spark.sql.parquet.output.committer.class to a class extending
ParquetOutputFormat.
I still include the
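The setting described above can be sketched as a config fragment. The property key is the one quoted in the comment; the committer class name is a hypothetical placeholder for a user-supplied class.

```properties
# spark-defaults.conf — illustrative only; the class name is a placeholder
# for a user-supplied committer, per the design described above.
spark.sql.parquet.output.committer.class  com.example.MyParquetOutputCommitter
```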
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84940461
[Test build #28993 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28993/consoleFull)
for PR 5042 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84965366
[Test build #28993 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28993/consoleFull)
for PR 5042 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84965374
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-85044321
There is one for mapjoin_addjar, and I'll add that after I refactor #4586.
So I'd like this one merged first, thanks a lot!
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/5136#discussion_r26945942
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -91,7 +90,12 @@ private[spark] class DiskBlockManager(blockManager:
GitHub user watermen opened a pull request:
https://github.com/apache/spark/pull/5132
[SPARK-6397][SQL] Check the missingInput simply
https://github.com/apache/spark/pull/5082
You can merge this pull request into a Git repository by running:
$ git pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5132#issuecomment-84855754
[Test build #28984 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28984/consoleFull)
for PR 5132 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84869736
Also, how were we ending up with a success before? If anything forced us
to break out of that try block, it seems like we wouldn't call finish with
`SUCCESS`. Or does
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-84875097
[Test build #28981 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28981/consoleFull)
for PR 4930 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-84875142
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84891332
[Test build #28985 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28985/consoleFull)
for PR 4697 at commit
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5095#issuecomment-85030899
@marsishandsome This seems like a bit of an odd use case. If
spark.driver.host is something different than the actual host, then errors are going
to occur. Why are
GitHub user yanboliang opened a pull request:
https://github.com/apache/spark/pull/5137
[SPARK-6255] [MLLIB] Python API parity check for classification
Python API parity check for classification
Support multiclass classification in pyspark
You can merge this pull request into a
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5136#discussion_r26943512
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -91,7 +90,12 @@ private[spark] class DiskBlockManager(blockManager:
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-85025856
LGTM. Is there any Hive query test that we should enable?
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-85030300
@tgravescs Thanks for the clarification
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5137#issuecomment-85035008
[Test build #28998 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28998/consoleFull)
for PR 5137 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5134#issuecomment-85009015
Good suggestion. I will do that later. Thanks.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5136#issuecomment-85022283
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5136#issuecomment-8508
[Test build #28997 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28997/consoleFull)
for PR 5136 at commit
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85002672
@sryza You can request a fraction of a CPU from Mesos; however, I hadn't
realized that we have the wrong type in this patch. We should change it to Double
instead of Int.
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-85010591
@zsxwing Can you clarify this? Are you running something that never
starts a SparkContext? I'm not sure what you mean by "the user doesn't create
the spark context", but the
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-85015152
That case is basically not handled right now. We expect one of the first
things is to create the SparkContext which is why the AM waits for the spark
context to be
Github user gvramana commented on the pull request:
https://github.com/apache/spark/pull/5138#issuecomment-85061841
@marmbrus , @yhuai Please review the same. Thanks
GitHub user gvramana opened a pull request:
https://github.com/apache/spark/pull/5138
[SPARK-6451][SQL] supported code generation for CombineSum
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gvramana/spark sum_fix_codegen
Github user gvramana commented on the pull request:
https://github.com/apache/spark/pull/4466#issuecomment-85063529
@yhuai, Submitted PR for the code gen with Sum #5138 . Thanks
Github user yanboliang commented on the pull request:
https://github.com/apache/spark/pull/5137#issuecomment-85059666
This PR is a work in progress; I still need to make
LogisticRegressionModel.predict handle multiclass classification.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/5136#discussion_r26946846
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -91,7 +90,12 @@ private[spark] class DiskBlockManager(blockManager:
Github user yuecong commented on the pull request:
https://github.com/apache/spark/pull/5111#issuecomment-85060322
It is IPython 3.0. I believe one of the motivations of the IPython notebook
is its chart-drawing functionality. This is the reason why the pylab library is
specially
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5138#issuecomment-85059842
Can one of the admins verify this patch?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/5014#discussion_r26950618
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -557,7 +557,6 @@
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5134#issuecomment-84945642
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5134#issuecomment-84945622
[Test build #28988 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28988/consoleFull)
for PR 5134 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4996#issuecomment-84961259
[Test build #28989 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28989/consoleFull)
for PR 4996 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4996#issuecomment-84961267
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/3221#issuecomment-84827225
I looked more into it, and I will open up an API in Breeze
QuadraticMinimizer where, in place of a DenseMatrix gram, an upper triangular gram
can be sent, but the inner
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84829083
If the driver throws an exception, the exception will be the cause of the
`InvocationTargetException`. So you are logging the exception from the
reflection API rather than
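The point made above can be sketched in a few lines: an exception thrown inside a reflectively invoked method surfaces as the *cause* of an `InvocationTargetException`, so logging the caught exception directly reports the reflection wrapper rather than the user's actual error. This is an illustrative Java sketch; the names are hypothetical, not Spark's launcher code.

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Sketch: invoke a "user main" via reflection and unwrap the real error.
class ReflectiveInvoke {
    public static void userMain() {
        throw new IllegalStateException("user error");
    }

    // Returns the user's exception, unwrapped from the reflection wrapper.
    static Throwable invokeAndUnwrap() {
        try {
            Method m = ReflectiveInvoke.class.getMethod("userMain");
            m.invoke(null);
            return null;
        } catch (InvocationTargetException e) {
            return e.getCause(); // the interesting exception is the cause
        } catch (ReflectiveOperationException e) {
            return e; // lookup/access failure, not the user's error
        }
    }
}
```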
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1482#issuecomment-84830436
[Test build #28980 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28980/consoleFull)
for PR 1482 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1482#issuecomment-84830480
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84835800
As @zsxwing says, it appears that the code is already trying to handle this
case. Do `InvocationTargetExceptions` only wrap `Exception`s and not all
`Throwable`s? If
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5125#discussion_r27003271
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -70,7 +71,8 @@ class JdbcRDD[T: ClassTag](
}).toArray
}
-
Github user acvogel commented on the pull request:
https://github.com/apache/spark/pull/5109#issuecomment-85339056
@chammas Good question! I had tested all combinations except for the case
where the master is a spot instance and the slaves are on-demand instances.
There is a bug with
Github user gvramana commented on the pull request:
https://github.com/apache/spark/pull/5138#issuecomment-85340588
ok, I will combine them.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-85346466
[Test build #29052 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29052/consoleFull)
for PR 4708 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-85346482
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user zzcclp opened a pull request:
https://github.com/apache/spark/pull/5154
Improve ScalaUdf called performance.
As described in SPARK-6483, ScalaUdf has low performance because it
calls *asInstanceOf* to convert every record.
With this change, the performance of ScalaUdf
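The optimization being described can be sketched as hoisting the type dispatch out of the per-record loop: instead of inspecting every value's runtime type, resolve a typed converter once from the known column type and reuse it. This is an illustrative Java sketch, not Spark's actual ScalaUdf internals; all names are hypothetical.

```java
import java.util.function.Function;

// Sketch: per-record type dispatch vs. a converter resolved once up front.
class ConverterSketch {
    // Per-record path: a runtime type check runs for every single value.
    static Object convertPerRecord(Object v) {
        if (v instanceof Integer) return ((Integer) v).longValue();
        if (v instanceof String)  return ((String) v).trim();
        throw new IllegalArgumentException("unsupported type");
    }

    // One-time path: choose the converter from the schema type up front,
    // so the loop body is a plain function call with no dispatch.
    static Function<Object, Object> converterFor(Class<?> type) {
        if (type == Integer.class) return v -> ((Integer) v).longValue();
        if (type == String.class)  return v -> ((String) v).trim();
        throw new IllegalArgumentException("unsupported type");
    }

    static Object[] convertColumn(Object[] column, Class<?> type) {
        Function<Object, Object> conv = converterFor(type); // resolved once
        Object[] out = new Object[column.length];
        for (int i = 0; i < column.length; i++) {
            out[i] = conv.apply(column[i]); // no per-record type dispatch
        }
        return out;
    }
}
```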
Github user zzcclp commented on the pull request:
https://github.com/apache/spark/pull/5154#issuecomment-85353974
Before this change it took 17 minutes; now it takes 5 minutes, which is
the same as *HiveContext + udf floor* and *non-udf*
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5125#issuecomment-85338836
[Test build #29053 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29053/consoleFull)
for PR 5125 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5125#issuecomment-85341286
[Test build #29054 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29054/consoleFull)
for PR 5125 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5135#issuecomment-85346375
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5135#issuecomment-85346371
[Test build #29051 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29051/consoleFull)
for PR 5135 at commit
Github user kellyzly commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-85348209
@steveloughran: I have updated the code according to your comments: [make
CryptoOutputStream.scala#close
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-85348255
[Test build #29055 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29055/consoleFull)
for PR 4491 at commit
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-85348619
Ok, this failure doesn't make any sense because you already added the
relevant Mima exclude... my only guess is that these errors sometimes happen
because something
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-85348733
[Test build #29050 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29050/consoleFull)
for PR 4588 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-85348738
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5154#issuecomment-85351687
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5125#issuecomment-85338913
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5125#issuecomment-85338909
[Test build #29053 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29053/consoleFull)
for PR 5125 at commit
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-85348266
All the runtime enhancements are being added to Breeze in this PR:
https://github.com/scalanlp/breeze/pull/386
Please let me know if there are additional
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/3221#issuecomment-85351062
All the runtime enhancements are being added to Breeze in this PR:
https://github.com/scalanlp/breeze/pull/386
Please let me know if there are additional
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5154#issuecomment-85352825
Hmm, have you measured the performance gain from this change? From my
understanding, the bottleneck is in the function call
`ScalaReflection.convertToScala`
Github user hunglin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5124#discussion_r26977523
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -805,7 +806,7 @@ class DAGScheduler(
}
val
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5093#discussion_r26982276
--- Diff: dev/tests/pr_new_dependencies.sh ---
@@ -0,0 +1,85 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5093#discussion_r26982250
--- Diff: dev/run-tests-jenkins ---
@@ -176,7 +183,8 @@ done
# run tests
{
- timeout ${TESTS_TIMEOUT} ./dev/run-tests
+ # timeout
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/5144
[SPARK-][MESOS] Add cluster mode support for Mesos
This patch adds the support for cluster mode to run on Mesos.
It introduces a new Mesos framework dedicated to launch new apps/drivers,
and
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5145#issuecomment-85223273
[Test build #29031 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29031/consoleFull)
for PR 5145 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-85223186
As I mentioned before, you need to take the invalid CEN header check from
AbstractCommandBuilder and do it in the shell scripts now, otherwise that check
is useless.
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/5079
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-85085332
After communicating with @adrian-wang offline, I realized this PR still
leaves some class loader problems, so I'm closing this one.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5014#issuecomment-85095685
[Test build #28999 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28999/consoleFull)
for PR 5014 at commit