Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83359940
The tests in the 2 PRs are different: this PR is about the UDF jar, but #4586
is about the SerDe jar. They may be loaded by different class loaders.
@jeanlyn can
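The class-loader distinction matters because, on the JVM, a type is identified by its name *and* its defining loader. A minimal, hypothetical Java sketch (the `IsolatingLoader` and `Payload` names are illustrative only, not Spark code):

```java
import java.io.InputStream;

public class LoaderDemo {
    // Defines the Payload class itself instead of delegating to its parent,
    // simulating two jars loaded by two isolated class loaders.
    static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (name.equals("LoaderDemo$Payload")) {
                try (InputStream in = getResourceAsStream("LoaderDemo$Payload.class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static class Payload {}

    public static void main(String[] args) throws Exception {
        Class<?> a = new IsolatingLoader().loadClass("LoaderDemo$Payload");
        Class<?> b = new IsolatingLoader().loadClass("LoaderDemo$Payload");
        System.out.println(a.getName().equals(b.getName())); // true: same bytecode
        System.out.println(a == b);                          // false: different loaders
        // An instance created under one loader is not an instance of the
        // "same" class as seen by another loader.
        System.out.println(a.getDeclaredConstructor().newInstance() instanceof Payload); // false
    }
}
```

If the UDF jar and the SerDe jar end up in different loaders, an object created under one fails an `instanceof` check or cast against the other, even though the class names match.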
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/5004#issuecomment-83369196
Cool, merging this into master. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83372666
@jeanlyn we are not getting the same thing. Even our .q files differ. I don't
have CHAR in my .q file.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5008
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83371597
[Test build #28855 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28855/consoleFull)
for PR 4491 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83371598
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5055#issuecomment-83361565
[Test build #28854 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28854/consoleFull)
for PR 5055 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83381772
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83381297
[Test build #28858 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28858/consoleFull)
for PR 4491 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5004
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83381766
[Test build #28858 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28858/consoleFull)
for PR 4491 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5044#issuecomment-83384314
@marmbrus I have updated the design on the JIRA.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83370946
[Test build #28855 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28855/consoleFull)
for PR 4491 at commit
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83372419
@chenghao-intel my full code is
```java
import org.apache.hadoop.hive.ql.exec.UDF;

public class hello extends UDF {
    // the archive truncates the message here; a minimal evaluate
    // method completing the snippet would look like:
    public String evaluate(String s) {
        return s;
    }
}
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-83372401
[Test build #28856 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28856/consoleFull)
for PR 3314 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83379582
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83379060
[Test build #28857 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28857/consoleFull)
for PR 4491 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83379569
[Test build #28857 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28857/consoleFull)
for PR 4491 at commit
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83383402
I also don't have CHAR in `mapjoin_addjar.q`. I only found one
`mapjoin_addjar.q`, and the path of my file is
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-83372560
[Test build #28856 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28856/consoleFull)
for PR 3314 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-83372561
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4850#discussion_r26803803
--- Diff:
core/src/main/scala/org/apache/spark/executor/CommitDeniedException.scala ---
@@ -22,14 +22,12 @@ import org.apache.spark.{TaskCommitDenied,
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5092#issuecomment-83789238
OK, I am convinced; merge it. I think both Hive profiles are needed in this
example?
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4850#discussion_r26804299
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -156,12 +160,19 @@ private[spark] class Executor(
serializedTask:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83796187
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83796166
[Test build #28896 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28896/consoleFull)
for PR 5093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83796175
[Test build #28896 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28896/consoleFull)
for PR 5093 at commit
Github user davies closed the pull request at:
https://github.com/apache/spark/pull/5077
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83775810
[Test build #2 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/2/consoleFull)
for PR 5093 at commit
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-83776903
Yeah, if it can be parallelized by data, it's best to do that and not do any
GraphX joins, because with GraphX the painful thing is balancing the graph, and
most of the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4850#issuecomment-83783774
[Test build #28892 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28892/consoleFull)
for PR 4850 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83783526
[Test build #28891 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28891/consoleFull)
for PR 5075 at commit
Github user coderxiang commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83783677
@rxin @mengxr per the comments, I created `MLPairRDDFunctions.scala` and
moved the function there in the update.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4850#issuecomment-83787927
LGTM.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83788993
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83788987
[Test build #28894 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28894/consoleFull)
for PR 5093 at commit
GitHub user marsishandsome opened a pull request:
https://github.com/apache/spark/pull/5095
Driver's Block Manager does not use spark.driver.host in Yarn-Client mode
In my cluster, the YARN node does not know the client's host name,
so I set spark.driver.host to the IP address
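As a sketch of the workaround described (the address and application names below are hypothetical; `spark.driver.host` is the real configuration key):

```shell
# yarn-client mode: executors must be able to reach the driver's block manager.
# If the cluster cannot resolve the client's hostname, pin a routable IP:
spark-submit --master yarn-client \
  --conf spark.driver.host=10.0.0.5 \
  --class com.example.App app.jar
```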
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26809195
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4685#issuecomment-83823596
@kai-zeng in `HiveTypeCoercion`, there are lots of rules to
guarantee/produce the correct data type for built-in expressions like this one.
I mean, instead of
GitHub user nishkamravi2 reopened a pull request:
https://github.com/apache/spark/pull/5085
[SPARK-6406] Launcher backward compatibility issue-- hadoop should not be
mandatory in spark assembly name
You can merge this pull request into a Git repository by running:
$ git pull
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-83825207
And btw, we need to check this in
Github user nishkamravi2 closed the pull request at:
https://github.com/apache/spark/pull/5085
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83778257
[Test build #28889 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28889/consoleFull)
for PR 5094 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83778614
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83778607
[Test build #28889 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28889/consoleFull)
for PR 5094 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83795883
[Test build #28897 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28897/consoleFull)
for PR 5075 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3980#issuecomment-83797419
@srowen Don't worry, I'm gradually merging changes from this PR into #4851. An
[experimental Jenkins builder] [1] was also set up for this. These are still
WIP because
Github user liancheng closed the pull request at:
https://github.com/apache/spark/pull/3980
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83797579
[Test build #28898 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28898/consoleFull)
for PR 5075 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/5077#issuecomment-83803157
Closing this one; a new one will be opened by @shivaram
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26808894
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackend.scala ---
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26808896
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackend.scala ---
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83823945
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83823917
[Test build #28898 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28898/consoleFull)
for PR 5075 at commit
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-83824416
Please ignore the comment above (I misread the regex). However, we do need
to relax the check on Hadoop. CDH itself names the outermost jar
spark-assembly.jar. As
GitHub user mbonaci opened a pull request:
https://github.com/apache/spark/pull/5097
[SPARK-6370][core] Documentation: Improve all 3 docs for RDD.sample
The docs for the `sample` method were insufficient, now less so.
You can merge this pull request into a Git repository by
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5096#issuecomment-83808522
[Test build #28899 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28899/consoleFull)
for PR 5096 at commit
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/5096#issuecomment-83807329
@pwendell @rxin We might push some more fixes as they come in, but I think
this should be ready for review
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26808779
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4850#issuecomment-83810758
[Test build #28892 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28892/consoleFull)
for PR 4850 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4850#issuecomment-83810794
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83821404
[Test build #28897 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28897/consoleFull)
for PR 5075 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83821436
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5085#issuecomment-83825622
Sorry, clicked on the close button in error.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5056#discussion_r26804471
--- Diff: graphx/src/main/scala/org/apache/spark/graphx/VertexRDD.scala ---
@@ -128,7 +128,7 @@ abstract class VertexRDD[VD](
*
* @param other
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83787628
[Test build #28893 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28893/consoleFull)
for PR 5094 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5075#discussion_r26805662
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -163,6 +163,28 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
}
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/5096
[SPARK-5654] Integrate SparkR
This pull request integrates SparkR, an R frontend for Spark. The SparkR
package contains both RDD and DataFrame APIs in R and is integrated with
Spark's submission
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26809045
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4685#discussion_r26810586
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -18,21 +18,28 @@
package
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83775816
[Test build #2 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/2/consoleFull)
for PR 5093 at commit
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/5094
[SPARK-6367][SQL] Use the proper data type for those expressions that are
hijacking existing data types.
This PR adds internal UDTs for expressions that are hijacking existing data
types.
The
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5077#issuecomment-83775633
[Test build #28885 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28885/consoleFull)
for PR 5077 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83780611
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83794104
[Test build #28895 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28895/consoleFull)
for PR 5093 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-83794115
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5095#issuecomment-83802093
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83810324
[Test build #28891 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28891/consoleFull)
for PR 5075 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5075#issuecomment-83810341
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26808740
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/5088#issuecomment-83810423
LGTM, FWIW :)
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26809309
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26809287
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83812455
[Test build #28893 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28893/consoleFull)
for PR 5094 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5094#issuecomment-83812472
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/5031#discussion_r26811012
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -903,6 +908,30 @@ object Client extends Logging {
}
/**
Github user kdatta commented on the pull request:
https://github.com/apache/spark/pull/3234#issuecomment-83823758
I had to add the JUnit dependency in graphx/pom.xml to compile. Did you see
this issue? We might have to update the POM file.
-Kushal.
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4930#discussion_r26811257
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/test/TestHive.scala ---
@@ -151,7 +152,15 @@ class TestHiveContext(sc: SparkContext)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5097#issuecomment-83832497
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5044#issuecomment-83393681
[Test build #28859 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28859/consoleFull)
for PR 5044 at commit
Github user tanyinyan commented on the pull request:
https://github.com/apache/spark/pull/5055#issuecomment-83407453
Yes, I have made this constructor and setter public.
Github user advancedxy commented on the pull request:
https://github.com/apache/spark/pull/4783#issuecomment-83410489
@shivaram @srowen Tuple2(Int, Int) gets specialized to the Tuple2$mcII$sp
class. But Tuple2$mcII$sp is a subclass of Tuple2, so in our implementation,
the specialized
GitHub user jongyoul opened a pull request:
https://github.com/apache/spark/pull/5088
[SPARK-6286][Mesos][minor] Handle missing Mesos case TASK_ERROR
- Made TaskState.isFailed for handling TASK_LOST and TASK_ERROR and
synchronizing CoarseMesosSchedulerBackend and
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-83433562
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5088#issuecomment-83440771
[Test build #28865 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28865/consoleFull)
for PR 5088 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3314#issuecomment-83440739
[Test build #28866 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28866/consoleFull)
for PR 3314 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-83399347
[Test build #28860 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28860/consoleFull)
for PR 4610 at commit
GitHub user ypcat opened a pull request:
https://github.com/apache/spark/pull/5087
[SPARK-6408] [SQL] Fix JDBCRDD filtering string literals
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ypcat/spark spark-6408
Alternatively
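The class of bug addressed here (filter pushdown renders string literals into SQL text, so they must be quoted and escaped) can be sketched in a few lines; `SqlQuote` below is a hypothetical illustration, not the PR's actual code:

```java
public class SqlQuote {
    // Quote a string value for embedding in a SQL WHERE clause:
    // wrap it in single quotes and double any embedded single quote.
    static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        // Without escaping, a value like O'Brien would break the query.
        System.out.println("WHERE name = " + quote("O'Brien"));
        // -> WHERE name = 'O''Brien'
    }
}
```

Doubling the embedded single quote is standard SQL escaping; where the API allows it, parameterized statements avoid the issue entirely, but pushed-down filters are built directly into the query string.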
Github user haiyangsea commented on the pull request:
https://github.com/apache/spark/pull/5082#issuecomment-83411727
It looks like a great feature!
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-83422662
This patch has been forgotten by us...
@srowen @markhamstra @kayousterhout
This patch can prevent the endless retry that may occur after an
executor is killed or