Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15882
**[Test build #68614 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68614/consoleFull)**
for PR 15882 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87794405
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87799629
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAggSuite.scala
---
@@ -0,0 +1,284 @@
+/*
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15880
Yeah, you are totally right about that. I like this approach; the only thing
bothering me is that this breaks backwards compatibility.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15857#discussion_r87809078
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/FoldablePropagationSuite.scala
---
@@ -118,14 +118,30 @@ class
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15763#discussion_r87813912
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1041,12 +1070,24 @@ class Analyzer(
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15763#discussion_r87814081
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1069,11 +1110,19 @@ class Analyzer(
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15763#discussion_r87814868
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1069,11 +1110,19 @@ class Analyzer(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15866
**[Test build #68615 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68615/consoleFull)**
for PR 15866 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15879
**[Test build #68608 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68608/consoleFull)**
for PR 15879 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15879
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15879
**[Test build #68609 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68609/consoleFull)**
for PR 15879 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15879
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68609/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15879
Merged build finished. Test PASSed.
---
Github user aditya1702 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87766961
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
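The diff above touches the param-map dispatch in `pyspark.ml.base.Estimator.fit`. As a rough standalone sketch (a hypothetical stand-in class, not the actual pyspark code), fitting with a list of param maps returns one model per map:

```python
class Estimator(object):
    """Hypothetical stand-in for pyspark.ml.base.Estimator, for illustration only."""

    def _fit(self, dataset, param_map):
        # Stand-in for the real fitting logic; returns a dummy "model".
        return ("model", sorted(param_map.items()))

    def fit(self, dataset, params=None):
        if params is None:
            params = {}
        if isinstance(params, (list, tuple)):
            # One model per param map, mirroring the list comprehension in the diff.
            return [self.fit(dataset, param_map) for param_map in params]
        elif isinstance(params, dict):
            return self._fit(dataset, params)
        else:
            raise ValueError("params must be a param map or a list/tuple of "
                             "param maps, but got %s." % type(params))
```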
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15683
**[Test build #3424 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3424/consoleFull)**
for PR 15683 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15882
**[Test build #68614 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68614/consoleFull)**
for PR 15882 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14612#discussion_r87787158
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -89,6 +89,22 @@ case class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15867
Build started: [Streaming] `org.apache.spark.streaming.JavaAPISuite`
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/15865
@HyukjinKwon Thanks for the review and suggestions; I've updated it. I cleared
the unused object `hasher` and added suppression rules for the `finalize` method
of `NioBufferedFileInputStream`. Please
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87799293
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAggSuite.scala
---
@@ -0,0 +1,284 @@
+/*
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15867
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68616/
Test PASSed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Build started: [CORE] `org.apache.spark.JavaAPISuite`
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15867
**[Test build #68616 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68616/consoleFull)**
for PR 15867 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15882
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15882
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68614/
Test PASSed.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87797412
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15857#discussion_r87809029
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
---
@@ -428,43 +428,47 @@ object FoldablePropagation
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/15880
We can only know if this string is castable at runtime. BTW, other
databases (like MySQL) have special implicit type conversion rules for
constants; should we follow them?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87798899
--- Diff: dev/checkstyle-suppressions.xml ---
@@ -30,6 +30,8 @@
+
--- End diff --
Ah, I thought we could disable it
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87798658
--- Diff: dev/checkstyle-suppressions.xml ---
@@ -30,6 +30,8 @@
+
--- End diff --
Oh, sorry. Actually, I didn't mean
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15857
**[Test build #68617 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68617/consoleFull)**
for PR 15857 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15866
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15866
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68615/
Test PASSed.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15803
Hm, so I looked at how the other UIs work, and they seem not to always be in
GMT. They happen to use the machine's default time zone by way of using a
`SimpleDateFormat` to render times. So it has
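The effect described above (a Java `SimpleDateFormat` renders in the JVM's default time zone unless a zone is set explicitly) can be illustrated with the analogous calls in Python, rendering the same instant in the machine's default zone versus GMT:

```python
import time

# The same instant (the Unix epoch) rendered two ways.
epoch = 0  # 1970-01-01T00:00:00 UTC

local_str = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(epoch))
gmt_str = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(epoch))

print("default zone:", local_str)  # varies with the machine's time zone setting
print("GMT:         ", gmt_str)    # always 1970-01-01 00:00:00
```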
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87799880
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87807344
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15881
Merged build finished. Test PASSed.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87796816
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15880
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15880
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68612/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15880
**[Test build #68612 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68612/consoleFull)**
for PR 15880 at commit
Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15865#discussion_r87799361
--- Diff: dev/checkstyle-suppressions.xml ---
@@ -30,6 +30,8 @@
+
--- End diff --
@HyukjinKwon Also we could try `//
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15865
**[Test build #3425 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3425/consoleFull)**
for PR 15865 at commit
Github user koeninger commented on a diff in the pull request:
https://github.com/apache/spark/pull/15849#discussion_r87795091
--- Diff:
examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredKafkaWordCount.java
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15881
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68613/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15881
**[Test build #68613 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68613/consoleFull)**
for PR 15881 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87796671
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87797485
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15867
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15867
**[Test build #68616 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68616/consoleFull)**
for PR 15867 at commit
Github user nsyca commented on the issue:
https://github.com/apache/spark/pull/15763
@hvanhovell could you please review the latest PR?
---
Github user aditya1702 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87757954
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15879
**[Test build #68608 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68608/consoleFull)**
for PR 15879 at commit
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
@HyukjinKwon Ah, exactly. We have three descriptions of "64MB" for
Scala/Java/Python.
@moomindani Could you fix the remaining two "64MB" descriptions?
---
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/15780
Needs `AssertNotNull()`
[here](https://github.com/kiszk/spark/blob/38991d00cbaa50ffc9d22c54f643ed03e51b4785/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L577)
if
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15840
**[Test build #68607 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68607/consoleFull)**
for PR 15840 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15879
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15655
Merged to 2.0
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15880
**[Test build #68612 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68612/consoleFull)**
for PR 15880 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15880
This might be a bad idea: should we follow the old casting strategy if we
cannot cast from string to an atomic datatype?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15879
Hi @sarutak and @moomindani, I happened to look through this just out of
curiosity.
```
./docs/programming-guide.md:* The `textFile` method also takes an optional
second argument
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15815
LGTM thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15863
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15863
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68606/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15863
**[Test build #68606 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68606/consoleFull)**
for PR 15863 at commit
Github user aditya1702 commented on the issue:
https://github.com/apache/spark/pull/15871
@HyukjinKwon I have updated the code. Could you please take a look?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87787468
--- Diff: docs/running-on-yarn.md ---
@@ -118,19 +118,6 @@ To use a custom metrics.properties for the application
master and executors, upd
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87787674
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application
master and executors, upd
name
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87787316
--- Diff: docs/configuration.md ---
@@ -156,6 +156,13 @@ of the most common options to set are:
+ spark.executor.instances
--- End
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87787683
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application
master and executors, upd
name
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87787755
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application
master and executors, upd
name
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15871#discussion_r87759435
--- Diff: python/pyspark/ml/base.py ---
@@ -59,6 +59,12 @@ def fit(self, dataset, params=None):
return [self.fit(dataset, paramMap) for
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15683
---
GitHub user zhengruifeng opened a pull request:
https://github.com/apache/spark/pull/15881
[SPARK-18434][ML] Add missing ParamValidations for ML algos
## What changes were proposed in this pull request?
Add missing ParamValidations for ML algos
## How was this patch tested?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Thank you, Sean. Actually, this is a bit annoying.
Here is what happens in the original test.
1. It writes a file to be read back by `wholeTextFiles`.
```scala
scala>
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15866
Let me add a comment here and will try to clean up more.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
LGTM
cc: @tgravescs @srowen
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
ok to test.
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/15877
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15879
**[Test build #68611 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68611/consoleFull)**
for PR 15879 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15863
@cloud-fan and @gatorsmile .
For the `DataSource` options issue, I'm working on
[SPARK-18433](https://issues.apache.org/jira/browse/SPARK-18433) for the
followings.
- CSVOptions
-
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15838
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15881#discussion_r87784943
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -171,7 +171,10 @@ class LinearRegression @Since("1.3.0")
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15877#discussion_r87785949
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/CountMinSketchAgg.scala
---
@@ -0,0 +1,131 @@
+/*
+ *
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r87785862
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -667,9 +667,15 @@ object JdbcUtils extends Logging
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
O.K. Merging into `master`, `branch-2.0` and `branch-2.1`.
Thanks @moomindani !
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15877
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68604/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15877
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15879
**[Test build #68611 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68611/consoleFull)**
for PR 15879 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15879
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15878
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15879
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68611/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15878
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68605/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15878
**[Test build #68605 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68605/consoleFull)**
for PR 15878 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/15880
cc @yhuai @gatorsmile
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/15859
@zsxwing can you take another look.
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/15880
[SPARK-17913][SQL] compare long and string type column may return confusing
result
## What changes were proposed in this pull request?
Spark SQL follows MySQL to do the implicit type
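If, as the SPARK-17913 title suggests, a long/string comparison implicitly promotes both sides to a floating-point type, the "confusing result" follows from double precision: a 64-bit double has only a 53-bit mantissa, so distinct 19-digit integers can collapse to the same value. A minimal illustration (Python floats are IEEE 754 doubles; the promotion rule itself is an assumption here, not taken from the truncated PR description):

```python
a = 1111111111111111111    # fits in a 64-bit long
b = "1111111111111111112"  # a string differing in the last digit

# Exact integer comparison distinguishes the two values.
print(a == int(b))          # False

# But promoted to double, both round to the same representable value,
# so the comparison "confusingly" succeeds.
print(float(a) == float(b))  # True
```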
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15877
**[Test build #68610 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68610/consoleFull)**
for PR 15877 at commit
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/15882
[SPARK-18400][STREAMING] NPE when resharding Kinesis Stream
## What changes were proposed in this pull request?
Avoid NPE in KinesisRecordProcessor when shutdown happens without
successful