Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18499
retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18819
+1
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18836
@BoleynSu Do you want to continue the PR, or would you like us to take it over?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18790
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18790
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80215/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18790
**[Test build #80215 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80215/testReport)**
for PR 18790 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18833
**[Test build #80214 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80214/testReport)**
for PR 18833 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
It might help to document this in the Dataset `groupBy` comment.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18814
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80211/
Test PASSed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18786
Is it too late to change the Scala side output format? I suspect it doesn't
matter too much on Scala/Python which order they are in and preserving the
existing order in R could be helpful.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131211342
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -413,7 +414,10 @@ private[hive] class HiveClientImpl(
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18281#discussion_r131215571
--- Diff: python/pyspark/ml/param/_shared_params_code_gen.py ---
@@ -152,6 +152,8 @@ def get$Name(self):
("varianceCol", "column name for
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131194926
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -50,6 +50,7 @@ private[hive]
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18814
Merged build finished. Test PASSed.
---
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/12646
@gatorsmile Hello Xiao, can you help retest this? Thanks
---
Github user BoleynSu commented on the issue:
https://github.com/apache/spark/pull/18836
@gatorsmile I am not familiar with the PR process; it would be great if you
could take it over. Thanks.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131198143
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -413,7 +414,10 @@ private[hive] class HiveClientImpl(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18814
**[Test build #80211 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80211/testReport)**
for PR 18814 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18786
I see. I recall the method name discussion, though changing the API and/or
output format is something we generally want to avoid. Something like this has
been called out in past releases as we
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18835
**[Test build #80212 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80212/testReport)**
for PR 18835 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18499
**[Test build #80216 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80216/testReport)**
for PR 18499 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18836
Thanks for fixing this. Please follow the contribution guideline.
Also, you need to add a test case. You can follow what we did in this PR:
https://github.com/apache/spark/pull/17339
Github user BoleynSu commented on the issue:
https://github.com/apache/spark/pull/18836
A test case to make the existing code fail.
@srowen I am sorry that this pull request is not well formatted, but I just
wanted to help.
```scala
import org.apache.spark.sql.SparkSession
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131210013
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -616,15 +616,24 @@ private[spark] class
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18831#discussion_r131209057
--- Diff: R/pkg/R/mllib_regression.R ---
@@ -125,7 +127,7 @@ setClass("IsotonicRegressionModel", representation(jobj
= "jobj"))
#' @seealso
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/18797
Thanks! Waiting for the AFT test code author to figure out how to modify the
test case.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18836
You didn't read the link above, I take it?
http://spark.apache.org/contributing.html
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18835
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80212/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18835
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18836
Can one of the admins verify this patch?
---
Github user BenFradet commented on the issue:
https://github.com/apache/spark/pull/18797
@srowen there shouldn't be any issue with removing the first row of the
test data afaict.
---
GitHub user BoleynSu opened a pull request:
https://github.com/apache/spark/pull/18836
Update SortMergeJoinExec.scala
fix a bug in outputOrdering
## What changes were proposed in this pull request?
Change `case Inner` to `case _: InnerLike` so that Cross will be
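The fix above hinges on Scala pattern matching over a sealed hierarchy. The sketch below is a simplified, hypothetical model of the idea (the cut-down `JoinType` hierarchy and the `preservesOrdering`-style functions are invented for illustration; this is not Spark's actual `SortMergeJoinExec` code): matching the single case object `Inner` silently excludes `Cross`, while a type pattern on the parent trait `InnerLike` covers both.

```scala
// Simplified, hypothetical model of a join-type hierarchy, for illustration only.
sealed trait JoinType
sealed trait InnerLike extends JoinType
case object Inner extends InnerLike
case object Cross extends InnerLike
case object LeftOuter extends JoinType

object JoinOrderingDemo {
  // Buggy version: the single case object only matches Inner, so Cross
  // falls through to the default branch.
  def buggy(jt: JoinType): Boolean = jt match {
    case Inner => true
    case _     => false
  }

  // Fixed version: a type pattern on the parent trait matches every
  // InnerLike join, i.e. both Inner and Cross.
  def fixed(jt: JoinType): Boolean = jt match {
    case _: InnerLike => true
    case _            => false
  }

  def main(args: Array[String]): Unit = {
    println(buggy(Cross)) // false: the bug
    println(fixed(Cross)) // true: the fix
  }
}
```

Because the hierarchy is sealed, the compiler can warn about non-exhaustive matches, but a `case _` default branch silences that warning, which is how this kind of omission can slip through review.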
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18819
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18797
Yeah, the only issue is that the test set is generated and used in several
tests. Maybe we can just see if changing it works for all callers.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18790
**[Test build #80215 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80215/testReport)**
for PR 18790 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80213 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80213/testReport)**
for PR 18831 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80213/
Test FAILed.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/17849
If params are defined in the PySpark model, when that model is fit a Scala
version is created then the PySpark model is wrapped around it. The param
values from the Scala version are never
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/18820#discussion_r131208895
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1423,8 +1434,9 @@ def all_of_(xs):
subset = [subset]
# Verify we were
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18831#discussion_r131211844
--- Diff: R/pkg/R/mllib_regression.R ---
@@ -159,10 +161,16 @@ setMethod("spark.glm", signature(data =
"SparkDataFrame", formula = "formula"),
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18795
**[Test build #3877 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3877/testReport)**
for PR 18795 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18836
@BoleynSu Sure, I can do it. Will give all the credits to you. Please
continue to help us report new issues or fixes. Thanks!
---
Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r131227296
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18824
**[Test build #80217 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80217/testReport)**
for PR 18824 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18824
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80217/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18795
**[Test build #3877 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3877/testReport)**
for PR 18795 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18824
I reworked the patch to try to merge the "create table" and "alter table"
paths, so they both do the translation the same way.
There are still some test failures but I wanted to get this up
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18833
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80214/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18833
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18824
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18824
**[Test build #80217 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80217/testReport)**
for PR 18824 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/18828#discussion_r131250896
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -181,17 +181,38 @@ abstract class QueryPlan[PlanType <:
Github user bravo-zhang commented on the issue:
https://github.com/apache/spark/pull/18820
What if the field is not nullable? I did a test:
```
val rows = spark.sparkContext.parallelize(Seq(
Row("Bravo", 28, 183.5),
Row("Jessie", 18, 165.8)))
val
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131262008
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131262239
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80218 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80218/testReport)**
for PR 18831 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131222861
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,53 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131258120
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1987,24 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131223795
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,53 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r13190
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,53 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r13122
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,53 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131258476
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1987,24 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r131257864
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1987,24 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18499
**[Test build #80216 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80216/testReport)**
for PR 18499 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/18836#discussion_r131239029
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -82,7 +82,7 @@ case class SortMergeJoinExec(
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/18833
@maropu that only works for literals. I am sort-of in favor of the Hive
default; it seems kinda bad to bring down a job because of a negative value.
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131260736
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131264975
--- Diff: python/pyspark/ml/util.py ---
@@ -283,3 +289,124 @@ def numFeatures(self):
Returns the number of features the model was trained on.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131261798
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131261087
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user bravo-zhang commented on the issue:
https://github.com/apache/spark/pull/18820
Hey @nchammas I made the logic much simpler.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18833
**[Test build #80214 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80214/testReport)**
for PR 18833 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18499
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80216/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18499
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80227 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80227/testReport)**
for PR 18668 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18815
OK. So the master log, worker log, and executor log can be displayed in the web UI?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17980
**[Test build #80224 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80224/testReport)**
for PR 17980 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80224/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18820
**[Test build #80229 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80229/testReport)**
for PR 18820 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80226/
Test PASSed.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131305760
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
OK, thanks @viirya
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80232 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80232/testReport)**
for PR 18779 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18840
Can one of the admins verify this patch?
---
GitHub user joseph-torres opened a pull request:
https://github.com/apache/spark/pull/18840
[SPARK-21565] Propagate metadata in attribute replacement.
## What changes were proposed in this pull request?
Propagate metadata in attribute replacement during streaming execution.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80227/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80227 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80227/testReport)**
for PR 18668 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Merged build finished. Test PASSed.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18840
ok to test
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309398
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131311220
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
I have accepted your comments, thank you.
Adding these important metrics to the web UI is no small amount of work, but I
will try to do it.
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131265094
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -471,3 +471,26 @@ private[ml] object MetaAlgorithmReadWrite {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18795
**[Test build #3878 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3878/testReport)**
for PR 18795 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18824
FYI, I'm rebuilding the environment where I found the bug, to see why the
code was failing even with the exception handler. I'll update the bug if
necessary.
---
Github user vanzin closed the pull request at:
https://github.com/apache/spark/pull/18824
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18824
I updated the bug; let me close this for now while I figure out why that
exception is happening.
---