Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/18834
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user raajay commented on the issue:
https://github.com/apache/spark/pull/18690
@jerryshao My CustomSink has the report function defined. What I did not
have was an equivalent of JmxReporter defined in my CustomSink. The reporter
essentially periodically invokes the report
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80219 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80219/testReport)**
for PR 18831 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131285941
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinalsSuite.scala
---
@@ -1,66 +0,0 @@
-/*
- *
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/18837
@skonto @susanxhuynh Please review.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18837
Can one of the admins verify this patch?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80220 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80220/testReport)**
for PR 18742 at commit
GitHub user liu-zhaokun opened a pull request:
https://github.com/apache/spark/pull/18838
[SPARK-21632] There is no need to make attempts for createDirectory if the
dir already exists
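The idea in the PR title can be illustrated with a small sketch. This is a hypothetical Python helper, not Spark's actual `Utils.createDirectory` code: if the directory already exists, return immediately instead of spending retry attempts on it.

```python
import os
import tempfile

def create_directory(path, max_attempts=10):
    """Hypothetical sketch: skip the retry loop entirely when `path` already
    exists as a directory, instead of burning attempts on it."""
    if os.path.isdir(path):
        return path  # nothing to do: the directory is already there
    for _ in range(max_attempts):
        try:
            os.makedirs(path)
            return path
        except OSError:
            continue  # e.g. lost a race with another process; try again
    raise IOError("failed to create directory: %s" % path)
```

The early-exit check is what makes repeated calls cheap no-ops rather than failed creation attempts.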
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18829
I tend to agree with @ajbozarth; since we already have APIs to access the
metrics dump in JSON format, this does not look necessary. Also, directly
displaying such a JSON dump on the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80226 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80226/testReport)**
for PR 18742 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18640
**[Test build #80221 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80221/testReport)**
for PR 18640 at commit
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18832
Thanks @sethah.
I strongly think we should update the comment or just delete it, as the
current PR does.
Another reason is: there are three kinds of features: categorical, ordered
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18820
ok to test
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18839
Some tests on the string form of the plan might fail. Let's see...
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18839
[SPARK-21634][SQL] Change OneRowRelation from a case object to case class
## What changes were proposed in this pull request?
OneRowRelation is the only plan that is a case object, which causes
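The general distinction at stake (a single shared `case object` versus a `case class` whose instances are independent) has a loose Python analogue. This is an illustrative sketch only, not Spark's code; one consequence of a shared singleton is that it cannot carry per-instance state.

```python
class OneRowRelationSingleton:
    """Loose analogue of a Scala case object: one shared instance."""
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

class OneRowRelationClass:
    """Loose analogue of a Scala case class: each call builds a fresh
    instance, so per-instance state is possible."""
    def __init__(self):
        self.state = {}
```

Every reference to the singleton is the same object, while each `OneRowRelationClass()` call yields a distinct instance with its own state.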
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r131308903
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18840#discussion_r131309601
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/EventTimeWatermarkSuite.scala
---
@@ -391,6 +391,30 @@ class EventTimeWatermarkSuite
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18838
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18460
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18460
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80223/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18460
**[Test build #80223 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80223/testReport)**
for PR 18460 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131299271
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18640
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18640
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80221/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18395
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18395
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80222/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80228 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80228/testReport)**
for PR 18746 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18395
**[Test build #80222 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80222/testReport)**
for PR 18395 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80228/
Test PASSed.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18829
The HBase web UI has metrics, so the Spark web UI should also have this
function. This is just my opinion.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80228 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80228/testReport)**
for PR 18746 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80226 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80226/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@10110346 Since using `resolveOperators` can fix the whole bug, let's do that
and simplify the change. Sorry for the confusion.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18839
**[Test build #80231 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80231/testReport)**
for PR 18839 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131308502
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309163
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309076
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18764
There are test failures in pyspark.ml.tests with Python 2.6, but I don't have
the environment.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80230/
Test PASSed.
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18813
ping @cloud-fan Do you have time to look at this? Thanks.
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18815
OK, then I'm really confused: if the logs we're talking about can already be
viewed in the UI, why do we need to display their location on the system?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18690
So I think if you want to connect your custom sink to the Spark metrics
system, you should at least follow what Spark and the Codahale metrics library
do. Adding a feature in Spark specifically
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131298714
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131300093
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
Do we need to backport this to branch-2.2? I think the opinion depends
on the backport decision. If not, I'm with your suggestion (keep this issue as
a blocker for branch-2.3).
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80230 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80230/testReport)**
for PR 18742 at commit
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18829
I think if we really want these metrics in the UI, we should look at adding
them to the UI in some way rather than as a link to a JSON dump. I am not a fan
of JSON dumps as part of a UI in general; I
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131308421
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80230 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80230/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18840
**[Test build #80233 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80233/testReport)**
for PR 18840 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131309985
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131311047
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,19 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80225 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80225/testReport)**
for PR 18746 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131285585
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -1,54 +0,0 @@
-/*
- *
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18820
Hi @nchammas, while you are here, could you trigger the Jenkins build?
It looks like I still have some problems triggering it.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80219/
Test PASSed.
GitHub user ArtRand opened a pull request:
https://github.com/apache/spark/pull/18837
[Spark-20812] Add secrets support to the dispatcher
## What changes were proposed in this pull request?
Mesos has secrets primitives for environment- and file-based secrets; this
PR adds that
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
Sorry @10110346, would you mind trying to see whether
https://github.com/apache/spark/pull/18779#discussion_r131287607 works to solve
all the issues?
If it does, that may be simpler.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18795
**[Test build #3878 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3878/testReport)**
for PR 18795 at commit
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18829
I don't understand the point of this PR: why do we want links to JSON dumps
of information available in the UI? If users want those dumps, they can
already access them via the API; I don't
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18795
Merged to master/2.2
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/18814
(test comment to test PR dashboard linking)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18640
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18460
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17980
Retest this please.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18623#discussion_r131283705
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -132,7 +132,7 @@ case class DataSource(
Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/18820
Jenkins test this please.
(Let's see if I still have the magic power.)
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131287820
--- Diff: python/pyspark/ml/util.py ---
@@ -237,6 +300,13 @@ def _load_java_obj(cls, clazz):
java_obj = getattr(java_obj, name)
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131284503
--- Diff: python/pyspark/ml/tests.py ---
@@ -1957,6 +1964,46 @@ def test_chisquaretest(self):
self.assertTrue(all(field in fieldNames for
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131285744
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -471,3 +471,24 @@ private[ml] object MetaAlgorithmReadWrite {
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131286314
--- Diff: python/pyspark/ml/util.py ---
@@ -61,20 +66,74 @@ def _randomUID(cls):
@inherit_doc
-class MLWriter(object):
+class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
Well, maybe we should revisit this after #17770 gets merged, because after
that we won't go through analyzed plans anymore.
At that time, we can simply solve all the issues by making
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Merged build finished. Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80218 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80218/testReport)**
for PR 18831 at commit
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/18733#discussion_r131268294
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -112,16 +112,16 @@ class CrossValidator @Since("1.2.0")
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80218/
Test FAILed.
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16158#discussion_r131270741
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -133,7 +134,10 @@ class CrossValidator @Since("1.2.0")
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/18815
I'm not a fan of this. I remember talking about surfacing this a while ago,
and we decided it was a bad idea, but we may have been talking about
surfacing the log, not the path. Either way
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18795
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18395
**[Test build #80222 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80222/testReport)**
for PR 18395 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18460
**[Test build #80223 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80223/testReport)**
for PR 18460 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17980
**[Test build #80224 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80224/testReport)**
for PR 17980 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Merged build finished. Test PASSed.
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131286674
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinalsSuite.scala
---
@@ -1,66 +0,0 @@
-/*
- *
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80225 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80225/testReport)**
for PR 18746 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80219 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80219/testReport)**
for PR 18831 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80225/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Merged build finished. Test PASSed.
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18831
Thanks for your comments, Felix.
Addressed all issues.
@yanboliang Could you take a quick look?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80220/
Test PASSed.
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131290132
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -528,7 +541,15 @@ class AstBuilder(conf: SQLConf)
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131290115
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -245,14 +246,26 @@ class AstBuilder(conf: SQLConf)
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18746
@ajaysaini725 Is there a JIRA for this PR? Please tag this PR in the title.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131265462
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -471,3 +471,26 @@ private[ml] object MetaAlgorithmReadWrite {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80220 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80220/testReport)**
for PR 18742 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18640
**[Test build #80221 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80221/testReport)**
for PR 18640 at commit