Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r133261665
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
self.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80693 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80693/testReport)**
for PR 18918 at commit
[`97a3270`](https://github.com/apache/spark/commit/9
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18950
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, …
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18942
@poplav it looks good
@gatorsmile Do you think it is ok for backport now? The previous commit
included unnecessary changes.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18950
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80690/
Test PASSed.
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133254340
--- Diff: python/pyspark/sql/tests.py ---
@@ -2507,6 +2507,37 @@ def test_to_pandas(self):
self.assertEquals(types[2], np.bool)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18950
**[Test build #80690 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80690/testReport)**
for PR 18950 at commit
[`d3f8162`](https://github.com/apache/spark/commit/d
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133255672
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -912,6 +912,14 @@ object SQLConf {
.intConf
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@kiszk wouldn't the updated release notes/docs take care of that, i.e. which
configs can no longer be used and which can? I don't mind adding a warning
msg saying please use another cache.size inst
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/11494
kindly ping @yzotov
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18640#discussion_r133255040
--- Diff: sql/core/pom.xml ---
@@ -87,6 +87,16 @@
+ org.apache.orc
+ orc-core
+ ${orc.classifier}
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/16648
kindly ping @bdrillard
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
@redsanket I am thinking about the case where the same configuration file,
which explicitly sets a value (e.g. 4096) for
`spark.shuffle.service.index.cache.entries`, is used in Spark 2.3.
The user
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon , retest this please :)
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133249207
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends Que
Github user omalley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18640#discussion_r133248648
--- Diff: sql/core/pom.xml ---
@@ -87,6 +87,16 @@
+ org.apache.orc
+ orc-core
+ ${orc.classifier}
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18949
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18930
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80688/
Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18947
@viirya Could you close it? Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18930
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18421
This is just to make it consistent with the partition spec in our current
INSERT statement. Could you justify why we need to make them inconsistent?
Thanks!
Also cc @sameeragarwal
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18930
**[Test build #80688 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80688/testReport)**
for PR 18930 at commit
[`ab16929`](https://github.com/apache/spark/commit/a
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18849
**[Test build #80694 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80694/testReport)**
for PR 18849 at commit
[`4a05b55`](https://github.com/apache/spark/commit/4a
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18266
Users should be allowed to specify the schema from the table properties by
using DDL-like strings.
---
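The DDL-like schema strings mentioned above can be sketched with a toy parser. This is an illustration only, not Spark's parser; Spark's real handling (e.g. `StructType.fromDDL` on the Scala side) is far richer.

```python
def parse_ddl_schema(schema_str):
    # Toy illustration: split a DDL-like string such as
    # "eventTime TIMESTAMP, id INT" into (name, type) pairs.
    fields = []
    for field in schema_str.split(","):
        name, type_name = field.split(None, 1)
        fields.append((name, type_name.strip()))
    return fields
```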
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80693 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80693/testReport)**
for PR 18918 at commit
[`97a3270`](https://github.com/apache/spark/commit/97
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239924
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class JD
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239835
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class JD
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class JD
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r133236830
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1175,6 +1205,27 @@ private[spark] class HiveExternalCatalog(con
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/18855
Funny enough, that's the approach I've chosen.
On Aug 15, 2017 19:17, "Marcelo Vanzin" wrote:
> *@vanzin* commented on this pull request.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r133236101
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -342,6 +359,12 @@ private[spark] class HiveExternalCatalog(conf:
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133235997
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -268,4 +275,44 @@ class OracleI
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80685/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18488
**[Test build #80685 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80685/testReport)**
for PR 18488 at commit
[`fbdc599`](https://github.com/apache/spark/commit/f
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133235311
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -197,11 +197,13 @@ class DataFrameReader private[sql](sparkSession:
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@kiszk I don't think that would be ideal; it is better to backport the
feature itself to a desired version or branch. Having two conflicting configs
for the same task is not ideal, if that is what
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18266
The example in the PR description looks a little bit confusing.
```Scala
val dfRead = spark.read.schema(schema).jdbc(jdbcUrl,
  "tableWithCustomSchema", new Properties())
```
C
Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/18926
It's cleaner but less specific. Unless we branch on whether `startPos` and
`length` are the same type, we will give the same error message for mixed types
and for unsupported types. That seems like
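The distinction nchammas draws (mixed types vs. unsupported types) can be sketched as follows. This is an illustration only, not the actual PySpark implementation; the function name is hypothetical.

```python
def check_substr_args(startPos, length):
    # Branch on whether the two arguments share a type so that
    # mixed-type and unsupported-type calls get distinct messages.
    if type(startPos) is not type(length):
        raise TypeError("startPos and length must be the same type, got %s and %s"
                        % (type(startPos).__name__, type(length).__name__))
    if not isinstance(startPos, int):
        # A real implementation would also accept Column; omitted in this sketch.
        raise TypeError("Unsupported argument type: %s" % type(startPos).__name__)
```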
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18943
**[Test build #80692 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80692/testReport)**
for PR 18943 at commit
[`b1e49fa`](https://github.com/apache/spark/commit/b1
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18951
Can one of the admins verify this patch?
---
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/18951
[SPARK-21738] Thriftserver doesn't cancel jobs when session is closed
## What changes were proposed in this pull request?
When a session is closed, the Thriftserver doesn't cancel the jobs
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18943
retest this please
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18943#discussion_r133233495
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala
---
@@ -21,9 +21,9 @@ package org.apache.spark.examp
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18855#discussion_r133232857
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -1415,6 +1415,79 @@ class BlockManagerSuite extends SparkFunSuite with
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18926
```Python
if isinstance(startPos, int) and isinstance(length, int):
    jc = self._jc.substr(startPos, length)
elif isinstance(startPos, Column) and isinstance(length, Column):
    jc = self._jc.substr(startPos._jc, length._jc)
```
Github user debugger87 commented on the issue:
https://github.com/apache/spark/pull/18649
@dilipbiswal
Thanks for your reply. In my eyes, there are some mechanisms or
configurations to control the number of open files generated by a SQL
operation, e.g.:
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18907
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
Thanks! Merging to master.
Hit conflicts when trying to merge to the previous versions.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
LGTM
---
Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133229705
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -912,6 +912,14 @@ object SQLConf {
.intConf
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18622
@srowen any comment on this PR?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80689/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #80689 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80689/testReport)**
for PR 17373 at commit
[`eedc647`](https://github.com/apache/spark/commit/e
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18329
@zsxwing @tdas any comment on this? Thanks.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
nit: title should be "`[SPARK-21501] ...`".
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
I like this feature.
For backward compatibility, how about referring to
`spark.shuffle.service.index.cache.entries` only if
`spark.shuffle.service.index.cache.entries` is explicitly declared.
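The backward-compatibility fallback kiszk suggests can be sketched as below. This is a minimal illustration, not Spark code; the replacement key name is assumed for the example.

```python
LEGACY_KEY = "spark.shuffle.service.index.cache.entries"
NEW_KEY = "spark.shuffle.service.index.cache.size"  # assumed replacement key

def resolve_index_cache_setting(conf, default="100m"):
    # Honor the legacy key only when the user declared it explicitly;
    # otherwise fall back to the new key (or its default).
    if LEGACY_KEY in conf:
        return ("entries", conf[LEGACY_KEY])
    return ("size", conf.get(NEW_KEY, default))
```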
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18946
---
Github user mbasmanova commented on the issue:
https://github.com/apache/spark/pull/18421
ping @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18940
**[Test build #80691 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80691/testReport)**
for PR 18940 at commit
[`e9afdf7`](https://github.com/apache/spark/commit/e9
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18946
LGTM
Thanks! Merging to master
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18947
LGTM
Merging to 2.1
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18907
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80682/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18907
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18907
**[Test build #80682 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80682/testReport)**
for PR 18907 at commit
[`0a18435`](https://github.com/apache/spark/commit/0
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@dbolshak there were no unit tests for the Google cache implementation here
before. I could add a simple test to check for cache behavior if it is
necessary, but ideally a scale test is necessary to un
Github user redsanket commented on a diff in the pull request:
https://github.com/apache/spark/pull/18940#discussion_r133220047
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockResolver.java
---
@@ -104,15 +105,22 @@ public Extern
Github user thunterdb commented on the issue:
https://github.com/apache/spark/pull/18798
Thank you @yanboliang.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18950
**[Test build #80690 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80690/testReport)**
for PR 18950 at commit
[`d3f8162`](https://github.com/apache/spark/commit/d3
GitHub user dhruve opened a pull request:
https://github.com/apache/spark/pull/18950
[SPARK-20589][Core][Scheduler] Allow limiting task concurrency per job group
## What changes were proposed in this pull request?
This change allows the user to specify the maximum no. of tasks ru
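The general idea of capping concurrent tasks per job group can be sketched with a counting semaphore. This is an illustration of the concept only, not the Spark scheduler's actual mechanism; the class name is hypothetical.

```python
import threading

class JobGroupLimiter:
    # Caps the number of tasks in flight for one job group.
    def __init__(self, max_concurrent_tasks):
        self._sem = threading.Semaphore(max_concurrent_tasks)

    def run(self, task):
        # Blocks while max_concurrent_tasks tasks are already running.
        with self._sem:
            return task()
```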
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18786
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18786
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80687/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18786
**[Test build #80687 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80687/testReport)**
for PR 18786 at commit
[`9c9f0f6`](https://github.com/apache/spark/commit/9
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #80689 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80689/testReport)**
for PR 17373 at commit
[`eedc647`](https://github.com/apache/spark/commit/ee
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17373
Jenkins, test this please.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133209362
--- Diff: python/pyspark/sql/tests.py ---
@@ -2507,6 +2507,37 @@ def test_to_pandas(self):
self.assertEquals(types[2], np.bool)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133202748
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends Qu
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80684/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #80684 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80684/testReport)**
for PR 17373 at commit
[`eedc647`](https://github.com/apache/spark/commit/e
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133200977
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -362,12 +362,12 @@ case class JsonTuple(childr
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18930
**[Test build #80688 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80688/testReport)**
for PR 18930 at commit
[`ab16929`](https://github.com/apache/spark/commit/ab
Github user poplav commented on the issue:
https://github.com/apache/spark/pull/18942
@kiszk , I updated the PR to remove the `prunePartionsByFilter` bit.
Please let me know now.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18918
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80686/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80686 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80686/testReport)**
for PR 18918 at commit
[`df7ecaa`](https://github.com/apache/spark/commit/d
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18918
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80686 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80686/testReport)**
for PR 18918 at commit
[`df7ecaa`](https://github.com/apache/spark/commit/df
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18786
**[Test build #80687 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80687/testReport)**
for PR 18786 at commit
[`9c9f0f6`](https://github.com/apache/spark/commit/9c
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18918
sorry, Rang not rand
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18798#discussion_r133195248
--- Diff: mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala ---
@@ -0,0 +1,593 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mike0sv commented on a diff in the pull request:
https://github.com/apache/spark/pull/18488#discussion_r133194603
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/ExpressionInfo.java
---
@@ -79,7 +79,7 @@ public ExpressionInfo(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18488
**[Test build #80685 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80685/testReport)**
for PR 18488 at commit
[`fbdc599`](https://github.com/apache/spark/commit/fb
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18926
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18926
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80683/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18926
**[Test build #80683 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80683/testReport)**
for PR 18926 at commit
[`a7fea20`](https://github.com/apache/spark/commit/a
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18930
LGTM
---
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18798#discussion_r133191237
--- Diff: mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala ---
@@ -0,0 +1,593 @@
+/*
+ * Licensed to the Apache Software Foundatio
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133190654
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -362,12 +362,12 @@ case class JsonTuple(childre
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133190497
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -426,10 +426,11 @@ case class JsonTuple(childre