Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18877#discussion_r132583520
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/ChildProcAppHandle.java ---
@@ -118,14 +116,40 @@ void setChildProc(Process childProc, String
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18872
ok to test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18872
**[Test build #80508 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80508/testReport)**
for PR 18872 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18872
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18872
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80508/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18872
**[Test build #80508 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80508/testReport)**
for PR 18872 at commit
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18912
[SQL] Remove unused getTableOption in ExternalCatalog
## What changes were proposed in this pull request?
This patch removes the unused SessionCatalog.getTableMetadataOption and
ExternalCatalog.getTableOption.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18519
**[Test build #80509 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80509/testReport)**
for PR 18519 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18877
**[Test build #80511 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80511/testReport)**
for PR 18877 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18912
**[Test build #80510 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80510/testReport)**
for PR 18912 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18912
LGTM except the PR title
---
Github user bOOm-X commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r132593478
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -39,98 +40,114 @@ import org.apache.spark.util.Utils
* has
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r132593586
--- Diff: resource-managers/mesos/pom.xml ---
@@ -29,7 +29,7 @@
Spark Project Mesos
mesos
-1.0.0
+1.3.0
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18519
**[Test build #80503 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80503/testReport)**
for PR 18519 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18519
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18519
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80503/
Test PASSed.
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/18902
Hi @zhengruifeng Thanks for the idea and implementation. Definitely
something worth exploring.
As I understand it, the new implementation improves locality, yet it
leverages the RDD API
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/18913
[SPARK-21563][CORE] Fix race condition when serializing TaskDescriptions
and adding jars
## What changes were proposed in this pull request?
Fix the race condition when serializing
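The kind of race this PR title describes can be sketched as follows. This is a hedged illustration only, not Spark's actual code: the class and field names (`TaskDescriptions`, `_added_jars`, `serialize_task`) are assumptions. The idea is that serializing shared mutable state (such as a map of added jars) while another thread mutates it can observe a half-updated map; snapshotting under a lock before serializing avoids that.

```python
import copy
import pickle
import threading

# Hypothetical sketch of a copy-before-serialize fix for a serialization race.
# Names are illustrative assumptions, not Spark's real fields.
class TaskDescriptions:
    def __init__(self):
        self._lock = threading.Lock()
        self._added_jars = {}  # shared mutable state: jar name -> timestamp

    def add_jar(self, name, timestamp):
        # Mutations happen under the lock.
        with self._lock:
            self._added_jars[name] = timestamp

    def serialize_task(self, task_id):
        # Take a stable snapshot under the lock...
        with self._lock:
            jars_snapshot = copy.copy(self._added_jars)
        # ...then serialize outside the lock, so concurrent add_jar calls
        # cannot change the map mid-serialization.
        return pickle.dumps({"task": task_id, "jars": jars_snapshot})
```

The design point is that the lock is held only for the cheap copy, not for the (potentially slow) serialization itself.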
Github user ajaysaini725 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1#discussion_r132595823
--- Diff: python/pyspark/ml/tests.py ---
@@ -1142,6 +1142,35 @@ def test_nested_pipeline_persistence(self):
except OSError:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18913
**[Test build #80512 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80512/testReport)**
for PR 18913 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18843
**[Test build #80504 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80504/testReport)**
for PR 18843 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18843
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80504/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18843
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18724
**[Test build #80505 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80505/testReport)**
for PR 18724 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18724
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80505/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18724
Merged build finished. Test FAILed.
---
Github user ajaysaini725 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1#discussion_r132597367
--- Diff: python/pyspark/ml/pipeline.py ---
@@ -242,3 +327,65 @@ def _to_java(self):
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r132597784
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -529,18 +570,120 @@
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/1
**[Test build #80513 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80513/testReport)**
for PR 1 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18897
**[Test build #80494 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80494/testReport)**
for PR 18897 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132509593
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +149,75 @@ class WholeStageCodegenSuite
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18421
This looks great!
Regarding this https://github.com/apache/spark/pull/18421/files#r132493566,
could we just follow Hive? That is consistent with what we are doing in INSERT.
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18907#discussion_r132514182
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -412,6 +412,11 @@ case class CatalogRelation(
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/15770
@WeichenXu123 Thanks for reviewing! I will address the comments soon.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18897
**[Test build #80496 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80496/testReport)**
for PR 18897 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18908
Can one of the admins verify this patch?
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18897
> have the capability to safely shut down the spark application
That's not true; that works just like "yarn kill". This is the code that
does it in `DriverRunner.scala`, which is where the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r132526607
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/Hive_2_1_DDLSuite.scala
---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18897
I think the kill is to more cleanly shut down on the YARN side of things.
If you yarn kill an application, it doesn't set the history
appropriately, etc. It's also just nice to have one
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18897
Yes, I'd really like a more thorough explanation of the RPC changes. Once
that's sorted out, adding something like a "kill" command should be trivial.
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/17849#discussion_r132530521
--- Diff: python/pyspark/ml/tests.py ---
@@ -417,6 +417,54 @@ def test_logistic_regression_check_thresholds(self):
LogisticRegression,
GitHub user susanxhuynh opened a pull request:
https://github.com/apache/spark/pull/18910
[SPARK-21694][MESOS] Support Mesos CNI network labels
JIRA ticket: https://issues.apache.org/jira/browse/SPARK-21694
## What changes were proposed in this pull request?
Spark
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18910
**[Test build #80498 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80498/testReport)**
for PR 18910 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18910
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18910
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80498/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17849
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17849
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80499/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17849
**[Test build #80499 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80499/testReport)**
for PR 17849 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18861
@gatorsmile ping
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18874
To answer a few of your last questions.
It doesn't hurt the common case: in the common case all your executors have
tasks on them as long as there are tasks to run. Normally the scheduler can
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r132474881
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -39,98 +40,114 @@ import org.apache.spark.util.Utils
* has
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18895
**[Test build #80490 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80490/testReport)**
for PR 18895 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r132525593
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/Hive_2_1_DDLSuite.scala
---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r132530396
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1175,6 +1205,27 @@ private[spark] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18884
**[Test build #80493 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80493/testReport)**
for PR 18884 at commit
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/17849
Thanks for reviewing @viirya and @HyukjinKwon !
Btw, the temporary fix I talk about here is an optional addition to this PR
to allow users to access model param values this way
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/17849#discussion_r132541048
--- Diff: python/pyspark/ml/wrapper.py ---
@@ -144,7 +158,9 @@ def _transfer_params_from_java(self):
if
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18874
so I think the issue with the locality is that it resets the time (3s
wait) whenever it schedules any task at the particular locality level (in this
case node local) on any node. So it can take
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r132527085
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/Hive_2_1_DDLSuite.scala
---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/17849#discussion_r132531194
--- Diff: python/pyspark/ml/tests.py ---
@@ -1572,7 +1588,8 @@ def test_java_params(self):
for name, cls in inspect.getmembers(module,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18884
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18884
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80493/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18910
**[Test build #80498 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80498/testReport)**
for PR 18910 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17849
**[Test build #80499 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80499/testReport)**
for PR 17849 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18875
**[Test build #80500 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80500/testReport)**
for PR 18875 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18892
**[Test build #80483 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80483/testReport)**
for PR 18892 at commit
Github user caneGuy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18901#discussion_r132419551
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -366,6 +366,17 @@ object SparkSubmit extends CommandLineUtils {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18899
**[Test build #80485 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80485/testReport)**
for PR 18899 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18892
**[Test build #80486 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80486/testReport)**
for PR 18892 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18899
**[Test build #80485 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80485/testReport)**
for PR 18899 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18899
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80485/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18899
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18810
**[Test build #80481 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80481/testReport)**
for PR 18810 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r132507914
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -39,98 +40,114 @@ import org.apache.spark.util.Utils
* has
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132509287
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +149,75 @@ class WholeStageCodegenSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18907
**[Test build #80495 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80495/testReport)**
for PR 18907 at commit
Github user ProtD commented on the issue:
https://github.com/apache/spark/pull/18872
I added the unit test, please review.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80489/
Test PASSed.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18786#discussion_r132514011
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2973,15 +2974,48 @@ setMethod("describe",
dataFrame(sdf)
})
+#' summary
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18786#discussion_r132504096
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2973,15 +2974,48 @@ setMethod("describe",
dataFrame(sdf)
})
+#' summary
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18786#discussion_r132503973
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2973,15 +2974,48 @@ setMethod("describe",
dataFrame(sdf)
})
+#' summary
GitHub user coyotehills opened a pull request:
https://github.com/apache/spark/pull/18908
[SPARK-21644][SQL] fix LocalLimit.maxRows
## What changes were proposed in this pull request?
Since `LocalLimit` is only about partition level limits, the max output
rows should be
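The partition-level distinction behind this PR can be sketched plainly. This is an illustrative model only, not Spark's `LocalLimit` operator: a local limit caps rows per partition, so the dataset as a whole may still emit up to limit × numPartitions rows, whereas a global limit caps the total.

```python
# Toy model of local vs global limits over partitioned data
# (illustrative; not Spark's actual operators).
def local_limit(partitions, limit):
    # Cap each partition independently; total output can reach
    # limit * len(partitions) rows.
    return [p[:limit] for p in partitions]

def global_limit(partitions, limit):
    # Cap the total row count across all partitions.
    flat = [row for p in partitions for row in p]
    return flat[:limit]
```

This is why a local limit's maximum output row count cannot simply equal the limit value itself, which is the distinction the PR description appears to be drawing.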
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/17849#discussion_r132518262
--- Diff: python/pyspark/ml/wrapper.py ---
@@ -135,6 +135,20 @@ def _transfer_param_map_to_java(self, pyParamMap):
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18421
**[Test build #80492 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80492/testReport)**
for PR 18421 at commit
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/17849#discussion_r132522347
--- Diff: python/pyspark/ml/wrapper.py ---
@@ -263,7 +284,8 @@ def _fit_java(self, dataset):
def _fit(self, dataset):
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18899
**[Test build #80517 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80517/testReport)**
for PR 18899 at commit
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/18899#discussion_r132610165
--- Diff: project/MimaExcludes.scala ---
@@ -1012,6 +1012,10 @@ object MimaExcludes {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18913
**[Test build #80512 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80512/testReport)**
for PR 18913 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18913
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80512/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18913
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18810
**[Test build #80518 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80518/testReport)**
for PR 18810 at commit
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132610543
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +149,75 @@ class WholeStageCodegenSuite
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18899
Hi @sethah , the unit test is added. Thanks
---
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132610861
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable logging or
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132611153
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,14 @@ case class
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132611346
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable logging or
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132611289
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +150,56 @@ class WholeStageCodegenSuite extends
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132611417
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +150,56 @@ class WholeStageCodegenSuite extends
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18810
LGTM except for a few comments.
---