Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17561
@ueshin Please take a look at this PR, thanks.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled…
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17557
**[Test build #75593 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75593/testReport)**
for PR 17557 at commit
[`30949a1`](https://github.com/apache/spark/commit/30
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17561
Can one of the admins verify this patch?
---
Github user shaolinliu closed the pull request at:
https://github.com/apache/spark/pull/17560
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17560
@ueshin I resubmitted the PR; please close this one.
---
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17561
[SPARK-20248][SQL] Spark SQL add limit parameter to enhance the reliability.
## What changes were proposed in this pull request?
Add a parameter "spark.sql.thriftServer.retainedResu
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17560
Yes, I am fixing it, thanks.
---
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/17560
@shaolinliu Can you fix conflicts?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17560
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17552
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17552
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75587/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17552
**[Test build #75587 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75587/testReport)**
for PR 17552 at commit
[`0fbd4a6`](https://github.com/apache/spark/commit/0
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17560
[SPARK-20248][SQL] Spark SQL add limit parameter to enhance the reliability.
## What changes were proposed in this pull request?
Add a parameter "spark.sql.thriftServer.retainedResults" with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17559
**[Test build #75592 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75592/testReport)**
for PR 17559 at commit
[`77896e9`](https://github.com/apache/spark/commit/77
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/17559
[SPARK-20246][SQL] Don't pushdown non-deterministic expression through
Aggregate
## What changes were proposed in this pull request?
import org.apache.spark.sql.functions._
val
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17558
@wangyum What if the task requires that jar? From your fix, what I got is
that you catch the exception and make it a warning log instead, but if that
task requires the jar, will your fix suppress
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317549
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -328,7 +329,7 @@ object PartitioningUtils {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110298557
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala
---
@@ -396,7 +397,7 @@ object Partitioni
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317695
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -222,7 +225,7 @@ case class PreprocessTableCreation(spa
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317441
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -128,7 +128,8 @@ object PartitioningUtils {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110314669
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/StringKeyHashMap.scala
---
@@ -25,7 +27,7 @@ object StringKeyHashMap {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110315394
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -82,8 +84,8 @@ case class OptimizeMetadataOnlyQ
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110314541
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala
---
@@ -26,11 +28,12 @@ package org.apache.spark.sql.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFsRelation.scala
---
@@ -52,7 +54,11 @@ case class HadoopFsRelation(
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110318802
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -134,7 +132,7 @@ case class CostBased
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110318621
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(wei
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
Don't we also need the skip-if-CRAN statement?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17516
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17516
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75589/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17516
**[Test build #75589 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75589/testReport)**
for PR 17516 at commit
[`a3e8b35`](https://github.com/apache/spark/commit/a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17557
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75588/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17557
**[Test build #75588 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75588/testReport)**
for PR 17557 at commit
[`27e94fd`](https://github.com/apache/spark/commit/2
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110318101
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorder
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17222
I'll try and follow up this weekend.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17494
Thanks @holdenk
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17494
LGTM as well
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17222
**[Test build #75591 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75591/testReport)**
for PR 17222 at commit
[`4da2994`](https://github.com/apache/spark/commit/4d
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/17222
@viirya Thanks for the careful review.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17494
Thanks @jkbradley
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17222
LGTM, see if @marmbrus or @holdenk have any more comments about this change.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17222#discussion_r110316824
--- Diff: python/pyspark/sql/tests.py ---
@@ -436,6 +436,20 @@ def test_udf_with_order_by_and_limit(self):
res.explain(True)
self.as
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17558
**[Test build #75590 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75590/testReport)**
for PR 17558 at commit
[`de5b5fe`](https://github.com/apache/spark/commit/de
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17546
This looks pretty good overall.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110316465
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(weight =>
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17558
[SPARK-20247][CORE] Adding a jar that is missing later shouldn't affect
jobs that don't use it
## What changes were proposed in this pull request?
Catch exception when jar is mi
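The pattern this PR describes, downgrading a missing-dependency failure to a warning so that unrelated tasks keep running, can be sketched as follows. This is an illustrative Python sketch, not Spark's actual fix (which lives in the Scala executor code); the `fetch_dependency` name is hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("fetcher")

def fetch_dependency(path):
    """Hypothetical fetcher: warn instead of failing when a
    previously-added dependency has since gone missing."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        # Downgrade the hard failure to a warning so that tasks
        # which never use this dependency can still run.
        logger.warning("Dependency %s is missing; skipping.", path)
        return None
```

As the review thread notes, the open question with this approach is what happens to a task that genuinely needs the missing dependency: swallowing the error there would only move the failure later.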
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17516
**[Test build #75589 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75589/testReport)**
for PR 17516 at commit
[`a3e8b35`](https://github.com/apache/spark/commit/a3
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17557
**[Test build #75588 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75588/testReport)**
for PR 17557 at commit
[`27e94fd`](https://github.com/apache/spark/commit/27
GitHub user zero323 opened a pull request:
https://github.com/apache/spark/pull/17557
[SPARK-20208][WIP][R][DOCS] Document R fpGrowth support
## What changes were proposed in this pull request?
Document fpGrowth in:
- vignettes
- programming guide
- code ex
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15770
Any update on this?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17553#discussion_r110315204
--- Diff: examples/src/main/r/ml/glm.R ---
@@ -56,6 +56,15 @@ summary(binomialGLM)
# Prediction
binomialPredictions <- predict(binomialGLM, bin
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110314839
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/StarJoinCostBasedReorderSuite.scala
---
@@ -0,0 +1,428 @@
+/*
+
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17556
Can one of the admins verify this patch?
---
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110314588
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/StarJoinCostBasedReorderSuite.scala
---
@@ -0,0 +1,428 @@
+/*
+
GitHub user facaiy opened a pull request:
https://github.com/apache/spark/pull/17556
[SPARK-16957][MLlib] Use weighted midpoints for split values.
## What changes were proposed in this pull request?
Use weighted midpoints for split values.
## How was this patch test
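The weighted-midpoint idea can be sketched outside MLlib. Assuming each sorted distinct feature value carries a weight such as its sample count, the candidate split between adjacent values becomes a weighted midpoint rather than the plain average; this is an illustrative reimplementation with one plausible weighting, not Spark's code, and the exact weighting in the PR may differ:

```python
def weighted_midpoint_splits(values, weights):
    """For sorted distinct feature values with sample-count weights,
    return a candidate split threshold between each adjacent pair
    using the weighted midpoint (w1*v1 + w2*v2) / (w1 + w2) instead
    of the plain average (v1 + v2) / 2."""
    splits = []
    for (v1, w1), (v2, w2) in zip(zip(values, weights),
                                  zip(values[1:], weights[1:])):
        splits.append((w1 * v1 + w2 * v2) / (w1 + w2))
    return splits

# With equal weights this reduces to the plain midpoint:
print(weighted_midpoint_splits([0.0, 1.0], [5, 5]))  # [0.5]
```

Under this weighting, a value seen more often pulls the threshold toward itself, e.g. values `[0.0, 1.0]` with weights `[1, 3]` yield a split at `0.75`.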
Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110314369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(wei
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110313675
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -54,14 +54,12 @@ case class CostBasedJoi
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110313661
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -134,7 +132,7 @@ case class CostBasedJoi
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110313369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(weight
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110313349
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(weight
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17552#discussion_r110312633
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -18,39 +18,21 @@ package org.apache.spark.sql.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17552
LGTM pending Jenkins.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17552
**[Test build #75587 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75587/testReport)**
for PR 17552 at commit
[`0fbd4a6`](https://github.com/apache/spark/commit/0f
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17552#discussion_r110311641
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -18,39 +18,21 @@ package org.apache.spark.sql.e
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110309359
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/StarJoinCostBasedReorderSuite.scala
---
@@ -0,0 +1,428 @@
+/*
+ * Lice
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110309073
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/StarJoinCostBasedReorderSuite.scala
---
@@ -0,0 +1,428 @@
+/*
+ * Lice
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110308327
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -736,6 +736,12 @@ object SQLConf {
.checkValue(weight =>
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110307898
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exte
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110307786
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exte
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110307666
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exten
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110306486
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exte
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17555
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17555
Thanks! Merging to master.
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110305903
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exten
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17555
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75586/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17555
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17555
**[Test build #75586 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75586/testReport)**
for PR 17555 at commit
[`6084d95`](https://github.com/apache/spark/commit/6
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Ping @vanzin @tgravescs again. Sorry to bother you, and I really appreciate
your time.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
I see. The current code leverages the `SparkListenerBlockUpdated` event to
calculate memory usage; let me try to investigate the feasibility of using
`taskEnd.taskMetrics.updatedBlocks`, to see if it
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
yeah, we definitely don't want to start logging more events. But it seems
like this info is already available -- taskEnd.taskMetrics.updatedBlocks
already has everything, doesn't it?
---
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17534#discussion_r110303522
--- Diff: docs/monitoring.md ---
@@ -299,12 +299,12 @@ can be identified by their `[attempt-id]`. In the API
listed below, when running
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110303409
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exte
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110302895
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exten
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17546#discussion_r110300420
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala
---
@@ -327,3 +345,104 @@ object JoinReorderDP exte
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Thanks @squito.
Regarding showing memory usage in the history server, my major concern is that
putting so many block update events into the event log will significantly increase
the file size and
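The trade-off under discussion, replaying fine-grained block update events versus relying on per-task metrics, can be modeled in plain Python. This is an illustrative sketch of how a history UI might reconstruct peak storage-memory usage from logged block updates; it is not Spark's listener code, and the event shape here is a simplification:

```python
def replay_block_updates(events):
    """Replay (block_id, mem_size) update events and track the peak
    total memory held by cached blocks. A mem_size of 0 models a
    block being evicted or dropped."""
    current = {}  # block_id -> latest reported memory size
    peak = 0
    for block_id, mem_size in events:
        if mem_size == 0:
            current.pop(block_id, None)  # block no longer in memory
        else:
            current[block_id] = mem_size
        peak = max(peak, sum(current.values()))
    return peak

events = [("rdd_0_0", 100), ("rdd_0_1", 50), ("rdd_0_0", 0), ("rdd_0_2", 30)]
print(replay_block_updates(events))  # 150
```

The concern in the thread is exactly the cost of this approach: accuracy requires logging every block update, which inflates the event log, whereas per-task aggregates are cheaper but coarser.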
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17551
@barnardb Only in Spark standalone mode is the HistoryServer embedded into
the Master process, for convenience, IIRC. You can always start a standalone
HistoryServer process.
Also `FsHistoryProvid
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17553
Could you add [SPARKR] to the PR title, please?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16648
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16648
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75585/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16648
**[Test build #75585 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75585/testReport)**
for PR 16648 at commit
[`320db91`](https://github.com/apache/spark/commit/3
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15009
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75584/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15009
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15009
**[Test build #75584 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75584/testReport)**
for PR 15009 at commit
[`0cfd4a7`](https://github.com/apache/spark/commit/0
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17555
**[Test build #75586 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75586/testReport)**
for PR 17555 at commit
[`6084d95`](https://github.com/apache/spark/commit/60
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17555
[SPARK-19495][SQL] Make SQLConf slightly more extensible - addendum
## What changes were proposed in this pull request?
This is a tiny addendum to SPARK-19495 to remove the private visibility for
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17554
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17554
Thanks - merging in master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17552
LGTM except for one comment
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17552#discussion_r110287008
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -18,39 +18,21 @@ package org.apache.spark.sql.
Github user Yunni commented on the issue:
https://github.com/apache/spark/pull/17092
Ping.
---