Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13736
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64051/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13320
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13736
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13320
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64052/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13320
**[Test build #64052 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64052/consoleFull)**
for PR 13320 at commit
[`6440370`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13736
**[Test build #64051 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64051/consoleFull)**
for PR 13736 at commit
[`7ccd981`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14716
**[Test build #64057 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64057/consoleFull)**
for PR 14716 at commit
[`19cb3ad`](https://github.com/apache/spark/commit/1
GitHub user yanboliang opened a pull request:
https://github.com/apache/spark/pull/14716
[SPARK-17141] [ML] MinMaxScaler should retain NaN values.
## What changes were proposed in this pull request?
```MinMaxScaler``` should retain ```NaN``` values.
## How was this pa
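The proposed behavior can be sketched outside Spark. Below is a minimal Python illustration (not Spark's actual `MinMaxScaler` code; `min_max_scale` is a hypothetical helper) of min-max scaling that passes `NaN` through unchanged instead of mapping it onto the output range:

```python
import math

def min_max_scale(values, new_min=0.0, new_max=1.0):
    """Rescale finite values to [new_min, new_max]; NaN passes through."""
    finite = [v for v in values if not math.isnan(v)]
    lo, hi = min(finite), max(finite)
    span = (hi - lo) if hi != lo else 1.0  # guard against constant columns
    return [v if math.isnan(v)
            else (v - lo) / span * (new_max - new_min) + new_min
            for v in values]

# min_max_scale([1.0, float("nan"), 3.0]) -> [0.0, nan, 1.0]
```

The key point SPARK-17141 argues for is the `v if math.isnan(v)` branch: the scaler's min/max are computed from finite values only, and `NaN` survives the transform rather than being silently coerced.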
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14715
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64056/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14715
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14715
**[Test build #64056 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64056/consoleFull)**
for PR 14715 at commit
[`6d1c52f`](https://github.com/apache/spark/commit/
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75446103
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -200,22 +375,77 @@ private[spark] class HiveExternalCatalog(cl
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14181#discussion_r75446046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -148,6 +148,21 @@ class SimpleTestOptimizer extends Opti
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75445428
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -584,13 +579,8 @@ case class AlterTableSetLocationCommand(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14714
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14715
**[Test build #64056 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64056/consoleFull)**
for PR 14715 at commit
[`6d1c52f`](https://github.com/apache/spark/commit/6
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75445214
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -264,10 +261,8 @@ case class AlterTableUnsetPropertiesCommand(
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14666#discussion_r75444995
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,33 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .sparkREnv)
sc
}
+
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75444916
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -233,226 +229,21 @@ case class CreateDataSour
GitHub user jagadeesanas2 opened a pull request:
https://github.com/apache/spark/pull/14715
[SPARK-17085] [Streaming] [Documentation and actual code differs -
Unsupported Operations]
You can merge this pull request into a Git repository by running:
$ git pull https://github.c
GitHub user jianran opened a pull request:
https://github.com/apache/spark/pull/14714
Paged JdbcRDD for MySQL-style `LIMIT start, pageSize` queries
## What changes were proposed in this pull request?
New feature: a JdbcRDD that pages through results using MySQL `LIMIT` queries.
## How was this patch tested?
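The pagination idea can be sketched as query generation. This is an illustrative helper (names are hypothetical, not the PR's actual API): one MySQL-style `LIMIT offset, pageSize` query per would-be partition.

```python
def page_queries(table, total_rows, page_size):
    """Generate one MySQL-style paged query per would-be RDD partition."""
    return [
        f"SELECT * FROM {table} LIMIT {start},{page_size}"
        for start in range(0, total_rows, page_size)
    ]

# page_queries("t", 25, 10) ->
#   ['SELECT * FROM t LIMIT 0,10',
#    'SELECT * FROM t LIMIT 10,10',
#    'SELECT * FROM t LIMIT 20,10']
```

One design caveat worth noting: MySQL evaluates `LIMIT offset, n` by scanning and discarding `offset` rows, so deep pages get progressively slower; keyset pagination on an indexed column avoids that cost.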
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75444695
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -97,16 +92,17 @@ case class CreateDataSourceT
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14038
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14038
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64049/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14038
**[Test build #64049 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64049/consoleFull)**
for PR 14038 at commit
[`d53ad8e`](https://github.com/apache/spark/commit/
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14674#discussion_r75444301
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -282,6 +282,9 @@ private[spark] class SecurityManager(sparkConf:
SparkConf)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14452
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64045/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14452
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14452
**[Test build #64045 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64045/consoleFull)**
for PR 14452 at commit
[`e094c14`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14181
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64048/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14181
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14181
**[Test build #64048 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64048/consoleFull)**
for PR 14181 at commit
[`c947583`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14181
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64046/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14181
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14181
**[Test build #64046 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64046/consoleFull)**
for PR 14181 at commit
[`b0f5dd5`](https://github.com/apache/spark/commit/
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14118
I believe we can change the default value of `nullValue` to
`'\u'.toString` so that no value is treated as `null`. I remember
this matches no empty string nor any other string, alth
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14700
**[Test build #3226 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3226/consoleFull)**
for PR 14700 at commit
[`24bcf05`](https://github.com/apache/spark/commit
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75441096
--- Diff: R/pkg/R/DataFrame.R ---
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame functions
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/10896
Could you update?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14118
@rxin Please let me leave my thoughts on why it looked good to me, in
case it is helpful.
Yes, but we should set `nullValue` for writing `null`. So, I think, setting
`""` for `nullVa
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14666
**[Test build #64055 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64055/consoleFull)**
for PR 14666 at commit
[`54fe8a9`](https://github.com/apache/spark/commit/5
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14475
The change looks simple & good. Left a couple of minor comments.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14475#discussion_r75440822
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -31,6 +34,9 @@
* of the file format).
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14709
**[Test build #64054 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64054/consoleFull)**
for PR 14709 at commit
[`442918f`](https://github.com/apache/spark/commit/4
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14666#discussion_r75440848
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,33 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .sparkREnv)
sc
}
+
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14475#discussion_r75440799
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -50,7 +56,21 @@ public UnsafeSorterSpillReader(
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75440356
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,146 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
function(object
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14709#discussion_r75440152
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LocalRelation.scala
---
@@ -75,4 +76,16 @@ case class LocalRelation(outp
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14709#discussion_r75440135
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LocalRelation.scala
---
@@ -75,4 +76,16 @@ case class LocalRelation(outp
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14713
**[Test build #64053 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64053/consoleFull)**
for PR 14713 at commit
[`82935a7`](https://github.com/apache/spark/commit/8
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14705
Hmm, I'm beginning to think we could do this:
```
generics.R
setGeneric("spark.naiveBayes", function(data, formula, ...) {
standardGeneric("spark.naiveBayes") })
```
```
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75439649
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extends Rule
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75439413
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +95,12 @@ abstract class LogicalPlan extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/10896
**[Test build #3228 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3228/consoleFull)**
for PR 10896 at commit
[`572db4c`](https://github.com/apache/spark/commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75438886
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
---
@@ -32,5 +32,11 @@ package org.apache.spark.sql.catalys
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75438847
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extends Ru
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/10896
**[Test build #3228 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3228/consoleFull)**
for PR 10896 at commit
[`572db4c`](https://github.com/apache/spark/commit/
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/10896
okay, thanks!!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75438681
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/10896
Yeah, let's pick this up again. Thanks for the ping.
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75438629
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +95,12 @@ abstract class LogicalPlan extends
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75438540
--- Diff: R/pkg/R/DataFrame.R ---
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame functions
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75438304
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extends Rule
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
But keep in mind: currently even spark.master can change while in-flight
(the Spark Scala code doesn't seem to prevent that), so we could get some very
wrong values. I'm not sure that is super r
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/12790
@BryanCutler @MechCoder The current fix of removing the default value for
the ```stages``` param is OK with me. But we should also discuss the behavior of
```stages=[]```, which is inconsistent bet
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14384
test this please
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75437875
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extend
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
It's all of the Runtime Config from the current active SparkSession which
includes all SparkConf.
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.SparkSess
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437776
--- Diff: R/pkg/R/functions.R ---
@@ -2276,9 +2276,8 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64043/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64043 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64043/consoleFull)**
for PR 14639 at commit
[`fef88cd`](https://github.com/apache/spark/commit/
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14713#discussion_r75437433
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1208,17 +1208,27 @@ object PushDownPredicate extends
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437423
--- Diff: R/pkg/R/functions.R ---
@@ -1335,7 +1336,7 @@ setMethod("rtrim",
#' @note sd since 1.6.0
setMethod("sd",
signature(x = "Co
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14713
LGTM
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437149
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT survival
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75436918
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +95,12 @@ abstract class LogicalPlan extends
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/10896
@hvanhovell ping
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13320
**[Test build #64052 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64052/consoleFull)**
for PR 13320 at commit
[`6440370`](https://github.com/apache/spark/commit/6
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75436727
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -95,6 +95,12 @@ abstract class LogicalPlan ext
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r75436746
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
---
@@ -32,5 +32,11 @@ package org.apache.spark.sql.catalyst
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14705
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64047/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14705
**[Test build #64047 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64047/consoleFull)**
for PR 14705 at commit
[`870279a`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14705
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13736
**[Test build #64051 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64051/consoleFull)**
for PR 13736 at commit
[`7ccd981`](https://github.com/apache/spark/commit/7
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14712
**[Test build #3227 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3227/consoleFull)**
for PR 14712 at commit
[`4375e76`](https://github.com/apache/spark/commit/
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14118
What if I am explicitly writing an empty string out? Does it just become
`1,,2`?
---
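The ambiguity rxin is pointing at can be reproduced with plain CSV (a toy sketch using Python's `csv` module, not Spark's writer; `write_row` is a hypothetical helper): once nulls are written as empty fields, an explicit empty string and a null serialize to the same bytes.

```python
import csv
import io

def write_row(row, null_value=""):
    """Write one CSV row, encoding None as null_value (here: empty)."""
    buf = io.StringIO()
    csv.writer(buf).writerow([null_value if v is None else v for v in row])
    return buf.getvalue().strip()

# Both produce "1,,2" -- a reader cannot tell them apart:
#   write_row([1, None, 2])
#   write_row([1, "", 2])
```

This is why the thread gravitates toward a `nullValue` sentinel distinct from the empty string: with `null_value=""`, round-tripping loses the null/empty-string distinction.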
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
Does this API return only the Spark SQL configurations, or does it include SparkConf as well?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14713
**[Test build #64050 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64050/consoleFull)**
for PR 14713 at commit
[`9be428a`](https://github.com/apache/spark/commit/9
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14583
I'm fixing this differently here: https://github.com/apache/spark/pull/14713
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/14713
[SPARK-16994][SQL] Whitelist operators for predicate push down
## What changes were proposed in this pull request?
This patch changes predicate push down optimization rule
(PushDownPredicate) from
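Why the rule needs a whitelist rather than pushing through every operator can be shown with a toy example (plain Python lists standing in for logical plans; not Spark's actual optimizer code): a filter commutes with some operators but not with a limit.

```python
data = [1, 2, 3, 4, 5]
pred = lambda x: x > 2

# Filter above Limit: take 2 rows, then filter.
filter_over_limit = [x for x in data[:2] if pred(x)]   # []

# Filter pushed below Limit: filter, then take 2 rows.
filter_under_limit = [x for x in data if pred(x)][:2]  # [3, 4]

# The two plans disagree, so pushing a filter beneath a limit is
# unsound; the optimizer may only push through whitelisted operators.
```

The whitelist approach makes the safe set explicit, so a newly added operator is conservatively left alone until someone proves the rewrite preserves semantics for it.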