Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16828
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16828
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72488/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16828
**[Test build #72488 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72488/testReport)**
for PR 16828 at commit
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r99760937
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -298,6 +299,8 @@ class DataFrameReader private[sql](sparkSession:
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r99760946
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala
---
@@ -31,10 +31,11 @@ import
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16740
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72487/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16740
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16740
**[Test build #72487 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72487/testReport)**
for PR 16740 at commit
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/16809
thanks a lot! It seems the point of adding a REFRESH command is to leave the
default behavior unchanged; if users want to refresh, they call the command manually.
@gatorsmile @sameeragarwal @hvanhovell
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16800#discussion_r99755967
--- Diff: R/pkg/inst/tests/testthat/test_mllib_classification.R ---
@@ -27,6 +27,44 @@ absoluteSparkPath <- function(x) {
file.path(sparkHome, x)
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16800#discussion_r99755912
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/LinearSVCWrapper.scala
---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16800
**[Test build #72492 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72492/testReport)**
for PR 16800 at commit
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16800#discussion_r99755781
--- Diff: R/pkg/R/generics.R ---
@@ -1376,6 +1376,10 @@ setGeneric("spark.kstest", function(data, ...) {
standardGeneric("spark.kstest")
#'
Github user budde commented on the issue:
https://github.com/apache/spark/pull/16797
> BTW, what behavior do we expect if a parquet file has two columns whose
lower-cased names are identical?
I can take a look at how Spark handled this prior to 2.1, although I'm not
sure if
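The duplicate-column question above can be illustrated with a small, hypothetical sketch (plain Python, not Spark's actual resolution code): group column names by their lower-cased form and report any group with more than one distinct spelling.

```python
# Hypothetical sketch of detecting Parquet columns whose lower-cased names
# collide. The helper and names are illustrative, not Spark internals.
from collections import defaultdict

def find_case_collisions(columns):
    """Return lower-cased names that map to more than one original spelling."""
    groups = defaultdict(set)
    for name in columns:
        groups[name.lower()].add(name)
    return {k: sorted(v) for k, v in groups.items() if len(v) > 1}

print(find_case_collisions(["userId", "userid", "name"]))
# {'userid': ['userId', 'userid']}
```

Under case-insensitive resolution, any non-empty result is ambiguous: a query for `userid` could refer to either physical column.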
Github user budde commented on the issue:
https://github.com/apache/spark/pull/16797
> how about we add a new SQL command to refresh the table schema in
metastore by inferring schema with data files? This is a compatibility issue
and we should have provided a way for users to
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/16376
Awesome, always enthusiastic about fixing minor nits!! I merged this into
master. I didn't merge it into 2.1, but I don't feel strongly about it.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16376
---
Github user gczsjdy commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r99753353
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -20,8 +20,12 @@ package
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16497
**[Test build #72491 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72491/testReport)**
for PR 16497 at commit
Github user tanejagagan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16497#discussion_r99753046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -125,10 +139,17 @@ case class
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16758
I addressed all the comments. However, @zsxwing, our offline discussion of
throwing an error on `.update(null)` ran into a problem. Since it's typed as S, the
behavior is odd when S is a primitive type. See
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16758
**[Test build #72490 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72490/testReport)**
for PR 16758 at commit
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/16797
The proposal to restore schema inference with finer grained control on when
it is performed sounds reasonable to me. The case I'm most interested in is
turning off schema inference entirely,
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16827
Working on UT failure.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72486/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72486 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72486/testReport)**
for PR 16827 at commit
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/16828
cc @gatorsmile @cloud-fan
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16376
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72484/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16476
**[Test build #72489 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72489/testReport)**
for PR 16476 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16376
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16828
**[Test build #72488 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72488/testReport)**
for PR 16828 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16376
**[Test build #72484 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72484/testReport)**
for PR 16376 at commit
GitHub user windpiger opened a pull request:
https://github.com/apache/spark/pull/16828
[SPARK-19484][SQL]continue work to create hive table with an empty schema
## What changes were proposed in this pull request?
after
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
It will reduce the function call on
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16762#discussion_r99749089
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -213,7 +213,12 @@ abstract class SparkStrategies extends
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99749006
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/KeyedStateImpl.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99749020
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -313,6 +313,56 @@ case class MapGroups(
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748948
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16762#discussion_r99748912
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastNestedLoopJoinExec.scala
---
@@ -339,6 +340,33 @@ case class
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748834
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -46,8 +46,13 @@ object
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16762#discussion_r99748730
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastNestedLoopJoinExec.scala
---
@@ -339,6 +340,33 @@ case class
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748711
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -235,3 +234,79 @@ case class StateStoreSaveExec(
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748668
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -58,6 +58,8 @@ trait StateStore {
*/
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748496
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748491
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748329
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748256
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
---
@@ -111,6 +111,25 @@ class
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748308
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99748260
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/MapGroupsWithStateSuite.scala
---
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99747470
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/MapGroupsWithStateSuite.scala
---
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99747455
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/MapGroupsWithStateSuite.scala
---
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99747354
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/KeyedStateImpl.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16686
@zsxwing please merge if you think your concerns were addressed correctly.
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16686
LGTM!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16762
Cross join is a physical concept. In Spark 2.0, we detected it the way this
PR does. In Spark 2.1, we moved the detection into the Optimizer. Basically, this PR
changes it back.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16815
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16740
@sethah Thanks for the comments.
OK, added more tests to cover all families. It's not possible to test every
family and link combination, if that's what you mean: the tweedie family
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16740
**[Test build #72487 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72487/testReport)**
for PR 16740 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16815
LGTM. Merging to master and 2.1.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72485/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72485 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72485/testReport)**
for PR 16827 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72483 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72483/testReport)**
for PR 16827 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16827
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72483/
Test FAILed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16747
CC @rxin, if we are going to expose `CalendarInterval` and
`CalendarIntervalType` officially, shall we move `CalendarInterval` to the same
package as `Decimal`, or create a new class as the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72481/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16744
**[Test build #72481 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72481/testReport)**
for PR 16744 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72480/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16744
**[Test build #72480 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72480/testReport)**
for PR 16744 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16747
Then, it looks okay to me as a description of the current state. I just
checked it after building the docs with this, and also
we can already use it as below:
```scala
scala>
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72486 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72486/testReport)**
for PR 16827 at commit
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16827
retest this please.
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16827#discussion_r99743099
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -779,6 +781,30 @@ private[spark] object SparkConf extends Logging {
}
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16737#discussion_r99742193
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMOptions.scala ---
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16737#discussion_r99742158
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/text/TextSuite.scala
---
@@ -125,6 +124,25 @@ class TextSuite extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16795
@srowen and @liancheng
Could you review this PR again?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16795#discussion_r99740767
--- Diff: resource-managers/mesos/pom.xml ---
@@ -49,6 +49,13 @@
+ org.apache.spark
+
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16815
yea! - I found this earlier but forgot to track it.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72485 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72485/testReport)**
for PR 16827 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16795
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16795
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72479/
Test PASSed.
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16827#discussion_r99739390
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -779,6 +781,31 @@ private[spark] object SparkConf extends Logging {
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16795
**[Test build #72479 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72479/testReport)**
for PR 16795 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16376
**[Test build #72484 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72484/testReport)**
for PR 16376 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16827
**[Test build #72483 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72483/testReport)**
for PR 16827 at commit
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16376
yes, I think this is ready (I just noticed a couple of minor nits with a
fresh read but no real changes)
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16656
cc @tdas also.
---
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16827#discussion_r99738940
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -779,6 +781,30 @@ private[spark] object SparkConf extends Logging {
}
GitHub user uncleGen opened a pull request:
https://github.com/apache/spark/pull/16827
[SPARK-19482][CORE] Fail it if 'spark.master' is set with different value
## What changes were proposed in this pull request?
First, there is no need to set 'spark.master' multiple times
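The behavior proposed in SPARK-19482 can be sketched as a toy config object (an illustration in plain Python, not Spark's actual `SparkConf`): a key may be set again to the same value, but setting it to a different value fails fast.

```python
# Minimal sketch of the SPARK-19482 idea: reject re-setting a config key
# with a conflicting value. Illustrative only; not Spark's SparkConf.
class StrictConf:
    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        # Re-setting to the same value is a harmless no-op; a different
        # value indicates a likely misconfiguration, so fail loudly.
        if key in self._settings and self._settings[key] != value:
            raise ValueError(
                f"{key} is already set to {self._settings[key]!r}; "
                f"refusing to overwrite with {value!r}")
        self._settings[key] = value
        return self

conf = StrictConf().set("spark.master", "local[2]")
conf.set("spark.master", "local[2]")   # same value: allowed
try:
    conf.set("spark.master", "yarn")   # conflicting value: raises
except ValueError as e:
    print(e)
```

Failing fast here surfaces conflicting settings (e.g. code vs. submit-time flags) at configuration time instead of producing a silently surprising master URL.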
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16797
If the use case suggested by @budde, where we want to infer the schema but not
attempt to write it back as a table property, makes sense, then the new SQL
command approach might not work for it. But
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16747
Actually `CalendarInterval` is already exposed to users, e.g. we can call
`collect` on a DataFrame with `CalendarIntervalType` field, and get rows
containing `CalendarInterval`. We don't support
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16171
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72482/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16171
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16171
**[Test build #72482 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72482/testReport)**
for PR 16171 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16762
is CROSS JOIN a logical or physical concept?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16795
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72473/
Test FAILed.
---