Github user rxin commented on the issue:
https://github.com/apache/spark/pull/12004
I've pointed this out before, and I'll say it again: FWIW, I really don't see
what this pull request is trying to accomplish
---
If your project is set up for it, you can reply to this email and have your
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15192
The original support for the array type was added in
https://github.com/apache/spark/pull/9662. Could you resolve the conflicts and
reproduce the error on the latest code base?
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16589
merged to master and branch-2.1.
@shivaram let me know if you have anything specific about this file re:
your comment
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15467#discussion_r96696307
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ColumnarBatchScan.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16624
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71608/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16624
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16624
**[Test build #71608 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71608/testReport)**
for PR 16624 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16589
---
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/15125
LGTM. @srowen, can you recommend an MLlib committer to review these
changes? I'm not familiar with that team.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16623
@wangmiao1981 could you close this PR? Backports don't get closed
automatically.
---
Github user hustfxj commented on the issue:
https://github.com/apache/spark/pull/16450
@srowen masterUrl may have a default, but the user's program configuration
should take higher priority. The user's configuration can be delivered via
the REST API's "sparkProperties" parameter.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16605
Hi, @maropu .
In the current master with your example, we can do the following. What do
you think about this?
```scala
scala> import scala.collection.mutable.WrappedArray
```
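The snippet above is cut off in this archive. As a rough, self-contained sketch of the Scala mechanism this discussion likely concerns (an assumption; no Spark dependency needed): an `Array` passed where a `Seq` is expected is implicitly wrapped (`WrappedArray` on Scala 2.12, `ArraySeq` on 2.13), which is why Scala UDFs over array columns conventionally declare `Seq` parameters rather than `Array`:

```scala
// Standalone sketch (not the truncated snippet above): Scala's implicit
// array wrapping, the mechanism behind passing array-typed columns to
// functions declared over Seq parameters.
object WrappedArrayDemo {
  def sumSeq(xs: Seq[Int]): Int = xs.sum

  def main(args: Array[String]): Unit = {
    val arr = Array(1, 2, 3)
    // An Array is implicitly wrapped into a Seq view
    // (WrappedArray on Scala 2.12, ArraySeq on 2.13).
    val asSeq: Seq[Int] = arr
    assert(sumSeq(arr) == 6)   // the Array is accepted where Seq is expected
    assert(asSeq == Seq(1, 2, 3))
  }
}
```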
Github user hustfxj commented on the issue:
https://github.com/apache/spark/pull/16450
If we submit the application via the Spark REST API, we transmit the
configuration through sparkProperties, like this:
```
curl -X POST
```
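The curl command above is truncated. For context, a hedged sketch of the JSON body such a submission request to the standalone master's REST endpoint (`/v1/submissions/create`) can carry; this API is internal and undocumented, the paths and class names below are illustrative only, and field details vary by Spark version:

```json
{
  "action": "CreateSubmissionRequest",
  "appResource": "hdfs://path/to/app.jar",
  "mainClass": "com.example.Main",
  "appArgs": ["arg1"],
  "clientSparkVersion": "2.1.0",
  "environmentVariables": { "SPARK_ENV_LOADED": "1" },
  "sparkProperties": {
    "spark.master": "spark://master:6066",
    "spark.app.name": "Example",
    "spark.jars": "hdfs://path/to/app.jar",
    "spark.driver.supervise": "false"
  }
}
```

The `sparkProperties` map is the piece the comment refers to: user-supplied configuration delivered with the submission itself.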
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16605#discussion_r96691353
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -84,7 +86,9 @@ case class ScalaUDF(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16634
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71606/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16634
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16634
**[Test build #71606 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71606/consoleFull)**
for PR 16634 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16609
If this is just for the storage tab, something like `storageName` could be
enough? As @felixcheung said, R users might not know what an RDD is, so I'd
avoid introducing that in the name
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71604/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16557
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16514
**[Test build #71610 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71610/testReport)**
for PR 16514 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16557
**[Test build #71604 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71604/testReport)**
for PR 16557 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16514
retest this please
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16630
Jenkins, test this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16620
**[Test build #3540 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3540/testReport)**
for PR 16620 at commit
Github user jsoltren commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96681179
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala
---
@@ -157,4 +162,47 @@ class ExecutorsListener(storageStatusListener:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16346
**[Test build #71609 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71609/testReport)**
for PR 16346 at commit
Github user jsoltren commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96681207
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala
---
@@ -157,4 +162,47 @@ class ExecutorsListener(storageStatusListener:
Github user jsoltren commented on the issue:
https://github.com/apache/spark/pull/16346
The test failures are due to s/nodeId/hostId/ in the JSON code but not the
tests. This is passing locally now with the changes I'm about to push to this
PR. Thanks.
---
Github user jsoltren commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96681232
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -88,6 +91,15 @@ class BlacklistTrackerSuite extends
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16346
the test failures look real and related to the changes you've made; can you
take a look?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Merged build finished. Test PASSed.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
Is there a feature flag that determines whether we use this new
approach? I feel it would be good to have an internal feature flag to select
the code path. So, if there is something wrong that
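The pattern being suggested can be sketched in a few lines; this is a hypothetical illustration (the conf key and helper names below are invented, not Spark's actual `SQLConf` API): gate the new code path behind a boolean configuration entry, on by default, so it can be switched off if something goes wrong.

```scala
// Hypothetical sketch of an internal feature flag selecting a code path.
object FeatureFlagDemo {
  final case class Conf(entries: Map[String, String]) {
    // Look up a boolean entry, falling back to the given default.
    def getBoolean(key: String, default: Boolean): Boolean =
      entries.get(key).map(_.toBoolean).getOrElse(default)
  }

  def runQuery(conf: Conf): String = {
    // Flag name invented for illustration; defaults to the new path.
    val useNewPath = conf.getBoolean("spark.sql.newCodePath.enabled", default = true)
    if (useNewPath) "new code path" else "old code path"
  }

  def main(args: Array[String]): Unit = {
    assert(runQuery(Conf(Map.empty)) == "new code path") // on by default
    assert(runQuery(Conf(Map("spark.sql.newCodePath.enabled" -> "false")))
      == "old code path")                                // escape hatch
  }
}
```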
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16633
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16633
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71605/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71603/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71603 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71603/testReport)**
for PR 16593 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16633
**[Test build #71605 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71605/testReport)**
for PR 16633 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71602/
Test PASSed.
---
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96673145
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71602 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71602/testReport)**
for PR 16593 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16624
**[Test build #71608 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71608/testReport)**
for PR 16624 at commit
Github user Krimit commented on a diff in the pull request:
https://github.com/apache/spark/pull/16607#discussion_r96672243
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala
---
@@ -320,14 +341,29 @@ object Word2VecModel extends
MLReadable[Word2VecModel] {
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96668414
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -88,6 +91,15 @@ class BlacklistTrackerSuite extends SparkFunSuite
Github user miketrewartha commented on the issue:
https://github.com/apache/spark/pull/15192
@sureshthalamati Any plans to get the conflicts fixed here and get things
merged?
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96667288
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala
---
@@ -157,4 +162,47 @@ class ExecutorsListener(storageStatusListener:
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16346#discussion_r96667888
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala
---
@@ -157,4 +162,47 @@ class ExecutorsListener(storageStatusListener:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16634
**[Test build #71606 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71606/consoleFull)**
for PR 16634 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16557
**[Test build #71607 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71607/testReport)**
for PR 16557 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16634
[SPARK-16968][SQL][Backport-2.0] Add additional options in jdbc when
creating a new table
### What changes were proposed in this pull request?
This PR is to backport the PRs
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/16629
I don't think it's a problem to make disabling the cache configurable, as
long as it's on by default. I don't think the additional static constructors in
kafka utils are necessary, are they?
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16585
@rxin @cloud-fan Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16611
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71601/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16611
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16611
**[Test build #71601 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71601/testReport)**
for PR 16611 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16585
thanks, merging to master!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16585
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16633
**[Test build #71605 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71605/testReport)**
for PR 16633 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16557
**[Test build #71604 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71604/testReport)**
for PR 16557 at commit
Github user alunarbeach commented on the issue:
https://github.com/apache/spark/pull/16339
Thanks Team. Deleting the branch.
---
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/16633
[SPARK-19274][SQL] Make GlobalLimit without shuffling data to single
partition
## What changes were proposed in this pull request?
A logical `Limit` is performed actually by two physical
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/16557
Jenkins, retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71603 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71603/testReport)**
for PR 16593 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71602 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71602/testReport)**
for PR 16593 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/12739
@bomeng, do you mind pointing out the PR that fixes this issue, if
possible?
---
Github user wesm commented on the issue:
https://github.com/apache/spark/pull/15821
Shall we update this PR to the latest and solicit involvement from
Spark committers?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/12004
(Continuing email thread): Yes, try `./dev/test-dependencies.sh
--replace-manifest`
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16552
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16552
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71599/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16552
**[Test build #71599 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71599/testReport)**
for PR 16552 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16387#discussion_r96634742
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalAppendOnlyMap.scala
---
@@ -192,12 +193,16 @@ class ExternalAppendOnlyMap[K, V, C](
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16308
I went through all of the changes, LGTM overall; left some minor comments.
My main concern is that the newly added tests in `DateFunctionsSuite` make it
very hard to reason about the expected
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12064
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71598/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12064
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12064
**[Test build #71598 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71598/testReport)**
for PR 12064 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96631900
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -103,6 +150,27 @@ class DateFunctionsSuite extends QueryTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96631616
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -91,6 +107,37 @@ class DateFunctionsSuite extends QueryTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96631280
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -69,11 +70,26 @@ class DateFunctionsSuite extends QueryTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96631159
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -869,6 +870,30 @@ class DataFrameSuite extends QueryTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96630662
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -655,6 +655,11 @@ object SQLConf {
.booleanConf
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96630541
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala
---
@@ -137,7 +138,8 @@ abstract class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96630262
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -314,7 +315,7 @@ object FileFormatWriter
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96630066
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -139,27 +140,15 @@ class QueryExecution(val sparkSession:
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96629902
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -200,8 +189,10 @@ class QueryExecution(val sparkSession:
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96629686
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -104,7 +105,8 @@ case class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15467
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71597/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15467
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16586
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16586
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71595/
Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96629536
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -250,6 +252,8 @@ class Dataset[T] private[sql](
val hasMoreData =
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15467
**[Test build #71597 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71597/testReport)**
for PR 15467 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96629481
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -260,6 +264,10 @@ class Dataset[T] private[sql](
case binary:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16586
**[Test build #71595 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71595/testReport)**
for PR 16586 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96629155
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -177,7 +177,7 @@ class Column(val expr: Expression) extends Logging {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16585
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71596/
Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96628885
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/util/DateTimeUtilsSuite.scala
---
@@ -177,180 +177,186 @@ class DateTimeUtilsSuite
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16585
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16585
**[Test build #71596 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71596/testReport)**
for PR 16585 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16308#discussion_r96628249
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -238,7 +297,8 @@ class