Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95261483
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -283,8 +284,17 @@ object SparkSubmit extends CommandLineUtils {
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16464
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71093/
Test PASSed.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95261994
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -974,23 +967,102 @@ private[spark] object SparkSubmitUtils {
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16464
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
Good catch. Looks good to me.
@vanzin The RPC layer only guarantees at-most-once. Retry may still be
helpful in some cases, but the receiver should be idempotent. Either the
current change
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
> That was the case with akka (I think, not really sure), but the netty RPC
layer doesn't drop messages. The new one is "exactly once".
It doesn't drop but the connection may be broken.
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
> It doesn't drop but the connection may be broken
In which case the executor will die (see
`CoarseGrainedExecutorBackend::onDisconnected`).
---
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16518
Merging to branch-2.1. Thanks!
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16522
[SPARK-19137][SQL][SS] Garbage left in source tree after SQL tests ran
## What changes were proposed in this pull request?
`DataStreamReaderWriterSuite` makes test files in source
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16523
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71101/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16523
Merged build finished. Test FAILed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Oh, I see. Then I'll look inside the `temp folder` generation code and fix
that.
Thank you for the review, @vanzin and @zsxwing .
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16523
**[Test build #71101 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71101/testReport)**
for PR 16523 at commit
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16377#discussion_r95275614
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/tree/impl/RandomForestSuite.scala ---
@@ -176,6 +203,18 @@ class RandomForestSuite extends SparkFunSuite
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16377#discussion_r95182716
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
@@ -828,8 +828,27 @@ private[spark] object RandomForest extends Logging {
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16377#discussion_r95183469
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/tree/impl/RandomForestSuite.scala ---
@@ -161,6 +161,33 @@ class RandomForestSuite extends SparkFunSuite
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16249
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71098/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16249
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16520
Merged build finished. Test PASSed.
---
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95281046
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16522#discussion_r95281973
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -94,7 +94,13 @@ private[sql] trait SQLTestUtils
*/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15018
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71087/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16521
**[Test build #71095 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71095/testReport)**
for PR 16521 at commit
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/16454
I just posted a fix for this as well. I'll close that one in favor of this
one and add comments here about what it did differently that we should
consider.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16514#discussion_r95258103
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -754,6 +754,7 @@ case class AlterTableSetLocationCommand(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16518
**[Test build #71090 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71090/testReport)**
for PR 16518 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16518
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16518
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71090/
Test PASSed.
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95272071
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -974,23 +967,102 @@ private[spark] object SparkSubmitUtils {
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15119
**[Test build #71100 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71100/testReport)**
for PR 15119 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16522
**[Test build #71099 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71099/testReport)**
for PR 16522 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16523
**[Test build #71101 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71101/testReport)**
for PR 16523 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16522
Looks ok since this is what other tests here do; but I wonder why this case
isn't handled in `StreamingQueryManager.scala`; it seems to either throw an
error or create a new temp directory, but not
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
I found the root cause.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16521
**[Test build #71104 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71104/testReport)**
for PR 16521 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16523
**[Test build #71103 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71103/testReport)**
for PR 16523 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16514
Merged build finished. Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16503
> The RPC layer only guarantees at-most-once
That was the case with akka (I think, not really sure), but the netty RPC
layer doesn't drop messages. The new one is "exactly once".
---
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/16464
@felixcheung I made the modifications and no longer save the two metrics of
DistributedModels.
Thanks!
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16503
> In which case the executor will die (see
CoarseGrainedExecutorBackend::onDisconnected).
Yeah. Didn't recall that. Then I agree that using `ask` is better.
---
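The exchange above (at-most-once delivery, broken connections, `ask` vs fire-and-forget `send`) comes down to making the receiver idempotent, so that a request retried after a timeout is harmless. Below is a minimal, self-contained sketch of that pattern; the names are illustrative and this is not Spark's actual `RpcEndpoint` API:

```scala
// Hypothetical sketch: an idempotent receiver deduplicates by message id,
// so a request retried after a broken connection is applied exactly once.
object AskVsSend {
  final case class Request(id: Long, payload: String)

  class IdempotentReceiver {
    private val seen = scala.collection.mutable.Set[Long]()
    private var applied = 0

    // Returns an ack (what an "ask" would get back); re-delivering the
    // same request id is a no-op for the receiver's state.
    def receive(req: Request): Boolean = {
      if (seen.add(req.id)) applied += 1
      true
    }
    def appliedCount: Int = applied
  }

  def main(args: Array[String]): Unit = {
    val r = new IdempotentReceiver
    val req = Request(1L, "launch-task")
    // Simulate a retry after a timeout: the same request arrives twice.
    r.receive(req)
    r.receive(req)
    println(r.appliedCount) // prints 1: the effect happened exactly once
  }
}
```

With a one-way `send` the caller never learns whether the message arrived; with `ask` plus an idempotent receiver, the caller can safely retry until it gets the ack, which is the reasoning behind preferring `ask` here.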
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95271119
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -283,8 +284,17 @@ object SparkSubmit extends CommandLineUtils {
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95272422
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -974,23 +967,102 @@ private[spark] object SparkSubmitUtils {
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16524
**[Test build #71102 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71102/testReport)**
for PR 16524 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16521
Hmm, my final cleanup broke some tests, let me fix those...
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Hi, @vanzin and @zsxwing .
It was a bug of `withSQLConf`.
I think this is the correct fix, but we need to see the whole test result
because this is a test utility issue.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16522
**[Test build #71106 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71106/testReport)**
for PR 16522 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95282465
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16522
**[Test build #71107 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71107/testReport)**
for PR 16522 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16464
**[Test build #71093 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71093/testReport)**
for PR 16464 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16514
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71091/
Test FAILed.
---
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16361
Merged to master.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95271376
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -283,8 +284,17 @@ object SparkSubmit extends CommandLineUtils {
}
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16522
@dongjoon-hyun The expected behavior is that this test should use a temp
folder instead. Looks like it gets `` from somewhere.
---
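The fix the reviewers converge on is the standard pattern of writing test output under a freshly created temp directory that is deleted afterwards, so nothing is left in the source tree. A minimal sketch of that pattern follows; `withTempDir` here is illustrative, not the real `SQLTestUtils` helper:

```scala
// Hypothetical sketch of the temp-directory pattern the reviewers describe:
// tests write under a freshly created temp dir, which is removed afterwards,
// so no garbage files end up in the source tree.
import java.nio.file.{Files, Path}

object TempDirExample {
  def withTempDir[T](f: Path => T): T = {
    val dir = Files.createTempDirectory("spark-test-")
    try f(dir) finally {
      // best-effort cleanup of the directory and its contents
      dir.toFile.listFiles().foreach(_.delete())
      dir.toFile.delete()
    }
  }

  def main(args: Array[String]): Unit = {
    val existedInside = withTempDir { dir =>
      Files.write(dir.resolve("checkpoint"), "state".getBytes)
      Files.exists(dir.resolve("checkpoint"))
    }
    println(existedInside) // prints true: the file existed inside the block
  }
}
```

The bug under discussion is the opposite failure mode: if the configured path resolves to an empty string, output lands relative to the working directory, i.e. inside the source tree.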
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/16524
[SPARK-19110][MLLIB][FollowUP]: Add a unit test
## What changes were proposed in this pull request?
#16491 added the fix to mllib and a unit test to ml. This follow-up PR adds
unit tests
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16497
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16497
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71094/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16497
**[Test build #71094 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71094/testReport)**
for PR 16497 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16520
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71096/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16520
**[Test build #71096 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71096/testReport)**
for PR 16520 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/16522#discussion_r95281803
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -94,7 +94,13 @@ private[sql] trait SQLTestUtils
*/
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r95281883
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -99,7 +99,7 @@ class
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95282681
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16517
Maybe a better title is "Port Hive writing to use FileFormat interface"?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r95228308
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -128,34 +128,32 @@ object FileFormatWriter extends
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16431
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16361
**[Test build #71088 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71088/testReport)**
for PR 16361 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16518
**[Test build #71090 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71090/testReport)**
for PR 16518 at commit
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95237150
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -592,47 +579,6 @@ class TaskSetManagerSuite extends
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95237792
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -52,7 +55,36 @@ private[spark] class TaskDescription(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16514#discussion_r95247659
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -119,7 +119,30 @@ private[hive] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16497
**[Test build #71094 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71094/testReport)**
for PR 16497 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16519
**[Test build #71097 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71097/testReport)**
for PR 16519 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16514
**[Test build #71091 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71091/testReport)**
for PR 16514 at commit
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95267649
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -974,23 +967,102 @@ private[spark] object SparkSubmitUtils {
}
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/16523
[SPARK-19142][SparkR]: spark.kmeans should take seed, initSteps, and tol as
parameters
## What changes were proposed in this pull request?
spark.kmeans doesn't have an interface to set
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16377#discussion_r95275971
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/tree/impl/RandomForestSuite.scala ---
@@ -161,6 +161,33 @@ class RandomForestSuite extends SparkFunSuite
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16377#discussion_r95275814
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/tree/impl/RandomForestSuite.scala ---
@@ -176,6 +203,18 @@ class RandomForestSuite extends SparkFunSuite
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16521
Merged build finished. Test FAILed.
---
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16441#discussion_r95276132
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/GBTClassifier.scala ---
@@ -248,12 +269,38 @@ class GBTClassificationModel private[ml](
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16521
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71095/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16521
**[Test build #71095 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71095/testReport)**
for PR 16521 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16249
**[Test build #71098 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71098/testReport)**
for PR 16249 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71105 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71105/testReport)**
for PR 15211 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95281248
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95281159
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95282270
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala
---
@@ -340,3 +344,102 @@ object
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Thank you. I updated it.
---
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/16476#discussion_r95283107
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -1528,6 +1528,18 @@ object functions {
def factorial(e: Column):
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/16518
[BACKPORT][SPARK-18952] Regex strings not properly escaped in codegen for
aggregations
## What changes were proposed in this pull request?
Backport for #16361 to 2.1 branch.
##
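For context on the bug class behind [SPARK-18952]: a string literal (for example a regex containing backslashes) embedded into generated source code must be escaped first, or the generated code is invalid or wrong. A hypothetical illustration, not Spark's actual codegen:

```scala
// Hypothetical illustration of the [SPARK-18952] bug class: embedding a
// user-supplied string (e.g. a regex) into generated source without
// escaping corrupts the generated literal.
object EscapeLiteral {
  // Escape a string so it survives as a source-code string literal.
  def escape(s: String): String =
    s.flatMap {
      case '\\' => "\\\\"
      case '"'  => "\\\""
      case '\n' => "\\n"
      case c    => c.toString
    }

  def main(args: Array[String]): Unit = {
    val regex = "\\d+" // the two-character regex \d plus +
    val bad  = s"""matcher("$regex")"""          // backslash embedded raw
    val good = s"""matcher("${escape(regex)}")""" // backslash doubled
    println(bad)  // prints matcher("\d+")
    println(good) // prints matcher("\\d+")
  }
}
```

In the unescaped version, the generated literal `"\d+"` no longer denotes the original pattern once the generated file is itself compiled, which is why codegen must escape before splicing.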
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/16376#discussion_r95238913
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -54,7 +54,7 @@ import org.apache.spark.util.{AccumulatorV2,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16492
**[Test build #71086 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71086/testReport)**
for PR 16492 at commit
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/16441
ping @sethah @jkbradley could you please take another look, since I've
updated the code based on your review comments? Thank you!
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/16519
Yeah, it looks like this is basically the same problem. I'll add some
review comments to the other issue.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16361
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16361
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71088/
Test PASSed.
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/15119#discussion_r95229575
--- Diff: docs/configuration.md ---
@@ -450,8 +452,20 @@ Apart from these, the following properties are also
available, and may be useful
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16518
**[Test build #71089 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71089/testReport)**
for PR 16518 at commit
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/16520
[SPARK-19140][SS]Allow update mode for non-aggregation streaming queries
## What changes were proposed in this pull request?
This PR allow update mode for non-aggregation streaming
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16514#discussion_r95258292
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -119,7 +119,30 @@ private[hive] class
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/16454#discussion_r95259073
--- Diff: python/pyspark/sql/session.py ---
@@ -214,8 +214,12 @@ def __init__(self, sparkContext, jsparkSession=None):
self._wrapped =