Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17544#discussion_r110031650
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala
---
@@ -20,339 +20,13 @@ package org.apache.spark.sql.catalyst
Github user jsoltren commented on the issue:
https://github.com/apache/spark/pull/14617
This looks good to me. Thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Merged build finished. Test FAILed.
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17532
Btw, I'd really like to get this into 2.2, which will be cut soon. Let me
know if you'd like me to take it over. Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75553/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75553 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75553/testReport)**
for PR 17531 at commit
[`42f49f2`](https://github.com/apache/spark/commit/4
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17537
**[Test build #75558 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75558/testReport)**
for PR 17537 at commit
[`1fb23cf`](https://github.com/apache/spark/commit/1f
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17544#discussion_r110029591
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/StarSchemaDetection.scala
---
@@ -0,0 +1,351 @@
+/*
+ * Licensed t
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75552/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75552 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75552/testReport)**
for PR 17531 at commit
[`28ebc94`](https://github.com/apache/spark/commit/2
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17531
I will leave it around a bit in case @JoshRosen has any further comments.
Feel free to merge btw if you don't!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17544
**[Test build #75557 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75557/testReport)**
for PR 17544 at commit
[`99732ff`](https://github.com/apache/spark/commit/99
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17531#discussion_r110028203
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,7 +432,7 @@ private[spark] class Executor(
setTaskFinishedA
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17544
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17544
add to whitelist
---
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17543
That JIRA is great. I'll close this PR for now and link my JIRA in there.
---
Github user brkyvz closed the pull request at:
https://github.com/apache/spark/pull/17543
---
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/17543
In theory (as you may know), the way this is supposed to work is that,
since each reduce task reads the map outputs in random order, we delay
re-scheduling the earlier stage to try to collect
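The delayed re-scheduling described in this comment can be sketched as a toy model (Python; the class, method, and map-output names below are invented for illustration and are not Spark's actual DAGScheduler API):

```python
import time

class FetchFailureBatcher:
    """Toy model: the first fetch failure opens a short window; failures
    that arrive inside the window are batched, so the map stage is
    resubmitted once with every lost output accounted for."""

    def __init__(self, window_s=0.2):
        self.window_s = window_s
        self.lost = set()
        self.deadline = None

    def on_fetch_failed(self, map_output, now=None):
        now = time.monotonic() if now is None else now
        self.lost.add(map_output)
        if self.deadline is None:       # first failure opens the window
            self.deadline = now + self.window_s

    def maybe_resubmit(self, now=None):
        now = time.monotonic() if now is None else now
        if self.deadline is not None and now >= self.deadline:
            batch, self.lost, self.deadline = self.lost, set(), None
            return batch                # recompute all of these at once
        return None

b = FetchFailureBatcher(window_s=1.0)
b.on_fetch_failed("map_0", now=0.0)
b.on_fetch_failed("map_7", now=0.4)        # lands in the same window
assert b.maybe_resubmit(now=0.5) is None   # window still open
assert b.maybe_resubmit(now=1.0) == {"map_0", "map_7"}
```

Because reduce tasks fetch map outputs in random order, concurrent fetch failures from the same stage attempt tend to arrive close together, which is what makes batching them before one resubmission worthwhile.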
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17543
Let me try to draw a graph to better explain this.
---
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17543
Yes, your explanation is on point. If I have 4+ executors that died, then
all retries of Stage B will also eventually fail. If we didn't ignore these
failures, we could have re-computed the outputs o
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/17543#discussion_r110014380
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1281,10 +1281,24 @@ class DAGScheduler(
val failedSta
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r110013198
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/broadcastMode.scala
---
@@ -26,10 +26,7 @@ import org.apache.spark.sql.cata
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17545
Can one of the admins verify this patch?
---
GitHub user dgingrich opened a pull request:
https://github.com/apache/spark/pull/17545
[SPARK-20232][Python] Improve combineByKey docs
## What changes were proposed in this pull request?
Improve combineByKey documentation:
* Add note on memory allocation
* Chan
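The semantics that PR documents can be modeled outside Spark. Below is a plain-Python sketch of `combineByKey`'s two phases (fold values into per-partition combiners, then merge combiners across partitions); `combine_by_key` and its round-robin partitioning are illustrative, not PySpark's implementation:

```python
def combine_by_key(pairs, create_combiner, merge_value, merge_combiners,
                   num_partitions=2):
    # Phase 1: within each "partition", fold values into one combiner per key.
    partitions = [{} for _ in range(num_partitions)]
    for i, (k, v) in enumerate(pairs):
        part = partitions[i % num_partitions]
        part[k] = create_combiner(v) if k not in part else merge_value(part[k], v)
    # Phase 2: merge the per-partition combiners for each key.
    result = {}
    for part in partitions:
        for k, c in part.items():
            result[k] = c if k not in result else merge_combiners(result[k], c)
    return result

# Per-key (sum, count), the classic averaging pattern:
pairs = [("a", 1), ("b", 2), ("a", 3), ("a", 5)]
sum_count = combine_by_key(
    pairs,
    create_combiner=lambda v: (v, 1),
    merge_value=lambda c, v: (c[0] + v, c[1] + 1),
    merge_combiners=lambda c1, c2: (c1[0] + c2[0], c1[1] + c2[1]),
)
# sum_count == {"a": (9, 3), "b": (2, 1)}
```

The memory note in the PR follows from phase 1: every distinct key in a partition holds a live combiner until the partition is fully processed.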
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17543#discussion_r110011464
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1281,10 +1281,24 @@ class DAGScheduler(
val failedStage = st
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17543#discussion_r110011422
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1281,10 +1281,24 @@ class DAGScheduler(
val failedStage = st
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17530
I'm sorry but you won't convince me that it's a useful feature to have
Spark be a big security hole when inserted into a kerberos environment.
As I said, if you want to change your approach t
Github user ioana-delaney commented on the issue:
https://github.com/apache/spark/pull/17544
@gatorsmile I did a small refactoring for star schema. Would you please
review. Thank you.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17539
**[Test build #75556 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75556/testReport)**
for PR 17539 at commit
[`c78ebe8`](https://github.com/apache/spark/commit/c7
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17544
Can one of the admins verify this patch?
---
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/17543#discussion_r110010527
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1281,10 +1281,24 @@ class DAGScheduler(
val failedSta
GitHub user ioana-delaney opened a pull request:
https://github.com/apache/spark/pull/17544
[SPARK-20231] [SQL] Refactor star schema code for the subsequent star join
detection in CBO
## What changes were proposed in this pull request?
This commit moves star schema code fro
Github user themodernlife commented on the issue:
https://github.com/apache/spark/pull/17530
Said another way, people need another layer to use Spark standalone in
secured environments anyway.
---
Github user themodernlife commented on the issue:
https://github.com/apache/spark/pull/17530
To me it's basically the same as users including S3 credentials when
submitting to spark standalone. Kerberos just requires more machinery. It might
be a little harder to get at the spark conf
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/17543#discussion_r110009919
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1281,10 +1281,24 @@ class DAGScheduler(
val failedSta
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17512
**[Test build #7 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/7/testReport)**
for PR 17512 at commit
[`5ed1950`](https://github.com/apache/spark/commit/5e
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17523
In _theory_ I can command Jenkins (the PR status board says "asked to test"
rather than "admin needed", but idk).
Let's see if "Jenkins retest this please" does anything.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r110009267
--- Diff: R/pkg/R/SQLContext.R ---
@@ -544,12 +544,15 @@ sql <- function(x, ...) {
dispatchFunc("sql(sqlQuery)", x, ...)
}
-#' Crea
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17523
probably need someone who can command Jenkins
(I can't)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17296
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17296
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75551/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17296
**[Test build #75551 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75551/testReport)**
for PR 17296 at commit
[`7c7ce13`](https://github.com/apache/spark/commit/7
Github user kalvinnchau commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110004837
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -422,6 +431,2
Github user kalvinnchau commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110004474
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -422,6 +431,2
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17543
cc @kayousterhout @markhamstra for feedback.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17543
**[Test build #75554 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75554/testReport)**
for PR 17543 at commit
[`bc80aab`](https://github.com/apache/spark/commit/bc
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/17543
[SPARK-20230] FetchFailedExceptions should invalidate file caches in
MapOutputTracker even if newer stages are launched
## What changes were proposed in this pull request?
If you lose insta
Github user kalvinnchau commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110003294
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -67,6 +67,8 @
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110002201
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -422,6 +431,21 @@
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110002396
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -67,6 +67,8 @@ pri
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r110002068
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -422,6 +431,21 @@
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75553 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75553/testReport)**
for PR 17531 at commit
[`42f49f2`](https://github.com/apache/spark/commit/42
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17530
That is not what your change does, though.
If you want to change the master / worker scripts to refresh kerberos
credentials, that would be a lot more acceptable. This change is just not
acce
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17413
LGTM.
@srowen Can we please get a merge? Thanks.
---
Github user kalvinnchau commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r10663
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -408,6 +410,2
Github user themodernlife commented on the issue:
https://github.com/apache/spark/pull/17530
That's right, but you still need a separate out-of-band process refreshing
with the KDC. My thinking is: why not have Spark do that on your behalf?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75552 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75552/testReport)**
for PR 17531 at commit
[`28ebc94`](https://github.com/apache/spark/commit/28
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/17531#discussion_r109998390
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,7 +432,7 @@ private[spark] class Executor(
setTaskFinishedAnd
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16793
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/16793
Merged to master
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/16793
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17533
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75547/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75547 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75547/testReport)**
for PR 17533 at commit
[`b0c3abc`](https://github.com/apache/spark/commit/b
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17533
Merged build finished. Test FAILed.
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17469
Jenkins OK to test.
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17523
not sure why Jenkins hasn't triggered. Let's try "Jenkins test this please"
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17296
**[Test build #75551 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75551/testReport)**
for PR 17296 at commit
[`7c7ce13`](https://github.com/apache/spark/commit/7c
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75549/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75549 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75549/testReport)**
for PR 17540 at commit
[`f9342b5`](https://github.com/apache/spark/commit/f
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r109991082
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -408,6 +410,22 @
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17537
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17537
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75548/
Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17530
Then in your setup you can configure things so that the cluster already has
the user's keytab; Spark doesn't need to distribute it for you.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17537
**[Test build #75548 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75548/testReport)**
for PR 17537 at commit
[`9187cca`](https://github.com/apache/spark/commit/9
Github user w3iBStime commented on the issue:
https://github.com/apache/spark/pull/17542
I don't have a local clone of the code; I just used the GitHub web UI for a
small change. Unfortunately, it doesn't look like GitHub can search for "1.0]".
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17413#discussion_r109989797
--- Diff: docs/running-on-mesos.md ---
@@ -368,6 +368,15 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17469
I think the latest commit addresses the formatting issues from above:
removed spaces inside `(..)`, removed the `\n` newlines, and made the
blockquotes more consistent with the rest of the code.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75550 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75550/testReport)**
for PR 17541 at commit
[`02f4a02`](https://github.com/apache/spark/commit/0
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17541
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75550/
Test FAILed.
---
Github user map222 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109989329
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not iterable")
# s
Github user themodernlife commented on the issue:
https://github.com/apache/spark/pull/17530
In our setup each user gets their own standalone cluster. Users cannot
submit jobs to each other's clusters. By providing a keytab on cluster creation
and having Spark manage renewal on behalf
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17542
That's fine, can you look for other instances?
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@srowen, agreed. Closely related but not the same code paths. The question
is: when should `withNewExecutionId` get called?
I'm running the test suite now and this patch causes test failures
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17542
Can one of the admins verify this patch?
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r109986108
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ExchangeSuite.scala ---
@@ -98,7 +98,7 @@ class ExchangeSuite extends SparkPlanTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r109986073
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ExchangeSuite.scala ---
@@ -70,7 +70,7 @@ class ExchangeSuite extends SparkPlanTest with
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r109985386
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -748,7 +748,7 @@ object SQLConf {
.doubleConf
.
GitHub user w3iBStime opened a pull request:
https://github.com/apache/spark/pull/17542
Corrects interval notation in doc comment
The random number generated by XORShiftRandom.nextDouble() is a value
between zero and one, including zero but not including one, i.e., 0 <= x < 1.
I'v
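The half-open interval [0, 1) this PR documents can be demonstrated with a minimal model. This assumes the common bits-to-double construction (53 random bits divided by 2^53); it is a sketch of the idea, not XORShiftRandom's actual code:

```python
import random

def next_double(bits=None):
    """Map 53 random bits into [0.0, 1.0): 0.0 is attainable (all bits
    zero), but 1.0 is not, since the maximum is (2**53 - 1) / 2**53."""
    if bits is None:
        bits = random.getrandbits(53)
    return bits / float(1 << 53)

assert next_double(0) == 0.0             # lower endpoint is included
assert next_double(2**53 - 1) < 1.0      # upper endpoint is excluded
assert 0.0 <= next_double() < 1.0
```

Both endpoint checks are exact in IEEE 754 doubles: 2**53 - 1 is representable, dividing by a power of two is exact, and 1 - 2**-53 is the largest double below 1.0.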
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17541
**[Test build #75550 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75550/testReport)**
for PR 17541 at commit
[`02f4a02`](https://github.com/apache/spark/commit/02
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17541
cc @rxin @gatorsmile
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/17541
[SPARK-20229][SQL] add semanticHash to QueryPlan
## What changes were proposed in this pull request?
Like `Expression`, `QueryPlan` should also have a `semanticHash` method,
then we can
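The idea behind a `semanticHash`, as this PR description hints, is to hash a canonicalized form so that semantically equal trees produce the same hash. Here is a toy Python sketch over invented tuple-based expression trees; it illustrates the canonicalize-then-hash pattern, not Catalyst's actual QueryPlan rules:

```python
def canonicalize(node):
    # Sort the operands of commutative operators so that, e.g.,
    # ("+", "a", "b") and ("+", "b", "a") normalize to the same tree.
    if isinstance(node, tuple) and node[0] in ("+", "*"):
        op, lhs, rhs = node
        lhs, rhs = canonicalize(lhs), canonicalize(rhs)
        return (op,) + tuple(sorted((lhs, rhs), key=repr))
    return node

def semantic_hash(node):
    # Hash the canonical form: semantically equal trees share a hash,
    # so hash-based grouping can find candidate equal subtrees cheaply.
    return hash(canonicalize(node))

assert semantic_hash(("+", "a", "b")) == semantic_hash(("+", "b", "a"))
```

Equal hashes then serve as a fast pre-filter before a full semantic-equality check, since distinct trees can still collide.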
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17535
Looks closely related to https://github.com/apache/spark/pull/17540 ?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17540
Looks closely related to https://github.com/apache/spark/pull/17535 ?
---
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/17471
I did not see that we already have an open PR for this. Sure, I will add a
test to this PR and also file a separate JIRA.
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17531#discussion_r109980881
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,7 +432,7 @@ private[spark] class Executor(
setTaskFinishe
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75549 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75549/testReport)**
for PR 17540 at commit
[`f9342b5`](https://github.com/apache/spark/commit/f9
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17531#discussion_r109979571
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,7 +432,7 @@ private[spark] class Executor(
setTaskFinishe