Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17582
As @vanzin said, I think this is fine for now to get this fixed quickly, but
filing a follow-up jira makes sense. Actually, this might be good to get into
the 2.1.1 release if they are going to
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17658
+1. @vanzin any further comments?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17582
changes lgtm. Did you file a jira to track changing to not use withSparkUI?
If the user is downloading because the file is huge and takes a long time to
render or causes the history server to have issues
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17445
there is a large discussion about how to handle fetch failures going on in
https://issues.apache.org/jira/browse/SPARK-20178. The fact that you got a
fetch failure does not mean that all blocks
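The point being made — that one fetch failure need not invalidate every map output — can be illustrated with a toy model. This is plain Scala with made-up names, not Spark's actual MapOutputTracker logic:

```scala
// Toy model: map-partition outputs keyed by the executor that holds them.
// On a fetch failure from one executor, only that executor's outputs are
// dropped; blocks served by healthy executors stay registered, so only
// the lost partitions need recomputation.
object FetchFailureSketch {
  def survivingOutputs(
      outputs: Map[Int, String], // partition id -> executor holding its output
      failedExecutor: String): Map[Int, String] =
    outputs.filterNot { case (_, exec) => exec == failedExecutor }
}
```

In this model, a fetch failure against `exec-1` leaves the outputs held by `exec-2` untouched — which is why blanket invalidation on the first fetch failure throws away more work than necessary.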
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17700
Yes, that is what I was thinking from the conversation in the jira. We
should do that now so as not to cause more compatibility issues.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17658
the idea here was mine. I agree that it could be confusing, but it's also
confusing as is, and it's hard to find the version that was run. I was figuring
this way would be consistent with the live
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17582
so we should definitely fix the /api/v1/applications//logs endpoint to go
through the acls. It looks like it should be protected in
ApiRootResource.java. You have the app id, so it needs to do something
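The kind of per-app ACL gate being asked for can be sketched as follows. This is illustrative only — the names are invented, not Spark's real API, and the real check would live in the REST resource layer with the app's actual ACL config:

```scala
// Hypothetical ACL gate for a history-server REST endpoint: a user may view
// an application's logs if they own the app, appear in its view ACLs, or
// appear in the admin ACLs.
object LogsAclSketch {
  def canView(
      user: String,
      appOwner: String,
      viewAcls: Set[String],
      adminAcls: Set[String]): Boolean =
    user == appOwner || viewAcls.contains(user) || adminAcls.contains(user)
}
```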
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17658
Jenkins, test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17582
Sorry again the wording above and all the different configs are a bit
confusing to me as to what the real issues are here.
>Here actually has two list of acls, one is controlled
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17625
I haven't looked through the code at all, but I definitely like the idea of
tracking the netty memory usage. Breaking it into 2 pieces makes sense.
If we end up creating any new UI
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17625
we also just exposed more memory information for storage memory in the
executors page in SPARK-17019.
If we now have a memory tab, it could be confusing to users where to go
to see
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17582
Sorry but I'm confused by the explanation in the description. I didn't
completely follow what problems you are seeing that aren't intended and I don't
understand how you
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17495
Sorry @jerryshao, I know you have a few up, but I'm swamped and probably
won't get to them until next week.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17500
Jenkins, test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17485
see the discussion on the mailing list. We now have 4 different jiras for
handling fetch failures. I think we should get a design for the entire thing
first.
personally I don't wa
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
sorry for the delay on this; we have been having some discussion about
scheduler changes and the fetch failure handling in the scheduler. Since this
is related, holding off on this.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r108801550
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -735,7 +749,12 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17297
Sounds good to me.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17387
Yeah if you plan on adding support for secure hdfs access in standalone
mode, it needs a feature jira, probably go through SPIP and make sure
everything truly works and is documented. I remember
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
Jenkins, test this please
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r107508739
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -735,7 +749,12 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17387
Here is a jira from a long time back:
https://issues.apache.org/jira/browse/SPARK-2541
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17387
I didn't think Spark officially supported kerberos in standalone mode. I'm
pretty sure it doesn't work at all, even if kinit'd, due to a change that went in
a long time ba
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17088
> (a) even the existing behavior will make you do unnecessary work for
transient failures and (b) this just slightly increases the amount of work that
has to be repeated for those transi
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14617
Checkbox sounds good to me.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14617
while I kind of like the hover because it doesn't clutter the page, it does
bring up a couple of concerns:
- user can't sort by them
- user might not know to hover (none of the o
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17238
> Actually, to play devil's advocate, the problem @morenn520 is describing
is a little more involved. You have a driver running, which has its own view of
what the cluster topology is,
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
> Another thing I thought about as I was reviewing this -- spark currently
assumes that a fetchfailure is always the fault of the source, never the
destination. I almost wonder if we should co
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r106287979
--- Diff: docs/configuration.md ---
@@ -1411,6 +1411,15 @@ Apart from these, the following properties are also
available, and may be useful
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r106258149
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -735,7 +749,12 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
+1
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17238
Ok, checked Tez and MR and they don't do this.
Actually, in a couple of the input formats it adds DEFAULT_RACK if
there wasn't any topology information, so you would end u
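The DEFAULT_RACK behavior described above amounts to a fallback lookup. A minimal sketch in plain Scala (not Hadoop's real RackResolver API — the names here are illustrative):

```scala
// Sketch of topology resolution with a default: any host missing from the
// configured host-to-rack mapping falls back to a single default rack, so
// a misconfigured or unconfigured cluster puts every host on DEFAULT_RACK.
object RackSketch {
  val DefaultRack = "/default-rack"

  def resolve(hostToRack: Map[String, String], host: String): String =
    hostToRack.getOrElse(host, DefaultRack)
}
```

Under this scheme, an unconfigured setup is still consistent: every host resolves to the same default rack rather than failing outright.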
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17238
If you aren't adding machines to racks and configuring YARN properly
before adding them to your cluster, that is a process issue you should fix on your
end. I would assume an unracking/rack
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17303
this should not be needed just to write to HDFS. The regular Hadoop
input/output formats have support for it if you are using the right
version (I think hadoop 2.8).
This
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r106049395
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -252,20 +307,55 @@ class YarnClusterSuite
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r106049046
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -226,6 +243,44 @@ class YarnClusterSuite
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17238
Sorry if I'm missing something here, but I don't see why this is a problem.
If you have YARN misconfigured or not configured, everything is going to
default to DEFAULT_RACK. If you wa
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
@kishorvpatil please resolve the conflicts.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
Test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
sorry, haven't had a chance to get to this to do a full review; hopefully
tomorrow.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105261723
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +716,24 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105259983
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +716,24 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105267721
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala
---
@@ -349,4 +350,8 @@ package object config
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105260515
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +716,24 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105261284
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -735,7 +749,12 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105273069
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -193,6 +193,74 @@ class YarnClusterSuite
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105282085
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -193,6 +193,74 @@ class YarnClusterSuite
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r105266591
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkAppHandle.java ---
@@ -100,6 +100,8 @@ public boolean isFinal
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15009
Jenkins, test this please
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r104743802
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -174,6 +174,11 @@ private[spark] class
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r104673881
--- Diff:
examples/src/main/java/org/apache/spark/examples/JavaWordCount.java ---
@@ -36,17 +36,13 @@
public static void main(String[] args
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r104674280
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -78,9 +78,9 @@
public static final String
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r104673708
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,13 +716,17 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17088
Note: alternatively, we could change it to not fail on fetch failure. This
would seem better to me since there is no reason to throw away all the work you
have done, but I'm sure that is a
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17088
In this particular case, are your map tasks fast or slow? If they are really
fast, rerunning everything now makes sense; if each of those took 1 hour+ to
run, failing all when they don't
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
I was not talking about designing this around the killing-task part of
this, other than in reference to being able to count the # of fetch failures
before triggering the blacklisting, but I think
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
> Whether running tasks are interrupted on stage abort or not depends on
the state of a config boolean -- and ideally we'd like to get to the point
where we can confidently set that c
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
So I looked at this a little more. I'm more ok with this since Spark
doesn't actually invalidate the shuffle output. You are basically just trying
to stop new tasks from running on the
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17088
fyi, this is somewhat related to https://github.com/apache/spark/pull/17113
I mention it because I think both depend on how we handle failures and
retries. This and that together could cause
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16291
@sitalkedia are you still working on this?
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
@jerryshao are you actually seeing issues with this on real
customer/production jobs? How often? NM failure for us is very rare. I'm not
familiar with how mesos would fail differently
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/17113
can you clarify the situations you are seeing issues? What happened to the
NM in this case. If you have work preserving restart I would think this would
actually cause you more problems. The NM
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16819
I agree with others, this is not the way to do this. There are different
schedulers in yarn, each with different configs that could affect the actual
resources you get.
If you want to
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16946
On vacation back next Monday and will review.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16923#discussion_r101304225
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -106,21 +106,31 @@ private[hive] class HiveClientImpl
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16923#discussion_r101302275
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -106,21 +106,31 @@ private[hive] class HiveClientImpl
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101085286
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -528,13 +582,41 @@ public SparkAppHandle
startApplication
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101095913
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -71,6 +100,9 @@ private[spark] abstract class LauncherBackend
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101084346
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -528,13 +582,41 @@ public SparkAppHandle
startApplication
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101061331
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -685,9 +686,8 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101087592
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/package-info.java ---
@@ -49,6 +49,39 @@
*
*
*
+ * Currently, while
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101092206
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -71,6 +100,9 @@ private[spark] abstract class LauncherBackend
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101085524
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitRunner.java ---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101062900
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +719,23 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r101083260
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -107,6 +121,30 @@ public static void setConfig(String name, String
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16916
we should not remove symlink resolution.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100320712
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1149,13 +1179,23 @@ private object Client extends
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100313659
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +719,20 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100312242
--- Diff: core/src/main/scala/org/apache/spark/SparkApp.scala ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100318015
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/package-info.java ---
@@ -49,6 +49,38 @@
*
*
*
+ * Currently, for
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100318779
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/package-info.java ---
@@ -49,6 +49,38 @@
*
*
*
+ * Currently, for
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100312144
--- Diff: core/src/main/scala/org/apache/spark/SparkApp.scala ---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100313512
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -719,7 +719,20 @@ object SparkSubmit extends CommandLineUtils
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100318547
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/package-info.java ---
@@ -49,6 +49,38 @@
*
*
*
+ * Currently, for
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100316402
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -94,6 +103,13 @@
static final Map launcherConfig = new
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100316424
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -94,6 +103,13 @@
static final Map launcherConfig = new
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100317631
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitRunner.java ---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r100316057
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java ---
@@ -89,11 +89,32 @@
private static volatile LauncherServer
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16650#discussion_r98782219
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -187,6 +198,19 @@ private[scheduler] class BlacklistTracker
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16650#discussion_r98781265
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -187,6 +198,19 @@ private[scheduler] class BlacklistTracker
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16650#discussion_r98781173
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -173,6 +174,16 @@ private[scheduler] class BlacklistTracker
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16695
So I'm just curious: in the specific case where you saw this issue, what were the
configs? The configs on the NM had the correct path, or the ones on the gateways
were only pointing to the gateway? I
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16695
this seems really specific to the scripts being in the hadoop conf
directory and the user using the default mapping. I assume the hadoop confs on the
nodemanagers have a different config than
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16704
thanks for fixing, forgot we moved those.
+1, go ahead and merge once jenkins passes.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16667
+1
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16650#discussion_r97342524
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -50,10 +50,11 @@ import org.apache.spark.util.{Clock, SystemClock
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16650#discussion_r97341837
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -168,6 +169,21 @@ private[scheduler] class BlacklistTracker