mridulm edited a comment on pull request #30876:
URL: https://github.com/apache/spark/pull/30876#issuecomment-751979314


   Before answering the specific queries below, I want to set the context.
   a) Enabling proactive replication could reduce recomputation cost when executors fail.
   b) Enabling it will result in increased transfers when executor(s) are lost.
   (Ignoring other minor impacts.)
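   
   For readers following along, a minimal sketch (mine, not code from this PR) of what "enabling proactive replication" looks like in practice; only the `spark.storage.replication.proactive` config and the replicated storage level are the relevant pieces, everything else is illustrative:
   
   ```scala
   // Illustrative sketch only - not code from this PR. Proactive replication matters
   // only for blocks persisted with replication > 1 (e.g. MEMORY_ONLY_2): when an
   // executor holding one replica is lost, Spark re-replicates from a surviving copy.
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.storage.StorageLevel
   
   val spark = SparkSession.builder()
     .appName("proactive-replication-sketch")
     .config("spark.storage.replication.proactive", "true")  // the flag this PR proposes to enable by default
     .getOrCreate()
   
   val rdd = spark.sparkContext.parallelize(1 to 1000000)
   rdd.persist(StorageLevel.MEMORY_ONLY_2)  // two in-memory replicas
   rdd.count()
   ```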
   
   I was trying to understand what the impact would be, and what the tradeoffs are, if we enable this by default:
   
   1) Are the replication costs (b) lower now ? How do we estimate that cost ?
   (There was a non-trivial impact when I last ran experiments a while back; see the back-of-envelope sketch after these questions.)
   
   2) Are we (the community) running into cases where we benefit from (a) but are not (very) negatively impacted by (b) ?
   Is there any commonality when this happens ?
   (Application types/characteristics ? Resource manager ? Almost all usage ?)
   
   3) What is the impact on the application (and cluster) when we have non-trivial executor loss ? Executor release under DRA is one example of this; preemption is another.
   
   4) Anything else to watch out for ?
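   
   To make the cost question in (1) concrete, here is a rough back-of-envelope sketch; every number in it is an assumption of mine, not a measurement:
   
   ```scala
   // All numbers are assumptions, not measurements from any cluster.
   // On each executor loss, proactive replication copies every block with
   // replication > 1 that lived on that executor from a surviving replica.
   val replicatedCacheBytesPerExecutor = 8L << 30  // assume 8 GiB of 2x-replicated cache per executor
   val effectiveNetworkBytesPerSec = 1.25e9        // assume ~10 Gbit/s of usable bandwidth
   
   val extraTrafficGb = replicatedCacheBytesPerExecutor / 1e9
   val transferSeconds = replicatedCacheBytesPerExecutor / effectiveNetworkBytesPerSec
   println(f"~$extraTrafficGb%.1f GB of extra transfer, ~$transferSeconds%.0f s on one NIC, per lost executor")
   ```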
   
   As I mentioned earlier, I am fine with collecting data by enabling this flag by default.
   I am hoping this and the other discussions will help us work out which questions to evaluate before we release 3.2.
   
   
   > 1. For this question, I answered at the beginning that this is a kind of 
self-healing feature 
[here](https://github.com/apache/spark/pull/30876#discussion_r547031257)
   > 
   > > Making it default will impact all applications which have replication > 
1: given this PR is proposing to make it the default, I would like to know if 
there was any motivating reason to make this change ?
   
   Spark is self-healing via lineage :-)
   Having said that, as mentioned above, I want to understand what the tradeoffs of enabling this flag are.
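   
   As a minimal illustration of what "self-healing via lineage" means (my sketch, assuming a SparkSession `spark` is in scope; the input path is hypothetical, not from this PR):
   
   ```scala
   // Hypothetical sketch: if a cached block is lost, the next action rebuilds it
   // from the RDD's lineage (textFile + map) instead of failing - no data is lost,
   // but the missing partitions are recomputed at some cost.
   import org.apache.spark.storage.StorageLevel
   
   val parsed = spark.sparkContext
     .textFile("hdfs:///tmp/events")      // hypothetical input path
     .map(_.split(","))
     .persist(StorageLevel.MEMORY_ONLY)   // replication == 1, no proactive replication involved
   
   parsed.count()  // populates the cached blocks on executors
   // If an executor holding some of those blocks dies, the next action simply
   // recomputes the missing partitions from lineage.
   parsed.count()
   ```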
   
   > 
   > 1. For the following question, I asked for your evidence first because I'm not aware of any. :)
   > 
   > > If the cost of proactive replication is close to zero now (my experiments were from a while back), of course the discussion is moot - did we have any results for this ?
   
   I am not proposing to change the default behavior, you are ... hence my query :-)
   As I mentioned above, when I looked at this in the past it was very helpful for some applications but not others: it depended on the application and its requirements - `replication > 1` itself was not very commonly used then.
   
   > 
   > 1. For the following question, it seems that you assume that the current Spark behavior is the best. I don't think this question justifies that losing data on the Spark side is good.
   > 
   > > What is the ongoing cost when the application holds RDD references, but they are not in active use for the rest of the application (not all references can be cleared by gc) - resulting in replication of blocks for an RDD which is legitimately not going to be used again ?
   
   A couple of points here:
   a) There is no data loss - Spark recomputes when a lost block is required (but at some recomputation cost).
   b) My query was specifically about the cost of replication, given that what I described is a common pattern in user applications: I was not saying this is a desirable code pattern, but it is commonly observed behavior (sketched below).
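   
   To illustrate the pattern in (b), a hedged sketch (the input path and helper names are hypothetical, not from any real application; a SparkSession `spark` is assumed to be in scope):
   
   ```scala
   // Hypothetical sketch of the pattern in (b): a replicated RDD is only read in
   // the early stages, but the reference keeps it cached for the whole application,
   // so proactive replication keeps re-replicating its blocks on every executor
   // loss even though they will never be read again.
   import org.apache.spark.storage.StorageLevel
   
   val lookup = spark.sparkContext
     .textFile("hdfs:///tmp/lookup")        // hypothetical input path
     .persist(StorageLevel.MEMORY_ONLY_2)   // replication == 2
   
   val enriched = enrichWithLookup(lookup)  // hypothetical helper; only these early stages read `lookup`
   
   // `lookup` is never unpersisted and the val keeps the reference alive, so its
   // blocks remain candidates for re-replication for the rest of the job.
   runRemainingStages(enriched)             // hypothetical helper
   ```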
   
   
   > 
   > 1. For the following, yes, but `exacerbates` doesn't look like the proper term here, because we had better make Spark smarter to handle those cases, as I already replied [here](https://github.com/apache/spark/pull/30876#discussion_r547421217).
   > 
   > > Note that the above is orthogonal to DRA evicting an executor via the storage timeout configuration. That just exacerbates the problem, since a larger number of executors could be lost.
   
   If we can do better on this, I am definitely very keen on it !
   Until that happens, we need to continue supporting existing scenarios where DRA impacts the use of this flag.
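   
   For concreteness, the DRA scenario referred to here can be sketched with the following settings (illustrative values, not recommendations):
   
   ```scala
   // Illustrative values only. With DRA enabled, executors that hold cached blocks
   // are released once cachedExecutorIdleTimeout expires; their replicated blocks
   // are then lost in bulk, and proactive replication re-replicates them elsewhere.
   import org.apache.spark.sql.SparkSession
   
   val spark = SparkSession.builder()
     .config("spark.dynamicAllocation.enabled", "true")
     .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")             // or an external shuffle service
     .config("spark.dynamicAllocation.cachedExecutorIdleTimeout", "30min")          // default is infinity
     .config("spark.storage.replication.proactive", "true")
     .getOrCreate()
   ```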
   
   
   > 
   > 1. For the following, I didn't make this PR for that specific use case. I made this PR to improve this feature in various environments in the Apache Spark 3.2.0 timeframe ([here](https://github.com/apache/spark/pull/30876#issuecomment-749953223)).
   > 
   > > Specifically for this use case, we don't need to make it a Spark default, right ? ...
   
   This was in response to the 
[scenario](https://github.com/apache/spark/pull/30876#issuecomment-750471287) 
described.
   Let us decouple discussion of that scenario from our discussion here - and 
focus on what we need to evaluate for enabling this by default.
   
   
   > 
   > 1. For the following, I replied that the YARN environment can also suffer from disk loss or executor loss [here](https://github.com/apache/spark/pull/30876#issuecomment-751060200), because you insisted that YARN doesn't need this feature from the beginning. I'm still not sure that the YARN environment is that invincible.
   > 
   > > But this feels narrow enough not to require a global default, right ? It feels more like a deployment/application default rather than a platform-level default ?
   
   I am not sure where this came from in my comments ("_because you insisted that YARN doesn't need this feature from the beginning. I'm still not sure that the YARN environment is that invincible_") ? I clearly miscommunicated something here !
   
   My comment on YARN was in agreement with @HyukjinKwon's [suggestion](https://github.com/apache/spark/pull/30876#discussion_r547016082). The other comment was in response to the specific k8s scenario you presented - "currently K8s environment is more aggressive than the other existing resource managers".
   
   > 
   > 1. For `replication == 1`, `spark.storage.replication.proactive` only tries to replicate when at least one live replica of the data still exists. So, no replication occurs in that case.
   > 
   > > Shuffle ? Replicated RDD where replication == 1 ?
   > 
   > 1. I'm trying to utilize all the features of Apache Spark and I'm open to that too. We are developing this, and Spark is not a bible written in stone.
   > 
   > > Perhaps better tuning for (c) might help more holistically ?
   > 
   > I know that this is the holiday season and I'm really grateful for your opinions. If you don't mind, can we have a Zoom meeting when you are available, @mridulm ? I think we have different ideas about open source development and about the scope of this work. I want to make progress in this area in Apache Spark 3.2.0 by completing a document, a better implementation, or anything more. Please let me know if you can have a Zoom meeting. Thanks!
   
   Sure !
   

