Github user WeichenXu123 commented on the issue:

    https://github.com/apache/spark/pull/17014
  
    I've been thinking about this double-cache issue for a few days. One big 
problem is that it is hard to get precise storage-level information. For 
example, a user may apply a `map` transformation to a cached dataset and then 
pass the result to an ML algorithm, which hides the storage level of the 
underlying cached data. This problem makes an automatic decision for 
`handlePersistence` unreliable.
    
    So my proposal is to add a `handlePersistence` parameter to these 
algorithms, letting the user control whether to persist the input data (so if 
the user has already persisted the input data upstream, they can set the 
parameter to `false`). We can make the default value of `handlePersistence` 
`true` for these algorithms, which preserves their current behavior.
    
    cc @jkbradley @yanboliang @smurching What do you think about this? Once 
we reach consensus, this PR can move forward.
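
    To illustrate the proposal, here is a minimal, hypothetical sketch (plain 
Python with mock classes, not actual Spark API; `handle_persistence`, 
`MockEstimator`, and `MockDataset` are all illustrative names) of an algorithm 
whose caching is controlled by a user-facing parameter defaulting to `true`:

```python
# Hypothetical sketch: an estimator persists its input only when the
# user-facing handle_persistence flag is set (the proposed default, True),
# instead of trying to infer the input's storage level automatically.

class MockDataset:
    """Stand-in for a dataset; records persist/unpersist calls."""
    def __init__(self):
        self.events = []

    def persist(self):
        self.events.append("persist")

    def unpersist(self):
        self.events.append("unpersist")


class MockEstimator:
    def __init__(self, handle_persistence: bool = True):
        # Default True preserves the current behavior of the algorithms.
        self.handle_persistence = handle_persistence

    def fit(self, dataset: MockDataset) -> str:
        persisted_here = False
        if self.handle_persistence:
            dataset.persist()           # cache input for iterative training
            persisted_here = True
        try:
            model = self._run(dataset)  # placeholder for the training loop
        finally:
            if persisted_here:
                dataset.unpersist()     # release only what we persisted
        return model

    def _run(self, dataset: MockDataset) -> str:
        return "model"
```

    A user who already persisted the input upstream would construct the 
estimator with `handle_persistence=False`, avoiding the double cache.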

