Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/5250#issuecomment-87788812
  
    I'm fairly torn about this. I can see why introducing a new config would 
make things much easier for users, since this could be a common problem. But at 
the same time, as I said earlier, this casts a much wider net that would catch 
any exception -- e.g. maybe a data node was just down for a second, and then 
Spark would suddenly return incorrect results.
    
    Let's leave the discussion open for others to chime in.


