Github user wzhfy commented on the issue:

    https://github.com/apache/spark/pull/19560
  
    My main concern is that we'd better not put the burden of handling 
metastore failures on Spark, because Spark has no control over metastores. The 
system using both Spark and the metastore should be responsible for consistency.

