Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/19560
> My main concern is, we'd better not to put burden on Spark to deal with
> metastore failures
I think this makes sense; I was also thinking about it when proposing this
PR, and I agree with you to some extent. But in production environments, the
reasons for failing to update the stats vary widely, and we have found it hard
to build a robust redo mechanism.
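
To make the concern concrete, here is a minimal sketch of the kind of bounded-retry
"redo" wrapper one might attempt. This is purely illustrative: `updateStats` is a
hypothetical stand-in for whatever call actually writes stats to the metastore, not
a real Spark API, and the retry policy shown is deliberately naive:

```scala
import scala.util.control.NonFatal

object StatsRetrySketch {
  // Hypothetical sketch: retry a metastore stats update up to maxAttempts times.
  // Returns true if the update eventually succeeded, false if we gave up.
  def updateStatsWithRetry(maxAttempts: Int)(updateStats: () => Unit): Boolean = {
    var attempt = 0
    while (attempt < maxAttempts) {
      try {
        updateStats()
        return true // stats were written successfully
      } catch {
        case NonFatal(e) =>
          attempt += 1
          // A real redo mechanism would have to classify the failure here
          // (transient network error vs. permanent schema/permission problem),
          // which is exactly what makes it hard to build robustly.
      }
    }
    false // exhausted all attempts; the caller decides how to surface this
  }
}
```

The difficulty is not the loop itself but deciding, for each of the many possible
failure modes, whether retrying is safe and useful, which is the burden the quoted
comment argues Spark should not take on.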