zhangdove commented on issue #2317: URL: https://github.com/apache/iceberg/issues/2317#issuecomment-796395354
We are using the Spark compute engine to read MySQL data and overwrite an Iceberg table (using a HadoopCatalog). When the read from MySQL takes too long and times out, the data files get cleaned up, but the newly written metadata.json file is still left behind. I'm not sure whether it's the same as this issue, but it looks a bit like @RussellSpitzer's analysis.
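
For context, a minimal sketch of the kind of job we run, assuming a Spark 3 / Iceberg setup with a HadoopCatalog. The catalog name, warehouse path, JDBC URL, and table names below are placeholders, not our actual configuration:

```scala
import org.apache.spark.sql.SparkSession

object MySqlToIcebergOverwrite {
  def main(args: Array[String]): Unit = {
    // Spark session configured with an Iceberg HadoopCatalog.
    // Catalog name and warehouse path are placeholders.
    val spark = SparkSession.builder()
      .appName("mysql-to-iceberg-overwrite")
      .config("spark.sql.catalog.hadoop_cat", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.hadoop_cat.type", "hadoop")
      .config("spark.sql.catalog.hadoop_cat.warehouse", "hdfs://namenode:8020/warehouse")
      .getOrCreate()

    // Read the source table from MySQL over JDBC.
    // A slow read that hits a MySQL timeout here is where we see the failure.
    val source = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://mysql-host:3306/source_db")
      .option("dbtable", "source_table")
      .option("user", "user")
      .option("password", "password")
      .load()

    // Overwrite the target Iceberg table with the freshly read data.
    // When the job fails partway through, the data files from the failed
    // write appear to be cleaned up, but the new metadata.json remains.
    source.writeTo("hadoop_cat.db.target_table").overwritePartitions()

    spark.stop()
  }
}
```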