guixiaowen opened a new pull request #30681: URL: https://github.com/apache/spark/pull/30681
### What changes were proposed in this pull request?
This pull request modifies the `write` method of `FileFormatWriter` so that it prints statistics about the HDFS files it generates. A configuration option lets users flexibly decide whether these statistics should be printed.

### Why are the changes needed?
This change helps users obtain detailed information about the HDFS files produced by a write, which is very user friendly.

### Does this PR introduce _any_ user-facing change?
No. Users do not need to make any changes.

### How was this patch tested?
**Result before the modification**

[[email protected] ~/sparkdir/test_client]$ spark-sql -e "insert overwrite table bigdata_qa.external_table_hive partition (par_dt = '20201010') select consume_stability, consume_stability from bigdata_qa.travel_feature_hive limit 20;"
Application Id: application_1606795588298_74026, Tracking URL: http://xxxx.xxxx.xxxx:8088/proxy/application_1606795588298_74026/
DriverContainer URL: http://xxxx.xxxx.xxxx:8042/node/containerlogs/container_e15_1606795588298_74026_01_000001/prod_bigdata_qa

**Result after the modification**

[[email protected] ~/sparkdir/test_client]$ spark-sql -e "insert overwrite table bigdata_qa.external_table_hive partition (par_dt = '20201010') select consume_stability, consume_stability from bigdata_qa.travel_feature_hive limit 20;"
Application Id: application_1606795588298_74026, Tracking URL: http://xxxx.xxxx.xxxx:8088/proxy/application_1606795588298_74026/
DriverContainer URL: http://xxxx.xxxx.xxxx:8042/node/containerlogs/container_e15_1606795588298_74026_01_000001/prod_bigdata_qa
File stats [ numFiles=1; numOutputBytes=738; numOutputRows=20; numParts=0

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
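The `File stats` line shown in the test output could be produced by logic along these lines. This is a hypothetical, self-contained sketch of the idea described in the PR (gathering per-write counters and emitting one summary line, guarded by a configuration flag), not the actual patch; the names `WriteStats`, `FileStatsLogger`, and the `enabled` flag are assumptions, and in Spark the flag would come from `SQLConf`/`SparkConf` and the counters from the committed write tasks.

```scala
// Hypothetical sketch: summarize per-write output statistics in a single
// log line, printed only when a configuration flag enables it.

// Counters a write might accumulate (names mirror the sample output).
case class WriteStats(
    numFiles: Long,
    numOutputBytes: Long,
    numOutputRows: Long,
    numParts: Long)

object FileStatsLogger {
  // Render the stats in the same shape as the sample log line.
  def format(stats: WriteStats): String =
    s"File stats [ numFiles=${stats.numFiles}; " +
      s"numOutputBytes=${stats.numOutputBytes}; " +
      s"numOutputRows=${stats.numOutputRows}; " +
      s"numParts=${stats.numParts} ]"

  // Only produce the line when the (hypothetical) config flag is on,
  // so the feature adds no output for users who leave it disabled.
  def maybeLog(enabled: Boolean, stats: WriteStats): Option[String] =
    if (enabled) Some(format(stats)) else None
}
```

With the flag disabled, `maybeLog` returns `None` and nothing is printed, which matches the PR's claim that users who change nothing see no behavioral difference.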
