GitHub user fjh100456 commented on the issue:
https://github.com/apache/spark/pull/19218
@gatorsmile
1. Non-partitioned tables do not have this problem: `spark.sql.parquet.compression.codec` takes effect normally, because the write path for non-partitioned tables differs from that of partitioned tables.
2. ORC does not have a `spark.sql.*` configuration analogous to the Parquet one; it can only use `orc.compress`, which may not be a Spark configuration.
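
For reference, a minimal sketch of the two configuration styles (Spark SQL syntax; table names and codec choices are illustrative):

```sql
-- Parquet: the codec is controlled by a session-level Spark conf
SET spark.sql.parquet.compression.codec=snappy;
CREATE TABLE parquet_t (id INT) USING parquet;

-- ORC: no spark.sql.* equivalent; the codec comes from the
-- 'orc.compress' option, an ORC property rather than a Spark conf
CREATE TABLE orc_t (id INT) USING orc OPTIONS ('orc.compress' = 'SNAPPY');
```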