[ https://issues.apache.org/jira/browse/SPARK-39743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17572308#comment-17572308 ]
Yeachan Park commented on SPARK-39743:
--------------------------------------
OK, thanks for the clarification. Should we look at improving the documentation
for this?
For example, for Avro files there is a config option to set the compression
level in https://spark.apache.org/docs/latest/configuration.html; is there a
reason we don't do the same for Parquet? I don't see it in the config docs or
in the Parquet data source docs:
https://spark.apache.org/docs/latest/sql-data-sources-parquet.html. Or maybe we
should add a link to
https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/README.md?
The docs also say that `spark.io.compression.zstd.level` controls the
compression level for the zstd codec. Based on that description, I didn't
realise the setting only applies to Spark's internal data. Should we make that
clearer in the config docs?
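For anyone following along, here is a minimal sketch of what I think the
Parquet-side knob would look like, assuming the
`parquet.compression.codec.zstd.level` property from the parquet-hadoop README
linked above is the right one. The property name, the level value 19, and the
paths below are my assumptions, not something Spark's docs currently describe:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("parquet-zstd-level-sketch")
  .master("local[*]")
  // The Parquet writer reads codec settings from the Hadoop configuration,
  // so the property is passed through the spark.hadoop.* prefix.
  // Property name taken from the parquet-hadoop README; assumed, not
  // documented by Spark itself.
  .config("spark.hadoop.parquet.compression.codec.zstd.level", "19")
  .getOrCreate()

spark.range(0L, 1000000L).toDF("id")
  .write
  .option("compression", "zstd")      // select the zstd codec for this write
  .parquet("/tmp/parquet_zstd_level_19")
```

If something like this works, documenting the `spark.hadoop.*` passthrough for
Parquet codec options in the Parquet data source page might already be enough.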
> Unable to set zstd compression level while writing parquet files
> ----------------------------------------------------------------
>
> Key: SPARK-39743
> URL: https://issues.apache.org/jira/browse/SPARK-39743
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 3.2.0
> Reporter: Yeachan Park
> Priority: Minor
>
> While writing zstd-compressed parquet files, the setting
> `spark.io.compression.zstd.level` does not have any effect on the zstd
> compression level.
> All files seem to be written with the default zstd compression level, and the
> config option seems to be ignored.
> Using the zstd CLI tool, we confirmed that setting a higher compression level
> for the same file tested in Spark resulted in a smaller file.
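Below is a minimal sketch of the setup described in the report, assuming a
local write path; the row count and path are illustrative only, and the output
size would be compared against a re-compressed copy using the zstd CLI as the
reporter did:

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the reported behaviour: spark.io.compression.zstd.level governs
// Spark's internal data, so the Parquet files below still come out at the
// codec's default compression level.
val spark = SparkSession.builder()
  .appName("zstd-level-repro-sketch")
  .master("local[*]")
  .config("spark.io.compression.zstd.level", "19")  // no effect on Parquet output
  .getOrCreate()

spark.range(0L, 1000000L).toDF("id")
  .write
  .option("compression", "zstd")
  .parquet("/tmp/parquet_zstd_default_level")
```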