I believe those are currently only respected as table properties, not as "spark write" properties, although there is a case to be made that we should accept them there as well. You can alter your table so that it contains those properties, and new files will then be written with the compression you would like.
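For example, a minimal sketch of setting the compression codec as a table property via Spark SQL (the table name `db.sample` is a placeholder; the property keys and the lowercase codec values come from the Iceberg configuration docs linked below):

```sql
-- Set the write codecs on the table itself; subsequent writes pick them up.
ALTER TABLE db.sample SET TBLPROPERTIES (
  'write.parquet.compression-codec' = 'snappy',
  'write.avro.compression-codec' = 'snappy'
);
```

After this, new Parquet data files and Avro metadata files written to the table should use Snappy, without any per-write option being needed.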
> On Mar 5, 2021, at 7:15 AM, Javier Sanchez Beltran <jabelt...@expediagroup.com.INVALID> wrote:
>
> Hello Iceberg team!
>
> I have been researching Apache Iceberg to see how it would work in our environment. We are still trying things out. We would like to have Parquet format with SNAPPY compression.
>
> I already tried changing these two properties to SNAPPY, but it didn't work (https://iceberg.apache.org/configuration/):
>
> write.avro.compression-codec: gzip -> SNAPPY
> write.parquet.compression-codec: gzip -> SNAPPY
>
> In this way:
>
> dataset
>     .writeStream()
>     .format("iceberg")
>     .outputMode("append")
>     .option("write.parquet.compression-codec", "SNAPPY")
>     .option("write.avro.compression-codec", "SNAPPY")
>     …start()
>
> Did I do something wrong? Or do we need to take care of the implementation of this SNAPPY compression ourselves?
>
> Thank you in advance,
> Javier.