Ok, thanks Godfrey.

On Wed, Jul 15, 2020 at 3:03 PM godfrey he <godfre...@gmail.com> wrote:

> Hi Flavio,
>
> Parquet format supports configuration from ParquetOutputFormat
> <https://www.javadoc.io/doc/org.apache.parquet/parquet-hadoop/1.10.0/org/apache/parquet/hadoop/ParquetOutputFormat.html>.
> Please refer to [1] for details.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/formats/parquet.html#format-options
>
> Best,
> Godfrey
>
>
>
> Flavio Pompermaier <pomperma...@okkam.it> 于2020年7月15日周三 下午8:44写道:
>
>> Hi to all,
>> in my current code I use the legacy Hadoop Output format to write my
>> Parquet files.
>> I wanted to use the new Parquet format of Flink 1.11 but I can't find how
>> to migrate the following properties:
>>
>> ParquetOutputFormat.setBlockSize(job, parquetBlockSize);
>> ParquetOutputFormat.setEnableDictionary(job, true);
>> ParquetOutputFormat.setCompression(job, CompressionCodecName.SNAPPY);
>>
>> Is there a way to set those configs?
>> And if not, is there a way to handle them without modifying the source of
>> the Flink connector (e.g., by extending some class)?
>>
>> Best,
>> Flavio
>>
>
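For anyone finding this thread later: per the docs in [1], options prefixed with "parquet." are forwarded to the underlying Hadoop configuration, so the three legacy calls quoted above should map to table format options. A sketch in DDL (the key names are the Hadoop config constants behind ParquetOutputFormat; the table name, schema, and path are made up for illustration):

```sql
CREATE TABLE my_parquet_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'file:///tmp/parquet-out',
  'format' = 'parquet',
  -- ParquetOutputFormat.setBlockSize(job, parquetBlockSize)
  'parquet.block.size' = '134217728',
  -- ParquetOutputFormat.setEnableDictionary(job, true)
  'parquet.enable.dictionary' = 'true',
  -- ParquetOutputFormat.setCompression(job, CompressionCodecName.SNAPPY)
  'parquet.compression' = 'SNAPPY'
);
```

The same 'parquet.*' keys can be passed when registering the table programmatically, so no connector classes need to be extended.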
