Hi James,

I'm not an expert on S3, but in general this should be a matter of
configuring the S3 filesystem implementation that Flink is using
(that's what ends up writing the actual files to S3).

Flink currently ships with two implementations: one based on Hadoop
and one based on Presto (which is itself Hadoop-based). Looking at the
`presto-hive` dependency used in 1.14.x, server-side encryption should
be supported [1]. You should be able to pass these options through to
the "Hadoop based configuration" used by Presto by properly prefixing
the keys in your Flink configuration [2].
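
As a rough sketch, something along these lines in flink-conf.yaml
might do it. The `s3.` prefix forwarding is what [2] implements; the
exact SSE key names should be double-checked against [1], and the KMS
key id below is just a placeholder:

    # keys prefixed with "s3." (or "presto.s3.") are forwarded to the
    # Presto S3 filesystem [2]
    s3.sse.enabled: true
    # SSE type: S3 for SSE-S3, KMS for SSE-KMS (verify names in [1])
    s3.sse.type: S3
    # only needed for SSE-KMS; <your-kms-key-id> is a placeholder
    # s3.sse.kms-key-id: <your-kms-key-id>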

I hope this helps.

[1]
https://github.com/prestodb/presto/blob/df401fd26096f1b140e7335275a2e669a1cacda4/presto-hive/src/main/java/com/facebook/presto/hive/s3/PrestoS3FileSystem.java#L782
[2]
https://github.com/apache/flink/blob/release-1.14.2/flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/S3FileSystemFactory.java#L39

Best,
D.

On Wed, Jan 5, 2022 at 12:51 AM James Timotiwu <jtimot...@salesforce.com>
wrote:

> We are trying to write objects with encryption at rest. To enable this,
> the request containing the payload we intend to upload must include a
> x-amz-server-side-encryption header. [1]. I would imagine this is a
> common use case, but after some digging, I cannot find any article that
> covers this. Has anybody configured FileSink to enable server side
> encryption for objects written into an S3 bucket? Or would we have to write
> a custom http sink that encapsulates the payload with this header?
>
> Best,
> James
>
> [1]
> https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
>
