Steven Zhen Wu commented on FLINK-9061:

[~jgrier] Amazon doesn't want to reveal internal details, so their guidance is 
sometimes pretty vague. My understanding is that the prefix (before the 
random/entropy part) has to be fixed. Either way, the latest proposal doesn't 
prevent users from setting the first few characters of the key as the 
random/entropy part. 

> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -----------------------------------------------------------------------------
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.4.2
>            Reporter: Jamie Grier
>            Priority: Critical
> I think we need to modify the way we write checkpoints to S3 for high-scale 
> jobs (those with many total tasks).  The issue is that we are writing all the 
> checkpoint data under a common key prefix.  This is the worst case scenario 
> for S3 performance since the key is used as a partition key.
> In the worst case checkpoints fail with a 500 status code coming back from S3 
> and an internal error type of TooBusyException.
> One possible solution would be to add a hook in the Flink filesystem code 
> that allows me to "rewrite" paths.  For example say I have the checkpoint 
> directory set to:
> s3://bucket/flink/checkpoints
> I would hook that and rewrite that path to:
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original 
> path
> This would distribute the checkpoint write load around the S3 cluster evenly.
> For reference: 
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
> Has anyone else hit this issue?  Any other ideas for solutions?  This is a 
> pretty serious problem for people trying to checkpoint to S3.
> -Jamie
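
For illustration, the rewrite proposed in the description above could look 
roughly like the sketch below. It is not an actual Flink hook; it just prepends 
a short hash of the original checkpoint path to the key, so different 
checkpoint directories land on different S3 partitions:

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Rough sketch of the proposed rewrite (hypothetical, not a real Flink hook):
// s3://bucket/flink/checkpoints -> s3://bucket/<HASH>/flink/checkpoints
public class HashedCheckpointPathSketch {

    static String rewrite(String original) throws NoSuchAlgorithmException {
        int keyStart = original.indexOf('/', original.indexOf("://") + 3);
        String bucket = original.substring(0, keyStart);   // "s3://bucket"
        String key = original.substring(keyStart + 1);     // "flink/checkpoints"

        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(original.getBytes(StandardCharsets.UTF_8));
        StringBuilder hash = new StringBuilder();
        for (int i = 0; i < 4; i++) {                       // 8 hex chars of the hash
            hash.append(String.format("%02x", digest[i] & 0xff));
        }
        return bucket + "/" + hash + "/" + key;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(rewrite("s3://bucket/flink/checkpoints"));
    }
}
{code}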
