[
https://issues.apache.org/jira/browse/FLINK-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yun Tang updated FLINK-11695:
-----------------------------
Description:
We have hit this annoying problem many times when the {{sharedStateDir}} in the
checkpoint path exceeds the directory item limit because of large checkpoints:
{code:java}
org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException:
The directory item limit of xxx is exceeded: limit=1048576 items=1048576
{code}
Our proposed solution is to let {{FsCheckpointStorage}} create sub-directories
when calling {{resolveCheckpointStorageLocation}}. The default number of
sub-directories is zero, which keeps backward compatibility with the current
behavior. The created sub-directories are named with the integer values in the
range [{{0, num-of-sub-dirs}}).
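A minimal sketch of how the bucketing could look (the class, method and field
names below are illustrative assumptions, not the actual Flink API, and the
example uses {{java.nio.file}} instead of Flink's own file-system abstraction):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Illustrative sketch only: spreads shared state files across numbered
 * sub-directories of the shared state dir so that no single directory
 * collects more items than the HDFS directory item limit allows.
 */
public class SharedStateSubDirSketch {

    private final Path sharedStateDir;
    private final int numSubDirs; // 0 keeps the current flat layout

    public SharedStateSubDirSketch(Path sharedStateDir, int numSubDirs) {
        this.sharedStateDir = sharedStateDir;
        this.numSubDirs = numSubDirs;
    }

    /** Resolves the directory a given state file should be written to. */
    public Path resolveTargetDir(String stateFileName) throws IOException {
        if (numSubDirs <= 0) {
            // backward compatible: everything stays directly under sharedStateDir
            return sharedStateDir;
        }
        // stable mapping of a file to one of the sub-dirs named "0" .. "numSubDirs - 1"
        int bucket = Math.floorMod(stateFileName.hashCode(), numSubDirs);
        Path subDir = sharedStateDir.resolve(Integer.toString(bucket));
        Files.createDirectories(subDir); // no-op if the sub-dir already exists
        return subDir;
    }

    public static void main(String[] args) throws IOException {
        SharedStateSubDirSketch sketch =
                new SharedStateSubDirSketch(Paths.get("/tmp/checkpoints/shared"), 16);
        System.out.println(sketch.resolveTargetDir("3f2c9a-sst-000042"));
    }
}
{code}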
was:
We have hit this annoying problem many times when the {{sharedStateDir}} in the
checkpoint path exceeds the directory item limit because of large checkpoints:
{code:java}
org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException:
The directory item limit of xxx is exceeded: limit=1048576 items=1048576
{code}
Our proposed solution is to let {{FsCheckpointStorage}} create sub-directories
when calling {{resolveCheckpointStorageLocation}}. The default number of
sub-directories is zero, which keeps backward compatibility with the current
behavior. The created sub-directories are named with the integer values of
\{{0 ~ num-of-sub-dirs}}.
> [checkpoint] Make sharedStateDir create sub-directories to avoid
> MaxDirectoryItemsExceededException
> ---------------------------------------------------------------------------------------------------------
>
> Key: FLINK-11695
> URL: https://issues.apache.org/jira/browse/FLINK-11695
> Project: Flink
> Issue Type: Improvement
> Components: State Backends, Checkpointing
> Reporter: Yun Tang
> Assignee: Yun Tang
> Priority: Major
>
> We have hit this annoying problem many times when the {{sharedStateDir}} in the
> checkpoint path exceeds the directory item limit because of large checkpoints:
> {code:java}
> org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException:
> The directory item limit of xxx is exceeded: limit=1048576 items=1048576
> {code}
> Our proposed solution is to let {{FsCheckpointStorage}} create sub-directories
> when calling {{resolveCheckpointStorageLocation}}. The default number of
> sub-directories is zero, which keeps backward compatibility with the current
> behavior. The created sub-directories are named with the integer values in the
> range [{{0, num-of-sub-dirs}}).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)