[https://issues.apache.org/jira/browse/FLINK-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17687102#comment-17687102]
Piotr Nowojski commented on FLINK-18235:
----------------------------------------
[~dianfu], may I ask if you have considered implementing a snapshot strategy
similar to the one in the {{AsyncWaitOperator}}? Namely:
# After serializing incoming records, keep them buffered in the operator's
memory (not in state yet!)
# Once a record has been successfully processed, remove it from the buffer.
# When a checkpoint happens (the {{snapshotState}} call), just copy the
in-flight records from the buffer to a {{ListState}} - no need to flush or
wait for the in-flight records to finish processing.
# During recovery, re-process the records from the recovered {{ListState}}
(see the sketch after this list).
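To make the four steps concrete, here is a minimal sketch in Java. The class and the {{onRecordSent}}/{{onRecordCompleted}} hooks are hypothetical, not the actual PyFlink operator API; only the {{snapshotState}}/{{initializeState}} hooks and the {{ListState}} usage follow the standard Flink operator-state API:
{code:java}
// A minimal sketch of the four steps above, assuming a hypothetical Python
// UDF operator. Class, field and hook names are illustrative.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
import org.apache.flink.runtime.state.StateInitializationContext;
import org.apache.flink.runtime.state.StateSnapshotContext;

public class BufferedPythonUdfOperatorSketch {

    // In-flight serialized records, kept in operator memory only.
    private final Deque<byte[]> inFlight = new ArrayDeque<>();

    // Operator state, written only when a checkpoint is taken.
    private ListState<byte[]> inFlightState;

    // Step 1: buffer each serialized record before handing it to the worker.
    void onRecordSent(byte[] serializedRecord) {
        inFlight.addLast(serializedRecord);
        // ... actually send serializedRecord to the Python worker ...
    }

    // Step 2: drop a record from the buffer once the worker has processed it
    // (assumes the worker completes records in FIFO order).
    void onRecordCompleted() {
        inFlight.removeFirst();
    }

    // Step 3: on checkpoint, copy the buffer into ListState; no flushing,
    // no waiting for the Python worker to catch up.
    public void snapshotState(StateSnapshotContext context) throws Exception {
        inFlightState.update(new ArrayList<>(inFlight));
    }

    // Step 4: on recovery, re-send whatever was in flight at checkpoint time.
    public void initializeState(StateInitializationContext context) throws Exception {
        inFlightState = context.getOperatorStateStore().getListState(
                new ListStateDescriptor<>(
                        "in-flight-records", BytePrimitiveArraySerializer.INSTANCE));
        for (byte[] record : inFlightState.get()) {
            onRecordSent(record);
        }
    }
}
{code}
The key point, as with {{AsyncWaitOperator}}, is that {{snapshotState}} only copies the buffer, so checkpoint duration no longer depends on how fast the Python worker drains it.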
> Improve the checkpoint strategy for Python UDF execution
> --------------------------------------------------------
>
> Key: FLINK-18235
> URL: https://issues.apache.org/jira/browse/FLINK-18235
> Project: Flink
> Issue Type: Improvement
> Components: API / Python
> Reporter: Dian Fu
> Assignee: Dian Fu
> Priority: Major
> Labels: auto-deprioritized-major, stale-assigned
>
> Currently, when a checkpoint is triggered for the Python operator, all the
> buffered data is flushed to the Python worker to be processed. This can
> significantly increase the overall checkpoint time when many elements are
> buffered and the Python UDF is slow. We should improve the checkpoint
> strategy to address this. One way to implement this is to limit the amount
> of data buffered in the pipeline between the Java and Python processes,
> similar to what
> [FLIP-183|https://cwiki.apache.org/confluence/display/FLINK/FLIP-183%3A+Dynamic+buffer+size+adjustment]
> does to control the amount of data buffered in the network. We could also
> let users configure the checkpoint strategy if needed.
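As a rough illustration of the buffering-control idea in the description above: a fixed permit budget on the Java side caps how much data can be in flight toward the Python worker at checkpoint time. This is not FLIP-183 itself (which adjusts network buffer sizes dynamically) nor an existing PyFlink API; the class and method names are assumptions:
{code:java}
// A hypothetical sketch of bounding the in-flight data between the Java and
// Python processes with a fixed permit budget.
import java.util.concurrent.Semaphore;

public class BoundedPythonChannelSketch {

    // Caps how many records may be buffered toward the Python worker, which
    // in turn caps the work a checkpoint barrier has to wait behind.
    private final Semaphore permits;

    public BoundedPythonChannelSketch(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Java side, before sending a record: blocks (back pressure) once the
    // configured budget is exhausted.
    public void send(byte[] serializedRecord) throws InterruptedException {
        permits.acquire();
        // ... write serializedRecord to the Python worker ...
    }

    // Called when the Python worker acknowledges a processed record.
    public void onRecordCompleted() {
        permits.release();
    }
}
{code}
A real implementation would presumably make the budget dynamic, as FLIP-183 does for network buffers, rather than a fixed constant.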
--
This message was sent by Atlassian Jira
(v8.20.10#820010)