I’m working with a Kafka environment where I’m limited to 100 partitions at 1 GB `log.retention.bytes` each. I’m looking to implement exactly-once processing from this Kafka source to an S3 sink.
If I have understood correctly, Flink will only commit the Kafka offsets once the data has been checkpointed and saved to S3.
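For context, here is a minimal sketch of the kind of job I have in mind, using PyFlink's `KafkaSource` and `FileSink` connectors with exactly-once checkpointing enabled. The broker address, topic, group id, and bucket path are placeholders, and this assumes the Kafka and filesystem connector jars (plus an S3 filesystem plugin) are available to the cluster:

```python
# Sketch only: assumes PyFlink >= 1.15 with the Kafka and FileSystem
# connectors on the classpath; broker, topic, and S3 path are placeholders.
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema, Encoder
from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode
from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer
from pyflink.datastream.connectors.file_system import FileSink, RollingPolicy

env = StreamExecutionEnvironment.get_execution_environment()
# Exactly-once delivery hinges on checkpointing: Kafka offsets are only
# committed as part of a completed checkpoint, and FileSink only finalizes
# in-progress part files when a checkpoint completes.
env.enable_checkpointing(60_000, CheckpointingMode.EXACTLY_ONCE)

source = (KafkaSource.builder()
          .set_bootstrap_servers("broker:9092")          # placeholder
          .set_topics("my-topic")                        # placeholder
          .set_group_id("my-group")                      # placeholder
          .set_starting_offsets(KafkaOffsetsInitializer.committed_offsets())
          .set_value_only_deserializer(SimpleStringSchema())
          .build())

sink = (FileSink.for_row_format("s3://my-bucket/output",  # placeholder
                                Encoder.simple_string_encoder())
        .with_rolling_policy(RollingPolicy.default_rolling_policy())
        .build())

(env.from_source(source, WatermarkStrategy.no_watermarks(), "kafka-source")
    .sink_to(sink))
env.execute("kafka-to-s3")
```

My understanding is that on failure the job restores from the last checkpoint and replays from the last committed offsets, so nothing between the checkpoint and the failure is lost as long as it is still within retention.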
I've seen the class definition for the sink function:
    class SinkFunction(JavaFunctionWrapper):
        """The base class for SinkFunctions."""

        def __init__(self, sink_func: Union[str, JavaObject]):
            """
            Constructor of SinkFunction.

            :param sink_func: The java SinkFunction
            """