[
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17556023#comment-17556023
]
Shubham Bansal commented on FLINK-28021:
----------------------------------------
[~jingge] I looked at the code and understood what is required. This is
different from the Kafka metrics, because the Kafka client already provides
most of the metrics Flink is looking for, while the FileWriterBucket used by
the file connector does not. FileWriterBucket accepts the element to be
written and delegates to InProgressFileWriter implementations such as
BulkPartWriter, RowWisePartWriter, and HadoopPathBasedPartFileWriter, which
write the element using the corresponding encoders or CSV/Avro/ORC writers.
Those writers do not expose the length of the encoded element by default, so
if we want to report the bytes sent, as described in FLIP-33, we would need to
change those writers to track the encoded sizes and propagate them back to the
FileWriterBucket.
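To make the idea concrete, here is a minimal, hypothetical sketch of one way
the encoded sizes could be propagated: wrap the output stream that the part
writers write into with a byte-counting wrapper, and let the bucket poll it to
update the bytes-sent counter. The class name CountingOutputStream and the
getBytesWritten() accessor are my own illustration, not existing Flink API.
{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Hypothetical helper (not Flink API): counts every byte the wrapped stream
 * receives, so the caller can read back how large each encoded element was.
 */
public class CountingOutputStream extends FilterOutputStream {

    private long bytesWritten;

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        bytesWritten++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Write directly to the wrapped stream; FilterOutputStream's default
        // would loop over single bytes, which is slower but equally correct.
        out.write(b, off, len);
        bytesWritten += len;
    }

    /** The bucket could read this after each element (or on flush) to update the metric. */
    public long getBytesWritten() {
        return bytesWritten;
    }
}
{code}
The alternative would be to have every writer return the encoded size per
element, but wrapping the stream would keep the format-specific CSV/Avro/ORC
writers mostly unchanged; bulk writers that buffer internally would only
report accurate numbers on flush, which we would need to keep in mind.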
This is what I could figure out. Let me know what you think.
> Add FLIP-33 metrics to FileSystem connector
> -------------------------------------------
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / FileSystem
> Reporter: Martijn Visser
> Assignee: Shubham Bansal
> Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented; they
> should implement the FLIP-33 metrics.
--
This message was sent by Atlassian Jira
(v8.20.7#820007)