Alternatively, you can check it through Kafka-Manager.

On Wed, Jun 3, 2020 at 4:45 PM, guanyq <[email protected]> wrote:
> Found it. It is available natively: committedOffsets-currentOffsets.
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/monitoring/metrics.html#reporter
>
> Connectors > Kafka Connectors:
>
> | Scope | Metrics | User Variables | Description | Type |
> | Operator | commitsSucceeded | n/a | The total number of successful offset commits to Kafka, if offset committing is turned on and checkpointing is enabled. | Counter |
> | Operator | commitsFailed | n/a | The total number of offset commit failures to Kafka, if offset committing is turned on and checkpointing is enabled. Note that committing offsets back to Kafka is only a means to expose consumer progress, so a commit failure does not affect the integrity of Flink's checkpointed partition offsets. | Counter |
> | Operator | committedOffsets | topic, partition | The last successfully committed offsets to Kafka, for each partition. A particular partition's metric can be specified by topic name and partition id. | Gauge |
> | Operator | currentOffsets | topic, partition | The consumer's current read offset, for each partition. A particular partition's metric can be specified by topic name and partition id. | Gauge |
>
> On 2020-06-03 15:02:24, "guanyq" <[email protected]> wrote:
> > Is there a demo of the Kafka backlog metrics, or any reference material?
> >
> > On 2020-06-03 14:31:56, "1530130567" <[email protected]> wrote:
> > > Hi:
> > >   You could consider collecting the Kafka metrics with Prometheus and displaying them in Grafana.
> > >
> > > ------------------ Original message ------------------
> > > From: "Zhonghan Tang" <[email protected]>
> > > Sent: Wednesday, June 3, 2020, 2:29 PM
> > > To: "user-zh" <[email protected]>
> > > Cc: "user-zh" <[email protected]>
> > > Subject: Re: flink1.9, how to view the Kafka consumption backlog in real time
> > >
> > > Usually you can check it with Kafka's built-in consumer-group command-line tool:
> > > ./kafka-consumer-groups.sh --describe --group test-consumer-group --bootstrap-server
> > >
> > > Zhonghan Tang
> > > [email protected]
> > >
> > > On Jun 3, 2020, at 14:10, guanyq <[email protected]> wrote:
> > > I'd like to ask a question:
> > > 1. When consuming from Kafka, how do you view a Kafka topic's backlog in real time?
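
For the Prometheus plus Grafana route suggested in the thread, a minimal reporter set-up sketch for flink-conf.yaml is below. It assumes the flink-metrics-prometheus jar has been copied into Flink's lib/ directory; the reporter name "prom" and the port range are only example choices to adapt to your deployment. Once the reporter is running, the committedOffsets and currentOffsets gauges from the table above are exposed per topic and partition and can be scraped by Prometheus and graphed in Grafana.

# flink-conf.yaml (sketch; reporter name and port range are arbitrary examples)
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.port: 9250-9260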

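As a complement to the kafka-consumer-groups.sh command mentioned in the thread, the backlog can also be read programmatically from the broker side. The following is only a sketch built on the plain kafka-clients AdminClient: it compares the group's committed offsets with the partitions' log-end offsets, and the bootstrap address and group id are placeholders. Keep in mind, as the metric descriptions above note, that Flink only commits offsets back to Kafka when offset committing and checkpointing are enabled, so the committed offsets may trail the job's actual read position.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

import java.util.Map;
import java.util.Properties;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092";      // placeholder: point at your brokers
        String groupId = "test-consumer-group";   // placeholder: the Flink job's group.id

        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);

        try (AdminClient admin = AdminClient.create(adminProps)) {
            // Offsets the consumer group has committed, per partition.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets(groupId)
                         .partitionsToOffsetAndMetadata()
                         .get();

            // A throwaway consumer, used only to ask the brokers for log-end offsets.
            Properties consProps = new Properties();
            consProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            consProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
            consProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consProps)) {
                Map<TopicPartition, Long> endOffsets = consumer.endOffsets(committed.keySet());

                long total = 0L;
                for (Map.Entry<TopicPartition, OffsetAndMetadata> e : committed.entrySet()) {
                    if (e.getValue() == null) {
                        continue; // no committed offset for this partition yet
                    }
                    long lag = endOffsets.get(e.getKey()) - e.getValue().offset();
                    total += lag;
                    System.out.printf("%s backlog=%d%n", e.getKey(), lag);
                }
                System.out.println("total backlog=" + total);
            }
        }
    }
}

Run periodically, this prints roughly the same per-partition numbers as the --describe output of kafka-consumer-groups.sh.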