[ https://issues.apache.org/jira/browse/FLINK-21747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300246#comment-17300246 ]

Chesnay Schepler commented on FLINK-21747:
------------------------------------------

Apparently the [64kb limit|https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/#data-types] that InfluxDB imposes applies to the entire set of tags.
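
For context, the InfluxDB reporter writes each metric as a line-protocol point in which the scope variables (host, job_name, task_name, operator_name, ...) become tags, and the key that the limit is checked against covers the measurement name plus the whole tag set. A purely illustrative point (measurement, field and tag values are made up for this example) could look like:
{code}
numRecordsIn,host=tm-1,job_name=my_job,task_name=<very long SQL-derived name>,operator_name=<very long SQL-derived name> value=42
{code}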

You should be able to work around the issue by excluding these specific 
variables like this:
{code}
metrics.reporter.<name>.scope.variables.excludes: task_name;operator_name
{code}
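
For completeness, a minimal flink-conf.yaml sketch, assuming a reporter registered under the name {{influxdb}} with the 1.10-style class-based configuration (host, port and database values are placeholders):
{code}
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink_metrics
# exclude the oversized scope variables so the tag set stays under the key-length limit
metrics.reporter.influxdb.scope.variables.excludes: task_name;operator_name
{code}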

> Encounter an exception that contains "max key length exceeded ..."  when 
> reporting metrics to influxdb
> ------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-21747
>                 URL: https://issues.apache.org/jira/browse/FLINK-21747
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Metrics
>    Affects Versions: 1.10.0
>            Reporter: tim yu
>            Priority: Major
>
> I run a streaming job with a very long INSERT statement, and it reports
> metrics to InfluxDB. I find many InfluxDB exceptions containing "max key
> length exceeded ..." in the job manager's log file. The job could not write
> its metrics to InfluxDB because "task_name" and "operator_name" are too
> long.


