[
https://issues.apache.org/jira/browse/AMBARI-25572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17232545#comment-17232545
]
akiyamaneko commented on AMBARI-25572:
--------------------------------------
[~payert] We have also encountered this problem. Is there any plan to fix it?
If no one is handling it, I would like to try fixing it.
> Metrics cannot be stored if mutation size is not set properly
> -------------------------------------------------------------
>
> Key: AMBARI-25572
> URL: https://issues.apache.org/jira/browse/AMBARI-25572
> Project: Ambari
> Issue Type: Task
> Components: ambari-metrics
> Affects Versions: 2.7.3, 2.7.4, 2.7.5
> Reporter: Tamas Payer
> Priority: Major
> Labels: metric-collector, reliability
>
> Ambari Metrics Collector sometimes fails to store metrics because the batch
> size settings do not match the current row size, so the assembled batch
> likely exceeds the maximum allowed batch memory.
> {code:java}
> 2020-10-20 10:29:04,752 WARN
> org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Failed on
> insert records to store : ERROR 730 (LIM02): MutationState size is bigger
> than maximum allowed number of bytes
> 2020-10-20 10:29:04,752 WARN
> org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Metric that
> cannot be stored :
> [kafka.server.ConsumerGroupMetrics.CommittedOffset.clientId.-.group.console-consumer-46399.partition.0.topic.PREPROD_KAFKA_CBS_TRANSACTION._sum,kafka_broker]{1603186125049=2.65274693E8}{code}
> This error can be eliminated by tuning the *phoenix.mutate.batchSize*
> and *phoenix.mutate.batchSizeBytes* configuration settings.
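> As a hedged sketch (the values below are illustrative only, not tested
> recommendations): these are Phoenix client-side properties, so they would
> typically be lowered in the hbase-site.xml on the collector's client
> classpath; in Ambari-managed setups the exact config section that exposes
> them (e.g. ams-hbase-site) is an assumption here.
> {code:xml}
> <!-- Illustrative values only: cap both the row count and the byte size
>      of a mutation batch so it fits under the MutationState limit -->
> <property>
>   <name>phoenix.mutate.batchSize</name>
>   <value>10000</value>
> </property>
> <property>
>   <name>phoenix.mutate.batchSizeBytes</name>
>   <value>1048576</value>
> </property>
> {code}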
> *Assess the feasibility of changing the AMS implementation so that it
> retries upon failure after adjusting the auto-commit or batch size (or
> whatever else is needed for a successful commit) and notifies the user
> about suboptimal batch size settings.*
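> The retry proposal above could be sketched roughly as follows. This is a
> hypothetical helper, not actual AMS code: {{commitWithBackoff}} and the
> {{commit}} callback stand in for PhoenixHBaseAccessor's real
> upsert-and-commit path, and the halving strategy is one possible choice.
> {code:java}
> import java.util.List;
> import java.util.function.Consumer;
>
> // Hedged sketch: on a commit failure (e.g. Phoenix ERROR 730), halve the
> // batch size, warn about the suboptimal setting, and retry the rest.
> public class MutationRetrySketch {
>     static <T> void commitWithBackoff(List<T> rows, int batchSize, Consumer<List<T>> commit) {
>         int i = 0;
>         int size = Math.max(1, batchSize);
>         while (i < rows.size()) {
>             List<T> chunk = rows.subList(i, Math.min(i + size, rows.size()));
>             try {
>                 commit.accept(chunk);     // stand-in for the real upsert + commit
>                 i += chunk.size();        // advance past successfully stored rows
>             } catch (RuntimeException e) {
>                 if (size == 1) throw e;   // even a single row fails: give up
>                 size = size / 2;          // shrink the batch and retry this span
>                 System.err.println("Batch too large, retrying with batch size " + size);
>             }
>         }
>     }
> }
> {code}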
--
This message was sent by Atlassian Jira
(v8.3.4#803005)