[
https://issues.apache.org/jira/browse/BEAM-10294?focusedWorklogId=449698&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-449698
]
ASF GitHub Bot logged work on BEAM-10294:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 23/Jun/20 09:03
Start Date: 23/Jun/20 09:03
Worklog Time Spent: 10m
Work Description: davidak09 commented on a change in pull request #12063:
URL: https://github.com/apache/beam/pull/12063#discussion_r444071819
##########
File path:
runners/spark/src/main/java/org/apache/beam/runners/spark/metrics/MetricsAccumulator.java
##########
@@ -87,13 +86,13 @@ public static MetricsContainerStepMapAccumulator getInstance() {
}
}
-  private static Optional<MetricsContainerStepMap> recoverValueFromCheckpoint(
+  private static Optional<SparkMetricsContainerStepMap> recoverValueFromCheckpoint(
       JavaSparkContext jsc, CheckpointDir checkpointDir) {
try {
Path beamCheckpointPath = checkpointDir.getBeamCheckpointDir();
       checkpointFilePath = new Path(beamCheckpointPath, ACCUMULATOR_CHECKPOINT_FILENAME);
       fileSystem = checkpointFilePath.getFileSystem(jsc.hadoopConfiguration());
- MetricsContainerStepMap recoveredValue =
+ SparkMetricsContainerStepMap recoveredValue =
Review comment:
I'm not sure about this one - specifically whether it's backward compatible: a checkpoint that was written with the old `MetricsContainerStepMap` type may not deserialize as `SparkMetricsContainerStepMap`.
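The backward-compatibility concern can be illustrated with plain Java serialization. The class names below are stand-ins, not the real Beam classes: an object serialized as the base type stays a base-type instance when read back, so recovery code that expects the subclass would fail.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical stand-in for the old checkpointed value type
// (think MetricsContainerStepMap).
class OldValue implements Serializable {
  private static final long serialVersionUID = 1L;
}

// Hypothetical stand-in for the new subclass
// (think SparkMetricsContainerStepMap).
class NewValue extends OldValue {
  private static final long serialVersionUID = 2L;
}

public class CheckpointCompat {
  public static void main(String[] args) throws Exception {
    // Write an "old" checkpoint.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(new OldValue());
    }
    // Read it back: the stream still describes an OldValue instance,
    // so casting the result to NewValue would throw ClassCastException.
    try (ObjectInputStream in =
        new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      Object recovered = in.readObject();
      System.out.println(recovered instanceof NewValue); // false
    }
  }
}
```

This is why recovering an existing checkpoint into the changed type needs either a migration step or code that accepts both types.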
##########
File path:
runners/spark/src/main/java/org/apache/beam/runners/spark/metrics/SparkBeamMetric.java
##########
@@ -51,11 +51,12 @@
}
for (MetricResult<DistributionResult> metricResult :
metricQueryResults.getDistributions()) {
DistributionResult result = metricResult.getAttempted();
- metrics.put(renderName(metricResult) + ".count", result.getCount());
- metrics.put(renderName(metricResult) + ".sum", result.getSum());
- metrics.put(renderName(metricResult) + ".min", result.getMin());
- metrics.put(renderName(metricResult) + ".max", result.getMax());
- metrics.put(renderName(metricResult) + ".mean", result.getMean());
+ String name = renderName(metricResult);
+ metrics.put(name + ".count", result.getCount());
+ metrics.put(name + ".sum", result.getSum());
+ metrics.put(name + ".min", result.getMin());
+ metrics.put(name + ".max", result.getMax());
+ metrics.put(name + ".mean", result.getMean());
Review comment:
I'd personally prefer a single metric, possibly with a `.distribution`
suffix, that includes all five stats (count, sum, min, max, mean); it would
definitely be more readable in the Spark UI.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 449698)
Time Spent: 0.5h (was: 20m)
> Beam metrics are unreadable in Spark history server
> ---------------------------------------------------
>
> Key: BEAM-10294
> URL: https://issues.apache.org/jira/browse/BEAM-10294
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Reporter: David Janicek
> Priority: P2
> Attachments: image-2020-06-22-13-48-08-880.png
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Beam metrics shown in the Spark history server are not readable. They are
> rendered as JSON created from the protocol buffer defined in *beam_job_api.proto*,
> where a metric's value is declared as bytes.
> !image-2020-06-22-13-48-08-880.png!
> A similar issue was already addressed and fixed in BEAM-6062, but the fix was
> broken by BEAM-4552.
> A possible solution is to use *SparkMetricsContainerStepMap* instead of
> *MetricsContainerStepMap* inside *MetricsContainerStepMapAccumulator*, as in
> BEAM-6062.
>
>
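The BEAM-6062-style fix described above can be sketched with stand-in classes (the real types live in the Beam Spark runner; names and fields here are illustrative only). The point is that the subclass only overrides `toString()` so the Spark history server renders readable text instead of the protobuf-derived bytes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for MetricsContainerStepMap: its default toString()
// is assumed to yield the unreadable protobuf-based representation.
class MetricsContainerStepMapStub {
  protected final Map<String, Long> counters = new LinkedHashMap<>();

  void update(String name, long value) {
    counters.put(name, value);
  }
}

// Sketch of the fix: a subclass whose toString() produces human-readable
// output, so the value shown in the Spark UI is legible.
class SparkMetricsContainerStepMapStub extends MetricsContainerStepMapStub {
  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    counters.forEach((name, value) ->
        sb.append(name).append('=').append(value).append('\n'));
    return sb.toString();
  }
}
```

As the review comment in this worklog notes, swapping the accumulator's value type this way still raises the question of whether checkpoints written with the old type can be recovered.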
--
This message was sent by Atlassian Jira
(v8.3.4#803005)