[
https://issues.apache.org/jira/browse/BEAM-5987?focusedWorklogId=168289&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-168289
]
ASF GitHub Bot logged work on BEAM-5987:
----------------------------------------
Author: ASF GitHub Bot
Created on: 21/Nov/18 14:09
Start Date: 21/Nov/18 14:09
Worklog Time Spent: 10m
Work Description: VaclavPlajt commented on a change in pull request
#7091: [BEAM-5987] Spark: Share cached side inputs between tasks.
URL: https://github.com/apache/beam/pull/7091#discussion_r235399531
##########
File path:
runners/spark/src/main/java/org/apache/beam/runners/spark/translation/MultiDoFnFunction.java
##########
@@ -61,6 +66,17 @@
 public class MultiDoFnFunction<InputT, OutputT>
     implements PairFlatMapFunction<Iterator<WindowedValue<InputT>>, TupleTag<?>, WindowedValue<?>> {
+  private static final Logger LOG = LoggerFactory.getLogger(MultiDoFnFunction.class);
+
+  /** JVM wide side input cache. */
+  private static final Map<String, CachedSideInputReader> sideInputReaders =
+      Collections.synchronizedMap(new WeakHashMap<>());
+
+  /**
+   * Id that is consistent among executors. We can not use stepName because of possible collisions.
Review comment:
I do not get the meaning of 'consistent' here. Do you mean random (and most
likely distinct), even within one JVM?
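
For context, the pattern under review can be sketched as below. This is a minimal, self-contained illustration of a JVM-wide cache built on `Collections.synchronizedMap(new WeakHashMap<>())` with an atomic `computeIfAbsent` lookup; the `CachedReader` class and `readerFor` method are hypothetical stand-ins, not the actual `CachedSideInputReader` API from the PR:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class SideInputCacheSketch {

  /** Hypothetical stand-in for the PR's CachedSideInputReader. */
  static final class CachedReader {
    final String id;
    CachedReader(String id) { this.id = id; }
  }

  // JVM-wide cache: the synchronizedMap wrapper makes computeIfAbsent
  // atomic, and WeakHashMap allows entries to be garbage-collected once
  // no task strongly references the key.
  private static final Map<String, CachedReader> READERS =
      Collections.synchronizedMap(new WeakHashMap<>());

  static CachedReader readerFor(String stableId) {
    // Every task in this JVM that looks up the same id shares one reader,
    // so the materialized side input is deserialized only once per JVM
    // rather than once per task.
    return READERS.computeIfAbsent(stableId, CachedReader::new);
  }

  public static void main(String[] args) {
    CachedReader a = readerFor("step-1");
    CachedReader b = readerFor("step-1");
    System.out.println(a == b); // the same instance is shared within the JVM
  }
}
```

Note that in this toy `main` the string-literal key is held by the class itself, so the weak entry would never actually be collected; in the real runner the key's lifetime is what bounds the cache entry's lifetime.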
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 168289)
Time Spent: 1.5h (was: 1h 20m)
> Spark SideInputReader performance
> ---------------------------------
>
> Key: BEAM-5987
> URL: https://issues.apache.org/jira/browse/BEAM-5987
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Affects Versions: 2.8.0
> Reporter: David Moravek
> Assignee: David Moravek
> Priority: Major
> Fix For: 2.9.0
>
> Attachments: Screen Shot 2018-11-06 at 13.05.36.png
>
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> We did some profiling of a Spark job and 90% of the application time was
> spent on side input deserialization.
> For Spark, an easy fix is to cache materialized side inputs per bundle. This
> improved the running time of the profiled job from 3 hours to 30 minutes.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)