[
https://issues.apache.org/jira/browse/BEAM-4783?focusedWorklogId=144397&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-144397
]
ASF GitHub Bot logged work on BEAM-4783:
----------------------------------------
Author: ASF GitHub Bot
Created on: 14/Sep/18 18:36
Start Date: 14/Sep/18 18:36
Worklog Time Spent: 10m
Work Description: kyle-winkelman commented on issue #6181: [BEAM-4783]
Add bundleSize for splitting BoundedSources.
URL: https://github.com/apache/beam/pull/6181#issuecomment-421447794
In my use case we use spark.dynamicAllocation as a way to remove a knob
(--num-executors) in our attempt to become knobless. When running in batch mode,
Spark creates the SourceRDDs and, based on their number of partitions, tries to
spin up that many executors. This completely backfires when the SourceRDD is
partitioned based on defaultParallelism, because that will now equal 2 (the
default --num-executors).
If you prefer we could prevent the bundleSize from being a knob and always
use 64MB (Apache Hadoop default block size).
I understand why streaming acts this way, but for batch the users are
going to have to guess how many executors they need. If they do not guess high
enough, it is entirely possible to end up with >2GB of data in a partition
(https://issues.apache.org/jira/browse/SPARK-6235). Starting at 64MB per
partition does not eliminate this possibility, but it does reduce the chances.
For example, if a user reads a 10GB file with 1 executor, it would fail if it
ever tried to cache the partition, but by breaking the data into 64MB partitions
it has a chance of succeeding (depending on executor memory, etc.).
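The partition-count arithmetic behind that example can be sketched as follows. This is a minimal illustration, not the actual Beam splitting code; the class and method names are hypothetical:

```java
public class BundleSizing {
    // 64 MB, the Apache Hadoop default block size mentioned above.
    static final long DEFAULT_BUNDLE_SIZE = 64L * 1024 * 1024;

    // Partition count when splitting by bundle size (ceiling division,
    // so a trailing partial bundle still gets its own partition).
    static long partitionsByBundleSize(long totalBytes, long bundleSizeBytes) {
        return (totalBytes + bundleSizeBytes - 1) / bundleSizeBytes;
    }

    public static void main(String[] args) {
        long tenGiB = 10L * 1024 * 1024 * 1024;
        // Splitting by defaultParallelism = 2 yields two 5 GiB partitions,
        // each far over Spark's ~2 GB per-partition limit (SPARK-6235).
        System.out.println(tenGiB / 2); // 5368709120
        // Splitting into 64 MB bundles yields 160 partitions instead.
        System.out.println(partitionsByBundleSize(tenGiB, DEFAULT_BUNDLE_SIZE)); // 160
    }
}
```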
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 144397)
Time Spent: 2h 40m (was: 2.5h)
> Spark SourceRDD Not Designed With Dynamic Allocation In Mind
> ------------------------------------------------------------
>
> Key: BEAM-4783
> URL: https://issues.apache.org/jira/browse/BEAM-4783
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark
> Affects Versions: 2.5.0
> Reporter: Kyle Winkelman
> Assignee: Jean-Baptiste Onofré
> Priority: Major
> Labels: newbie
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
> When the spark-runner is used with the configuration
> spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It
> then falls back to defaultParallelism, computed as described in this comment:
> // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
> // when running on Mesos it's 8.
> // when running local it's the total number of cores (local = 1, local[N] = N,
> //   local[*] = estimation of the machine's cores).
> // ** the configuration "spark.default.parallelism" takes precedence over all of the above **
> So in most cases this default is quite small. This is an issue when using a
> very large input file, as it will only get split in half.
> I believe that when Dynamic Allocation is enabled the SourceRDD should use the
> DEFAULT_BUNDLE_SIZE, and possibly expose a SparkPipelineOptions setting that
> allows you to change this DEFAULT_BUNDLE_SIZE.
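The proposal above amounts to choosing a desired bundle size instead of dividing the input by defaultParallelism when dynamic allocation is on. A minimal sketch of that decision, with hypothetical names (this is not the actual spark-runner code):

```java
public class SplitStrategy {
    // Hypothetical default, mirroring the DEFAULT_BUNDLE_SIZE named above.
    static final long DEFAULT_BUNDLE_SIZE = 64L * 1024 * 1024;

    // With dynamic allocation, defaultParallelism reflects only the executors
    // currently running (often 2), so split by a fixed bundle size instead;
    // otherwise keep the existing size-over-parallelism behavior.
    static long desiredBundleSize(boolean dynamicAllocationEnabled,
                                  long totalBytes, int defaultParallelism) {
        if (dynamicAllocationEnabled) {
            return DEFAULT_BUNDLE_SIZE;
        }
        return Math.max(1, totalBytes / defaultParallelism);
    }
}
```

With this choice, a 10GB input is split into ~160 bundles under dynamic allocation rather than 2 halves of 5GB each.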
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)