[
https://issues.apache.org/jira/browse/BEAM-4783?focusedWorklogId=192939&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-192939
]
ASF GitHub Bot logged work on BEAM-4783:
----------------------------------------
Author: ASF GitHub Bot
Created on: 31/Jan/19 17:12
Start Date: 31/Jan/19 17:12
Worklog Time Spent: 10m
Work Description: kyle-winkelman commented on pull request #7690:
[BEAM-4783] Fix issues created in #6181.
URL: https://github.com/apache/beam/pull/7690
Fix issues created in #6181 that were incorrectly fixed in #6884 (although that
PR greatly improved readability).
Before any work on
[BEAM-4783](https://issues.apache.org/jira/browse/BEAM-4783),
GroupCombineFunctions would always use a `new
HashPartitioner(rdd.rdd().sparkContext().defaultParallelism());`.
The intent was to skip the creation of this `Partitioner` and call
`groupByKey()` with no arguments only when the new bundleSize option was in
use. #6181 actually did the opposite, causing a performance regression, and
because #6181 was hard to read, #6884 did not fix it correctly.
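For reference, a minimal sketch of the intended selection logic (names and the shape of the bundleSize check are illustrative assumptions, not the exact Beam source):

```java
import org.apache.spark.HashPartitioner;
import org.apache.spark.api.java.JavaPairRDD;

/**
 * Minimal sketch of the intended partitioner selection; the class, method, and
 * parameter names are illustrative, not the exact Beam source.
 */
final class GroupByKeyPartitioningSketch {
  static <K, V> JavaPairRDD<K, Iterable<V>> groupByKeyOnly(
      JavaPairRDD<K, V> rdd,
      Long bundleSize /* hypothetical: value of the bundleSize option, null when unset */) {
    if (bundleSize != null && bundleSize > 0) {
      // bundleSize is in use: skip creating a Partitioner and let Spark decide.
      return rdd.groupByKey();
    }
    // Original (pre-BEAM-4783) behavior: partition by the context's default parallelism.
    return rdd.groupByKey(
        new HashPartitioner(rdd.rdd().sparkContext().defaultParallelism()));
  }
}
```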
@iemejia @timrobertson100
Issue Time Tracking
-------------------
Worklog Id: (was: 192939)
Time Spent: 7h 10m (was: 7h)
> Add bundleSize parameter to control splitting of Spark sources (useful for
> Dynamic Allocation)
> ----------------------------------------------------------------------------------------------
>
> Key: BEAM-4783
> URL: https://issues.apache.org/jira/browse/BEAM-4783
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark
> Affects Versions: 2.8.0
> Reporter: Kyle Winkelman
> Assignee: Kyle Winkelman
> Priority: Major
> Fix For: 2.8.0, 2.9.0
>
> Time Spent: 7h 10m
> Remaining Estimate: 0h
>
> When the spark-runner is used along with the configuration
> spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It
> then falls back to the parallelism described in this comment:
> // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
> // when running on Mesos it's 8.
> // when running local it's the total number of cores
> // (local = 1, local[N] = N, local[*] = estimation of the machine's cores).
> // ** the configuration "spark.default.parallelism" takes precedence over all of the above **
> So in most cases this default is quite small. This is an issue when using a
> very large input file as it will only get split in half.
> I believe that when Dynamic Allocation is enabled, the SourceRDD should use the
> DEFAULT_BUNDLE_SIZE and possibly expose a SparkPipelineOptions option that allows
> you to change this DEFAULT_BUNDLE_SIZE.
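For anyone trying the option, a rough usage sketch, assuming the bundleSize parameter added for this issue is exposed on SparkPipelineOptions as `--bundleSize` in bytes (check the released SparkPipelineOptions for the exact name and default):

```java
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class BundleSizeExample {
  public static void main(String[] args) {
    // Assumption: the option is named bundleSize and takes a byte count;
    // 67108864 (64 MB) is chosen here purely for illustration.
    SparkPipelineOptions options =
        PipelineOptionsFactory.fromArgs("--runner=SparkRunner", "--bundleSize=67108864")
            .as(SparkPipelineOptions.class);
    Pipeline p = Pipeline.create(options);
    // ... apply reads/transforms here; with dynamic allocation enabled, sources
    // should then be split by bundle size rather than by defaultParallelism.
    p.run().waitUntilFinish();
  }
}
```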