[ 
https://issues.apache.org/jira/browse/BEAM-4783?focusedWorklogId=148849&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-148849
 ]

ASF GitHub Bot logged work on BEAM-4783:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Sep/18 18:44
            Start Date: 27/Sep/18 18:44
    Worklog Time Spent: 10m 
      Work Description: kyle-winkelman commented on issue #6181: [BEAM-4783] 
Add bundleSize for splitting BoundedSources.
URL: https://github.com/apache/beam/pull/6181#issuecomment-425201311
 
 
   Looking further into the StreamingTransformTranslator, I would like to pose
a question: why do we do the groupByKey followed by the updateStateByKey? It
appears to be very wasteful, converting everything to bytes and back
unnecessarily.

   The only thing the groupByKey does is gather all the values for a key into
an Iterable, but updateStateByKey would also do that if it were given the
chance.

   If we were to update the UpdateStateByKeyFunction to expect
WindowedValue<V>s instead of Iterable<WindowedValue<V>>s, I believe we could
eliminate the call to groupByKey. What happens now is that updateStateByKey
wraps those values in a Seq, so we currently have either an empty Seq or a Seq
with exactly one item, and that item is itself an Iterable that contains
multiple items.
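
To make the proposal concrete, here is a minimal, self-contained Spark
Streaming sketch (not the runner's actual code: the class, the socket source,
and the counting state are made up for illustration, and plain Strings stand
in for WindowedValue<V>). It shows the two shapes side by side: today's
groupByKey + updateStateByKey, where each batch hands the update function at
most one Iterable per key, versus feeding the values straight to
updateStateByKey:

    import java.util.List;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.Optional;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    import scala.Tuple2;

    public class UpdateStateSketch {
      public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("update-state-sketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
        jssc.checkpoint("/tmp/update-state-sketch"); // updateStateByKey needs a checkpoint dir

        // A stand-in stream of (key, value) pairs; in the runner the value would be
        // a WindowedValue<V>, here a plain String stands in for it.
        JavaPairDStream<String, String> pairs =
            jssc.socketTextStream("localhost", 9999)
                .mapToPair(line -> new Tuple2<>(line.split(",", 2)[0], line));

        // Current shape: groupByKey first, so the update function's per-batch input
        // is List<Iterable<String>> -- either empty or one Iterable holding the values.
        JavaPairDStream<String, Long> viaGroupByKey =
            pairs.groupByKey().updateStateByKey(
                (Function2<List<Iterable<String>>, Optional<Long>, Optional<Long>>)
                    (groups, state) -> {
                      long count = state.isPresent() ? state.get() : 0L;
                      for (Iterable<String> group : groups) { // at most one group per batch
                        for (String ignored : group) {
                          count++;
                        }
                      }
                      return Optional.of(count);
                    });

        // Proposed shape: hand the values to updateStateByKey directly; Spark already
        // collects a key's values for the batch (a Seq in Scala, a List in the Java
        // API), so the groupByKey and its serialization round trip are unnecessary.
        JavaPairDStream<String, Long> direct =
            pairs.updateStateByKey(
                (Function2<List<String>, Optional<Long>, Optional<Long>>)
                    (values, state) ->
                        Optional.of((state.isPresent() ? state.get() : 0L) + values.size()));

        viaGroupByKey.print();
        direct.print();
        jssc.start();
        jssc.awaitTermination();
      }
    }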

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 148849)
    Time Spent: 3h 20m  (was: 3h 10m)

> Spark SourceRDD Not Designed With Dynamic Allocation In Mind
> ------------------------------------------------------------
>
>                 Key: BEAM-4783
>                 URL: https://issues.apache.org/jira/browse/BEAM-4783
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>    Affects Versions: 2.5.0
>            Reporter: Kyle Winkelman
>            Assignee: Jean-Baptiste Onofré
>            Priority: Major
>              Labels: newbie
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> When the spark-runner is used along with the configuration
> spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It
> then falls back to the default parallelism described in this code comment:
>       // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
>       // when running on Mesos it's 8.
>       // when running local it's the total number of cores (local = 1,
>       // local[N] = N, local[*] = estimation of the machine's cores).
>       // ** the configuration "spark.default.parallelism" takes precedence
>       // over all of the above **
> So in most cases this default is quite small. This is an issue when using a
> very large input file, as it will only get split in half.
> I believe that when Dynamic Allocation is enabled, the SourceRDD should use
> the DEFAULT_BUNDLE_SIZE, and possibly expose a SparkPipelineOptions option
> that allows you to change this DEFAULT_BUNDLE_SIZE.
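
A rough sketch of the proposed behaviour (illustrative only: the helper class,
the "configured bundle size" parameter, and the 64 MB default are assumptions,
not the actual SourceRDD or SparkPipelineOptions API):

    // Illustrative only: pick a target bundle size for splitting a BoundedSource,
    // instead of dividing the source by sparkContext.defaultParallelism(), which is
    // misleading when spark.dynamicAllocation.enabled=true.
    public class BundleSizeSketch {

      private static final long DEFAULT_BUNDLE_SIZE = 64 * 1024 * 1024L; // assumed 64 MB

      static long desiredBundleSizeBytes(
          long estimatedSourceSizeBytes,  // e.g. from BoundedSource.getEstimatedSizeBytes()
          int defaultParallelism,         // Spark's fallback described above
          boolean dynamicAllocationEnabled,
          long configuredBundleSizeBytes  // 0 means "not set" via pipeline options
      ) {
        if (configuredBundleSizeBytes > 0) {
          return configuredBundleSizeBytes; // user override wins
        }
        if (dynamicAllocationEnabled) {
          // Executor count is elastic, so defaultParallelism says little about the
          // cluster; fall back to a fixed bundle size so large inputs still split well.
          return DEFAULT_BUNDLE_SIZE;
        }
        // Static allocation: spread the input evenly over the known parallelism.
        return Math.max(1L, estimatedSourceSizeBytes / defaultParallelism);
      }
    }

With, say, a 10 GB input and a default parallelism of 2, splitting by a 64 MB
bundle size yields roughly 160 bundles instead of 2, giving dynamic allocation
enough tasks to scale executors up.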



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
