[ https://issues.apache.org/jira/browse/BEAM-5519?focusedWorklogId=230106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-230106 ]

ASF GitHub Bot logged work on BEAM-5519:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Apr/19 12:54
            Start Date: 19/Apr/19 12:54
    Worklog Time Spent: 10m 
      Work Description: kyle-winkelman commented on issue #6511: [BEAM-5519] 
Remove call to groupByKey in Spark Streaming.
URL: https://github.com/apache/beam/pull/6511#issuecomment-484887010
 
 
   Rebased.
   
   One other option I have come up with regarding the performance tests is to 
alter the Nexmark BoundedEventSource so that it splits properly. For example, if 
it reports an estimatedSizeBytes of 100 and the Spark Runner asks for a 
desiredBundleSizeBytes of 10, then we split the source into 10 equal pieces 
(instead of always splitting on numEventGenerators). This would unfortunately 
affect all batch performance numbers, so I don't really want to do it, but it 
seems like it might be the best way.
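 
   To illustrate the arithmetic I mean, here is a minimal standalone sketch. 
The names estimatedSizeBytes and desiredBundleSizeBytes mirror the Beam 
BoundedSource split API, but this class is a hypothetical illustration, not 
the actual BoundedEventSource change:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeBasedSplitter {
    // Number of equal pieces: ceil(estimated / desired), never fewer than one.
    static long numSplits(long estimatedSizeBytes, long desiredBundleSizeBytes) {
        if (desiredBundleSizeBytes <= 0) {
            return 1;
        }
        return Math.max(1,
            (estimatedSizeBytes + desiredBundleSizeBytes - 1) / desiredBundleSizeBytes);
    }

    // Splits the half-open event range [0, totalEvents) into n contiguous,
    // near-equal sub-ranges, i.e. splitting by requested size rather than
    // by a fixed numEventGenerators.
    static List<long[]> splitRange(long totalEvents, long n) {
        List<long[]> ranges = new ArrayList<>();
        for (long i = 0; i < n; i++) {
            long start = totalEvents * i / n;
            long end = totalEvents * (i + 1) / n;
            if (start < end) {
                ranges.add(new long[] {start, end});
            }
        }
        return ranges;
    }

    public static void main(String[] args) {
        // 100 bytes estimated, 10-byte bundles -> 10 equal pieces.
        System.out.println(numSplits(100, 10)); // prints 10
        System.out.println(splitRange(100, numSplits(100, 10)).size()); // prints 10
    }
}
```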
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 230106)
    Time Spent: 5h  (was: 4h 50m)

> Spark Streaming Duplicated Encoding/Decoding Effort
> ---------------------------------------------------
>
>                 Key: BEAM-5519
>                 URL: https://issues.apache.org/jira/browse/BEAM-5519
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Kyle Winkelman
>            Assignee: Kyle Winkelman
>            Priority: Major
>              Labels: spark, spark-streaming, triaged
>             Fix For: 2.13.0
>
>          Time Spent: 5h
>  Remaining Estimate: 0h
>
> When using the SparkRunner in streaming mode, there is a call to groupByKey 
> followed by a call to updateStateByKey. BEAM-1815 fixed an issue where this 
> used to cause 2 shuffles, but it still causes 2 encode/decode cycles.
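To make the cost concrete, here is a hypothetical, non-Spark sketch: treating each of the two keyed operations as a stage boundary that round-trips every element through a coder, chaining them costs two encode/decode cycles per element. The class and counters below are illustrative only, not Beam or Spark code:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;

public class CoderCycleCount {
    static final AtomicInteger encodeCalls = new AtomicInteger();
    static final AtomicInteger decodeCalls = new AtomicInteger();

    // Stand-in for a coder: serialize an element to bytes, counting calls.
    static byte[] encode(String value) {
        encodeCalls.incrementAndGet();
        return value.getBytes(StandardCharsets.UTF_8);
    }

    static String decode(byte[] bytes) {
        decodeCalls.incrementAndGet();
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // One stage boundary = one full encode/decode round trip per element.
    static String stageBoundary(String value) {
        return decode(encode(value));
    }

    public static void main(String[] args) {
        // groupByKey boundary, then updateStateByKey boundary:
        String out = stageBoundary(stageBoundary("key:value"));
        System.out.println(encodeCalls.get() + " encode/decode cycles"); // prints "2 encode/decode cycles"
    }
}
```

Removing the intermediate groupByKey would collapse the two boundaries into one, halving the coder work per element.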



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
