[ 
https://issues.apache.org/jira/browse/BEAM-5519?focusedWorklogId=150586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-150586
 ]

ASF GitHub Bot logged work on BEAM-5519:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Oct/18 01:45
            Start Date: 03/Oct/18 01:45
    Worklog Time Spent: 10m 
      Work Description: amitsela commented on issue #6511: [BEAM-5519] Remove 
call to groupByKey in Spark Streaming.
URL: https://github.com/apache/beam/pull/6511#issuecomment-426485740
 
 
   Sure. Run on a cluster and make sure there's no shuffle on RDDs that contain 
deserialized data; otherwise the runner should use coders before/after a 
shuffle.
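
   For context, a minimal sketch of what "use coders before/after a shuffle" 
means: values are encoded to byte[] with a Beam Coder before the shuffle and 
decoded with the same Coder afterwards, so the shuffle itself never touches 
deserialized objects. The class and method names below are invented for 
illustration; this is not the Spark runner's actual code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.StringUtf8Coder;

/** Illustration only: round-trip a value through a Beam Coder, as a runner
 *  would on either side of a shuffle so that only byte[] gets shuffled. */
public class CoderRoundTrip {

  // Encode with the element's Coder before the shuffle.
  static <T> byte[] encode(T value, Coder<T> coder) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    coder.encode(value, out);
    return out.toByteArray();
  }

  // Decode with the same Coder after the shuffle.
  static <T> T decode(byte[] bytes, Coder<T> coder) throws IOException {
    return coder.decode(new ByteArrayInputStream(bytes));
  }

  public static void main(String[] args) throws IOException {
    Coder<String> coder = StringUtf8Coder.of();
    byte[] onTheWire = encode("hello", coder);    // what the shuffle moves
    System.out.println(decode(onTheWire, coder)); // prints "hello"
  }
}
```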

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 150586)
    Time Spent: 1h 10m  (was: 1h)

> Spark Streaming Duplicated Encoding/Decoding Effort
> ---------------------------------------------------
>
>                 Key: BEAM-5519
>                 URL: https://issues.apache.org/jira/browse/BEAM-5519
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Kyle Winkelman
>            Assignee: Kyle Winkelman
>            Priority: Major
>              Labels: spark, spark-streaming
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When using the SparkRunner in streaming mode, there is a call to groupByKey 
> followed by a call to updateStateByKey. BEAM-1815 fixed an issue where this 
> used to cause two shuffles, but it still causes two encode/decode cycles.
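
To make the pattern concrete: in plain Spark Streaming terms, updateStateByKey 
already hands the update function the full list of values for a key in the 
current batch, so a separate groupByKey in front of it is redundant and only 
adds another pass through the data. The sketch below is a standalone, hedged 
illustration of that Spark API; it is not the Beam runner's translation code, 
and all names in it are made up for the example.

```java
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.Optional;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import scala.Tuple2;

/** Standalone illustration (not Beam runner code): keep per-key running counts
 *  with updateStateByKey alone; no explicit groupByKey is needed, because the
 *  update function receives the full List of values for the key per batch. */
public class StateWithoutGroupByKey {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("state-demo");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
    jssc.checkpoint("/tmp/state-demo-checkpoint"); // updateStateByKey needs a checkpoint dir

    // Sum the new values for the key into the existing state.
    Function2<List<Integer>, Optional<Integer>, Optional<Integer>> updateFn =
        (newValues, state) -> {
          int sum = state.isPresent() ? state.get() : 0;
          for (Integer v : newValues) {
            sum += v;
          }
          return Optional.of(sum);
        };

    // e.g. feed words with `nc -lk 9999`
    JavaPairDStream<String, Integer> pairs =
        jssc.socketTextStream("localhost", 9999)
            .mapToPair(word -> new Tuple2<>(word, 1));

    // No groupByKey in front of the stateful operation.
    JavaPairDStream<String, Integer> counts = pairs.updateStateByKey(updateFn);

    counts.print();
    jssc.start();
    jssc.awaitTermination();
  }
}
```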



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
