[ 
https://issues.apache.org/jira/browse/BEAM-4565?focusedWorklogId=112966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-112966
 ]

ASF GitHub Bot logged work on BEAM-4565:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Jun/18 00:51
            Start Date: 19/Jun/18 00:51
    Worklog Time Spent: 10m 
      Work Description: katsiapis commented on issue #5649: [BEAM-4565] Fix hot 
key fanout in the face of combiner lifting.
URL: https://github.com/apache/beam/pull/5649#issuecomment-398239850
 
 
   Pre-existing issue:
   
         combine_per_key = combine_per_key.with_hot_key_fanout(fanout)
   
   should be 
   
         combine_per_key = combine_per_key.with_hot_key_fanout(self.fanout)
   
   (I.e., a "self." prefix was missing.)
   
   Do we understand why tests are passing nonetheless?
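   One plausible explanation for the passing tests, shown as a hedged sketch
   with hypothetical names (this is not the actual Beam code): if a variable
   named `fanout` is in scope, Python silently resolves the un-prefixed
   reference to it instead of raising a NameError, so the missing `self.`
   only matters when the attribute and the local diverge.

```python
def make_transform(fanout):
    """Hypothetical reduction of the bug; not the actual Beam class."""

    class Transform:
        def __init__(self):
            self.fanout = fanout

        def expand(self):
            # Buggy line: `fanout` should be `self.fanout`, but Python
            # resolves it to the closed-over argument of make_transform,
            # so no NameError is raised and the result is identical
            # whenever the two names hold the same value -- which is
            # exactly the situation the tests exercise.
            return fanout  # intended: self.fanout

    return Transform()

t = make_transform(4)
t.fanout = 8       # a later change to the attribute...
print(t.expand())  # ...is ignored: prints 4, exposing the bug
```

   Tests that set the fanout once at construction time and never mutate it
   afterwards cannot distinguish the two spellings.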

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 112966)
    Time Spent: 0.5h  (was: 20m)

> Hot key fanout should not distribute keys to all shards.
> --------------------------------------------------------
>
>                 Key: BEAM-4565
>                 URL: https://issues.apache.org/jira/browse/BEAM-4565
>             Project: Beam
>          Issue Type: Task
>          Components: sdk-java-core, sdk-py-core
>    Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0
>            Reporter: Robert Bradshaw
>            Assignee: Kenneth Knowles
>            Priority: Major
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The goal is to reduce the number of values sent to a single post-GBK worker. 
> If combiner lifting happens, each bundle sends a single value per 
> sub-key, causing an N-fold blowup in shuffle data and N reducers, each with 
> the same amount of data to consume as the single reducer in the non-fanout 
> case. 
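
The two-stage mechanism the description refers to can be illustrated with a
plain-Python sketch (hypothetical helper name, assuming a sum combiner; the
real implementation lives in the Beam SDKs): values for a hot key are first
routed to one of `fanout` sub-keys and pre-combined there, then the partial
results are re-keyed by the original key and combined into the final answer.

```python
import random
from collections import defaultdict

def fanout_combine(pairs, fanout, combine=sum):
    """Sketch of hot key fanout for a sum combiner (hypothetical helper)."""
    # Stage 1: route each (key, value) to one of `fanout` sub-keys and
    # pre-combine per sub-key, so no single reducer receives every value
    # for a hot key.
    shards = defaultdict(list)
    for key, value in pairs:
        shards[(key, random.randrange(fanout))].append(value)
    partials = {subkey: combine(vals) for subkey, vals in shards.items()}

    # Stage 2: strip the shard id and combine the per-shard partial
    # results into one value per original key.
    merged = defaultdict(list)
    for (key, _shard), partial in partials.items():
        merged[key].append(partial)
    return {key: combine(parts) for key, parts in merged.items()}

pairs = [("hot", 1)] * 1000 + [("cold", 2)] * 3
print(fanout_combine(pairs, fanout=4))  # {'hot': 1000, 'cold': 6}
```

The N-fold blowup described above arises when Stage 1's pre-combining emits
one partial value per sub-key per bundle, multiplying shuffle data by the
fanout factor without shrinking any reducer's load.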



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
