Github user feynmanliang commented on the pull request:

    https://github.com/apache/spark/pull/7412#issuecomment-122144627
  
    Because the data may not be balanced across executors after shuffling. In
    the worst case, imagine that all the data shared a common key; groupByKey
    would then try to collect the entire dataset onto one machine.
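
    To make the skew concrete, here is a minimal Scala sketch (hypothetical
    data, assuming sc is an existing SparkContext; this is not code from the
    PR) showing how a hot key forces groupByKey to materialize all of its
    values on one executor, whereas a reduce-side aggregation does not:

        // Hypothetical skewed dataset: every record shares the same key.
        val skewed = sc.parallelize(1 to 1000000).map(i => ("hotKey", i))

        // groupByKey shuffles all values for "hotKey" to the single partition
        // that owns the key, so one executor must hold the whole Iterable.
        val grouped = skewed.groupByKey()

        // reduceByKey combines values map-side before the shuffle, so no
        // executor ever has to materialize the full group.
        val counts = skewed.reduceByKey(_ + _)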
    
    The diagrams you've cited show the number of frequent sequences (y-axis) of
    length k (x-axis) for various values of minSupport (legend), which is
    different from the number of candidate suffixes associated with any given
    prefix.
    
    On Thu, Jul 16, 2015 at 6:24 PM zhang jiajin <[email protected]>
    wrote:
    
    > I'm confused: groupBy just reorganizes data, it does not generate new
    > data, so why does the executor overload after shuffling?
    >
    > The following diagrams are from paper "Mining Sequential Patterns by
    > Pattern-Growth: The PrefixSpan Approach":
    >
    > [image: image]
    > <https://cloud.githubusercontent.com/assets/13159256/8738656/262676ec-2c65-11e5-9c7e-5e79e5a03b38.png>
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/7412#issuecomment-122143351>.
    >



