GitHub user aray opened a pull request:

    https://github.com/apache/spark/pull/9429

    [SPARK-11275][SQL] Reimplement Expand as a Generator and fix existing 
implementation bugs

    This is an alternative to https://github.com/apache/spark/pull/9419
    
    I got tired of fighting and fixing bugs in the existing implementation of
cube/rollup/grouping sets, specifically around the Expand operator, so I
reimplemented it as a Generator. I think this makes for a cleaner
implementation. I also added unit tests showing that this implementation
solves SPARK-11275.
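
    To illustrate the idea (this is a plain-Scala sketch of the concept, not the
Catalyst code in this PR; the object and method names and the column layout are
made up for illustration): each input row is expanded into one output row per
grouping set, with grouping columns outside that set nulled out and a
grouping-set id appended so the downstream aggregate can tell the sets apart.

        // Conceptual sketch only: plain Scala, not the Catalyst Generator API.
        object ExpandSketch {
          type Row = Seq[Any]

          // groupingColumns: indices of the grouping columns within a row.
          // groupingSets: each set names the grouping columns kept non-null.
          // Non-grouping columns (measures) pass through unchanged; the
          // grouping-set id is appended as the last field.
          def expand(row: Row, groupingColumns: Set[Int],
                     groupingSets: Seq[Set[Int]]): Seq[Row] =
            groupingSets.zipWithIndex.map { case (set, id) =>
              row.zipWithIndex.map { case (value, i) =>
                if (!groupingColumns.contains(i) || set.contains(i)) value else null
              } :+ id
            }

          def main(args: Array[String]): Unit = {
            // cube(a, b) corresponds to the grouping sets {a, b}, {a}, {b}, {}.
            val cubeSets = Seq(Set(0, 1), Set(0), Set(1), Set.empty[Int])
            expand(Seq("a1", "b1", 42), Set(0, 1), cubeSets).foreach(println)
            // List(a1, b1, 42, 0)
            // List(a1, null, 42, 1)
            // List(null, b1, 42, 2)
            // List(null, null, 42, 3)
          }
        }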
    
    I look forward to your comments!
    
    cc: @rxin @marmbrus @gatorsmile @rick-ibm @hvanhovell @chenghao-intel 
@holdenk 

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/aray/spark SPARK-11275

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/9429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #9429
    
----
commit fad28d6126187f88b473fc35692248ad2cb00748
Author: Andrew Ray <[email protected]>
Date:   2015-11-02T18:10:02Z

    Reimplement Expand as a Generator
    - added unit tests for cube and rollup that actually check the result
    - fixed bugs present in the previous implementation of cube/rollup/grouping sets (SPARK-11275)

commit e4636791da3367ee6fcb371f2fce029f3b2e8a3e
Author: Andrew Ray <[email protected]>
Date:   2015-11-03T06:06:41Z

    newline at end of generators.scala

----
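
To make the commit log concrete: the cube/rollup unit tests mentioned in the
first commit check query results of roughly this shape. The column names and
data below are illustrative only, not taken from the patch, and the example is
written against the 1.x SQLContext/DataFrame API:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.functions.sum

    object CubeExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("cube-example").setMaster("local[2]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        val df = Seq(("eng", "f", 100), ("eng", "m", 80), ("hr", "f", 60))
          .toDF("dept", "gender", "salary")

        // cube(dept, gender) aggregates over the grouping sets
        // (dept, gender), (dept), (gender) and the grand total ().
        // A result-checking test would assert, for example, that the
        // grand-total row (dept = null, gender = null) has sum(salary) = 240.
        df.cube("dept", "gender").agg(sum("salary")).show()

        sc.stop()
      }
    }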

