GitHub user davies opened a pull request:

    https://github.com/apache/spark/pull/4024

    [SPARK-5224] improve performance of parallelize list/ndarray

    After the default batchSize was changed to 0 (batching based on the size
of each object), parallelize() still used BatchedSerializer with batchSize=1;
this PR switches parallelize() to batchSize=1024 by default.
    
    Also, BatchedSerializer did not work well with list and numpy.ndarray;
this PR improves BatchedSerializer by batching via __len__ and __getslice__
(slicing instead of iterating element by element).
    
                      | before | after
        --------------|--------|-------
        numpy.ndarray | 32s    | 0.7s
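
    The following is a minimal sketch, not the actual PySpark code, of the
batching idea the description refers to: when the input supports len() and
slicing (list, numpy.ndarray), yield whole slices per batch instead of
draining an iterator one element at a time. The helper name `batched` and
the fallback path are illustrative assumptions.

        import numpy as np

        def batched(data, batch_size=1024):
            """Yield chunks of `data` of at most `batch_size` items."""
            if hasattr(data, "__len__") and hasattr(data, "__getitem__"):
                # Sliceable input (list, ndarray): one slice per batch.
                # On Python 2 this slicing goes through __getslice__.
                n = len(data)
                for start in range(0, n, batch_size):
                    yield data[start:start + batch_size]
            else:
                # Arbitrary iterator: accumulate fixed-size chunks.
                batch = []
                for item in data:
                    batch.append(item)
                    if len(batch) == batch_size:
                        yield batch
                        batch = []
                if batch:
                    yield batch

        # Slicing an ndarray keeps each batch as an ndarray (a view),
        # which is far cheaper than building Python lists element by element.
        chunks = list(batched(np.arange(100000)))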

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/davies/spark opt_numpy

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/4024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4024
    
----
commit 7618c7c930b0a4bad5469523ba38d52b6eab4589
Author: Davies Liu <[email protected]>
Date:   2015-01-13T18:48:00Z

    improve performance of parallelize list/ndarray

----


