GitHub user eatoncys opened a pull request:

    https://github.com/apache/spark/pull/23262

    [SPARK-26312][SQL] Convert the converters in RDDConversions into arrays 
to improve their access performance

    
    ## What changes were proposed in this pull request?
    
    `RDDConversions` gets disproportionately slower as the number of 
columns in the query increases.
    This PR converts the `converters` in `RDDConversions` into arrays to 
improve their access performance: the runtime type of `converters` before 
this change is `scala.collection.immutable.::`, the cons-cell subtype of 
`List`, whose indexed access is O(n), whereas `Array` access is O(1).
    
    With 2000 columns and 20k rows, the `PrunedScanSuite` test takes 409 
seconds before this PR and 361 seconds after.
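    The idea can be sketched as follows (a minimal illustration, not the 
actual Spark code; the names `Converter` and `convertRow` are hypothetical). 
Indexing into a `List` is O(n) per access, so a per-row loop over many 
columns pays a quadratic cost; materializing the converters as an `Array` 
once makes each lookup O(1):

    ```scala
    object ConverterSketch {
      // Hypothetical stand-in for Spark's per-column converter functions.
      type Converter = Any => Any

      // Converters stored as an Array: converters(i) is O(1), and the
      // while-loop avoids per-element iterator overhead on the hot path.
      def convertRow(row: Seq[Any], converters: Array[Converter]): Array[Any] = {
        val out = new Array[Any](converters.length)
        var i = 0
        while (i < converters.length) {
          out(i) = converters(i)(row(i))
          i += 1
        }
        out
      }
    }
    ```

    The converters are built once per schema (e.g. 
`fields.map(makeConverter).toArray`), then reused for every row, so the 
one-time `toArray` cost is amortized across all rows.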
    
    ## How was this patch tested?
    
    Existing test cases in `PrunedScanSuite`.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/eatoncys/spark toarray

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23262.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #23262
    
----
commit ddb252892a439281b16bc14fdfdb7faf756f1067
Author: 10129659 <chen.yanshan@...>
Date:   2018-12-08T07:15:10Z

    Converting converters in RDDConversions into arrays to improve their access 
performance

----


---
