GitHub user a-roberts opened a pull request:

    https://github.com/apache/spark/pull/15079

    [SPARK-17524] [TESTS] Use specified spark.buffer.pageSize

    ## What changes were proposed in this pull request?
    
    This PR has the appendRowUntilExceedingPageSize test in 
RowBasedKeyValueBatchSuite use whatever spark.buffer.pageSize value a user has 
specified, preventing a test failure for anyone testing Apache Spark on a box 
with a reduced page size. The test is currently hardcoded to use the default 
page size of 64 MB, so this minor PR is a test improvement.
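    
    The idea can be sketched roughly as follows. This is a minimal, 
self-contained illustration rather than the actual Spark suite code: the 
getPageSize helper and the plain-map "conf" are hypothetical stand-ins for 
SparkConf's size handling, showing a configured spark.buffer.pageSize taking 
precedence over the hardcoded 64 MB default.
    
    ```java
    import java.util.HashMap;
    import java.util.Map;
    
    public class PageSizeSketch {
        // Default page size Spark uses when nothing is configured: 64 MB.
        static final long DEFAULT_PAGE_SIZE = 64L * 1024 * 1024;
    
        // Hypothetical stand-in for reading spark.buffer.pageSize from SparkConf.
        static long getPageSize(Map<String, String> conf) {
            String v = conf.get("spark.buffer.pageSize");
            return v == null ? DEFAULT_PAGE_SIZE : Long.parseLong(v);
        }
    
        public static void main(String[] args) {
            Map<String, String> conf = new HashMap<>();
            // No user setting: the test falls back to the 64 MB default.
            System.out.println(getPageSize(conf)); // 67108864
            // User-specified 1 MB page size: the test should honor it
            // instead of assuming 64 MB.
            conf.put("spark.buffer.pageSize", "1048576");
            System.out.println(getPageSize(conf)); // 1048576
        }
    }
    ```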
    
    ## How was this patch tested?
    Existing unit tests, run with a 1 MB page size and with the default 64 MB 
page size.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/a-roberts/spark patch-5

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/15079.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #15079
    
----
commit c793efe6e567ba3fc51ceaf9ff225ca1e5f35f10
Author: Adam Roberts <[email protected]>
Date:   2016-09-13T12:31:06Z

    Use specified spark.buffer.pageSize
    
    Users can change the spark.buffer.pageSize value to run Spark unit tests on 
machines with less powerful hardware, e.g. with two cores.
    
    To address this, we can use whatever value they've specified, avoiding a 
failure where the number of rows in the appendRowUntilExceedingPageSize test is 
not as expected.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
