GitHub user ueshin opened a pull request:

    https://github.com/apache/spark/pull/15840

    [SPARK-18398][SQL] Fix nullabilities of MapObjects and optimize not to check null if lambda is not nullable.

    ## What changes were proposed in this pull request?
    
    The nullability of `MapObjects` can be made stricter by relying on `inputObject.nullable` and `lambdaFunction.nullable`.
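
    Conceptually, the stricter nullability just composes the two flags. A minimal Scala sketch of the idea (toy types, not the actual Catalyst classes):
    
    ```scala
    // Toy model: only illustrates how the two nullability flags compose.
    trait Expression { def nullable: Boolean }
    
    case class MapObjectsLike(
        inputObject: Expression,
        lambdaFunction: Expression) extends Expression {
      // The produced array is null only when the input collection itself is null.
      override def nullable: Boolean = inputObject.nullable
      // Whether individual elements can be null depends only on the lambda.
      def containsNull: Boolean = lambdaFunction.nullable
    }
    ```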
    
    We can also optimize its execution slightly by skipping the extra null check when the lambda is not nullable.
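
    On the codegen side, the idea is to emit the per-element null check only when the lambda can actually return null. A hedged sketch (helper and parameter names are assumed, not the actual Catalyst API):
    
    ```scala
    // When the lambda cannot return null, the null check would constant-fold
    // to `if (false)`, so emit the plain assignment instead.
    def emitLoopBody(lambdaNullable: Boolean, isNull: String,
                     target: String, value: String): String =
      if (lambdaNullable) {
        s"""if ($isNull) {
           |  $target = null;
           |} else {
           |  $target = $value;
           |}""".stripMargin
      } else {
        s"$target = $value;" // skip the dead null check entirely
      }
    ```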
    
    An example of the generated code before the change:
    
    ```java
    boolean isNull4 = i.isNullAt(0);
    ArrayData value4 = isNull4 ? null : (i.getArray(0));
    ArrayData value3 = null;
    
    if (!isNull4) {
        Integer[] convertedArray = null;
        int dataLength = value4.numElements();
        convertedArray = new Integer[dataLength];
    
        int loopIndex = 0;
        while (loopIndex < dataLength) {
            MapObjects_loopValue108 = (int) (value4.getInt(loopIndex));
            MapObjects_loopIsNull109 = value4.isNullAt(loopIndex);
    
            if (MapObjects_loopIsNull109) {
                throw new RuntimeException(((java.lang.String) references[0]));
            }
    
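            // Dead branch: the lambda is not nullable, so its isNull was
            // constant-folded to `false`, yet the branch is still emitted.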
            if (false) {
                convertedArray[loopIndex] = null;
            } else {
                convertedArray[loopIndex] = MapObjects_loopValue108;
            }
    
            loopIndex += 1;
        }
    
        value3 = new org.apache.spark.sql.catalyst.util.GenericArrayData(convertedArray);
    }
    ```
    
    and after the change:
    
    ```java
    boolean isNull4 = i.isNullAt(0);
    ArrayData value4 = isNull4 ? null : (i.getArray(0));
    ArrayData value3 = null;
    
    if (!isNull4) {
        Integer[] convertedArray = null;
        int dataLength = value4.numElements();
        convertedArray = new Integer[dataLength];
    
        int loopIndex = 0;
        while (loopIndex < dataLength) {
            MapObjects_loopValue108 = (int) (value4.getInt(loopIndex));
            MapObjects_loopIsNull109 = value4.isNullAt(loopIndex);
    
            if (MapObjects_loopIsNull109) {
                throw new RuntimeException(((java.lang.String) references[0]));
            }
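            // With the stricter nullability, the dead `if (false)` branch is
            // no longer emitted; the element is assigned unconditionally.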
            convertedArray[loopIndex] = MapObjects_loopValue108;
    
            loopIndex += 1;
        }
    
        value3 = new org.apache.spark.sql.catalyst.util.GenericArrayData(convertedArray);
    }
    ```
    
    ## How was this patch tested?
    
    Existing tests.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ueshin/apache-spark issues/SPARK-18398

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/15840.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #15840
    
----
commit b287ded240e27b0db99ee9754f09bf9be121c28f
Author: Takuya UESHIN <[email protected]>
Date:   2016-11-10T09:11:35Z

    Fix nullabilities of MapObjects and optimize not to check null if lambda is not nullable.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
