Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16391#discussion_r93788919
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala ---
    @@ -170,36 +176,39 @@ object DatasetBenchmark {
         val benchmark3 = aggregate(spark, numRows)
     
         /*
    -    OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 3.10.0-327.18.2.el7.x86_64
    -    Intel Xeon E3-12xx v2 (Ivy Bridge)
    +    Java HotSpot(TM) 64-Bit Server VM 1.8.0_60-b27 on Mac OS X 10.12.1
    +    Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz
    +
         back-to-back map:                        Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
         ------------------------------------------------------------------------------------------------
    -    RDD                                           3448 / 3646         29.0          34.5       1.0X
    -    DataFrame                                     2647 / 3116         37.8          26.5       1.3X
    -    Dataset                                       4781 / 5155         20.9          47.8       0.7X
    +    RDD                                           3963 / 3976         25.2          39.6       1.0X
    +    DataFrame                                      826 /  834        121.1           8.3       4.8X
    +    Dataset                                       5178 / 5198         19.3          51.8       0.8X
    --- End diff --
    
    For "back-to-back map", the logic is so simple that the code generated by `Dataset` is less efficient than `RDD`'s. `RDD` just adds 1 to the input `Long`; the only overhead is boxing, while `Dataset` generates code like this:
    ```java
    boolean mapelements_isNull = true;
    long mapelements_value = -1L;
    if (!false) {
      mapelements_argValue = range_value;
      mapelements_isNull = false;
      if (!mapelements_isNull) {
        Object mapelements_funcResult = null;
        mapelements_funcResult = mapelements_obj.apply(mapelements_argValue);
        if (mapelements_funcResult == null) {
          mapelements_isNull = true;
        } else {
          mapelements_value = (Long) mapelements_funcResult;
        }
      }
    }
    ```
    `Dataset` still has the boxing overhead, but its code is more verbose. `Dataset` also has to write the `long` to an unsafe row at the end, which is another overhead. These are the reasons why `Dataset` is slower than `RDD` in this simple case.
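    The shape of the two call paths can be sketched outside Spark. Below is a minimal plain-Java sketch (the names `BoxingSketch`, `rddStyle`, and `datasetStyle` are made up for illustration, not Spark APIs) that mirrors the generated code's Object round trip versus the direct call:
    ```java
    import java.util.function.Function;

    public class BoxingSketch {
        // RDD-style path: the user function runs directly on the value;
        // the only cost is (un)boxing at the closure boundary.
        static long rddStyle(long input) {
            return input + 1;
        }

        // Dataset-style path, mirroring the generated code above: the
        // argument is boxed, the result comes back as a plain Object,
        // is null-checked, then cast back to Long and unboxed.
        static long datasetStyle(Function<Object, Object> func, long input) {
            boolean isNull = true;   // mirrors mapelements_isNull
            long value = -1L;        // mirrors mapelements_value
            Object funcResult = func.apply(input);  // boxes the argument
            if (funcResult == null) {
                isNull = true;
            } else {
                value = (Long) funcResult;          // cast + unbox
                isNull = false;
            }
            return value;
        }

        public static void main(String[] args) {
            System.out.println(rddStyle(41L));                             // 42
            System.out.println(datasetStyle(x -> (Long) x + 1, 41L));      // 42
        }
    }
    ```
    Both paths box, but the Dataset-style path adds the extra null check and the cast through `Object`, on top of the unsafe-row write that the sketch does not model.
    
    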
