Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16391#discussion_r93826236
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala ---
    @@ -170,36 +176,39 @@ object DatasetBenchmark {
         val benchmark3 = aggregate(spark, numRows)
     
         /*
    -    OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 3.10.0-327.18.2.el7.x86_64
    -    Intel Xeon E3-12xx v2 (Ivy Bridge)
    +    Java HotSpot(TM) 64-Bit Server VM 1.8.0_60-b27 on Mac OS X 10.12.1
    +    Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz
    +
         back-to-back map:                        Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
         ------------------------------------------------------------------------------------------------
    -    RDD                                           3448 / 3646         29.0          34.5       1.0X
    -    DataFrame                                     2647 / 3116         37.8          26.5       1.3X
    -    Dataset                                       4781 / 5155         20.9          47.8       0.7X
    +    RDD                                           3963 / 3976         25.2          39.6       1.0X
    +    DataFrame                                      826 /  834        121.1           8.3       4.8X
    +    Dataset                                       5178 / 5198         19.3          51.8       0.8X
    --- End diff --
    
    I noticed that the Scala compiler automatically generates a primitive version. Spark currently ends up calling the primitive version through the generic version `Object apply(Object)`.
    
    Here is a simple example. When we compile the sample below, we can see the class that scalac generates. Scalac automatically emits a primitive version `int apply$mcII$sp(int)`, which is called from `int apply(int)`.
    We could infer this signature in Catalyst for simple cases.
    
    Of course, I totally agree that the best solution is to analyze the byte code and turn it into an expression. [This](https://issues.apache.org/jira/browse/SPARK-14083) was already prototyped. Do you think it is a good time to make this prototype more robust now?
    
    
    ```scala
    test("ds") {
      val ds = sparkContext.parallelize((1 to 10), 1).toDS
      ds.map(i => i * 7).show
    }
    ```
    
    ```
    $ javap -c Test\$\$anonfun\$5\$\$anonfun\$apply\$mcV\$sp\$1.class
    Compiled from "Test.scala"
    public final class org.apache.spark.sql.Test$$anonfun$5$$anonfun$apply$mcV$sp$1 extends scala.runtime.AbstractFunction1$mcII$sp implements scala.Serializable {
      public static final long serialVersionUID;
    
      public final int apply(int);
        Code:
           0: aload_0
           1: iload_1
           2: invokevirtual #18                 // Method apply$mcII$sp:(I)I
           5: ireturn
    
      public int apply$mcII$sp(int);
        Code:
           0: iload_1
           1: bipush        7
           3: imul
           4: ireturn
    
      public final java.lang.Object apply(java.lang.Object);
        Code:
           0: aload_0
           1: aload_1
           2: invokestatic  #29                 // Method scala/runtime/BoxesRunTime.unboxToInt:(Ljava/lang/Object;)I
           5: invokevirtual #31                 // Method apply:(I)I
           8: invokestatic  #35                 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
          11: areturn
    
      public org.apache.spark.sql.Test$$anonfun$5$$anonfun$apply$mcV$sp$1(org.apache.spark.sql.Test$$anonfun$5);
        Code:
           0: aload_0
           1: invokespecial #42                 // Method scala/runtime/AbstractFunction1$mcII$sp."<init>":()V
           4: return
    }
    ```
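    To illustrate what "infer this signature for simple cases" could look like, here is a minimal Java sketch. The `TimesSeven` class and the `findIntSpecialization` helper are hypothetical stand-ins, not Spark code: `TimesSeven` hand-writes the shape that scalac emits for an `Int => Int` closure (scalac names the specialized method `apply$mcII$sp`), and the helper uses plain reflection to find that method so the caller can skip the boxing `Object apply(Object)` path.
    
    ```java
    import java.lang.reflect.Method;
    
    // Hand-written stand-in for the class scalac generates for `i => i * 7`
    // (the real class is an anonymous Function1 subclass; this is illustrative).
    class TimesSeven {
        public int apply$mcII$sp(int i) { return i * 7; }   // primitive version
        public Object apply(Object o) {                     // generic version: unbox, call, re-box
            return Integer.valueOf(apply$mcII$sp(((Integer) o).intValue()));
        }
    }
    
    public class SpecializedLookup {
        // Look up the Int => Int specialized method by its well-known name,
        // as a code generator could do for simple cases.
        static Method findIntSpecialization(Class<?> cls) {
            try {
                return cls.getMethod("apply$mcII$sp", int.class);
            } catch (NoSuchMethodException e) {
                return null;  // fall back to the boxing generic version
            }
        }
    
        public static void main(String[] args) throws Exception {
            Method m = findIntSpecialization(TimesSeven.class);
            if (m != null) {
                // Invoke the primitive version directly, avoiding BoxesRunTime
                System.out.println(m.invoke(new TimesSeven(), 6));
            }
        }
    }
    ```
    
    A real implementation would of course have to confirm the closure actually extends the specialized `AbstractFunction1$mcII$sp`, rather than assuming the method exists.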


