LuciferYang commented on PR #39850:
URL: https://github.com/apache/spark/pull/39850#issuecomment-1413544628

   Then I tried this demo in the Spark shell:
   
   1. build the Spark Connect client
   2. add `spark-connect-client-jvm_2.12-3.5.0-SNAPSHOT.jar` and `perfmark-api-0.25.0.jar` to `jars`
   3. remove `spark-sql_2.12-3.5.0-SNAPSHOT.jar` from `jars` due to a package and class name conflict between `spark-connect-client-jvm_2.12-3.5.0-SNAPSHOT.jar` and `spark-sql_2.12-3.5.0-SNAPSHOT.jar`
   4. start the client shell with `bin/spark-shell`; `spark-connect-client-jvm_2.12-3.5.0-SNAPSHOT.jar` is usable, although the following errors appear:
   
   ```
   <console>:18: error: value sparkContext is not a member of org.apache.spark.sql.SparkSession
                val _sc = spark.sparkContext
                                ^
   <console>:14: error: object implicits is not a member of package spark
          import spark.implicits._
                       ^
   <console>:14: error: object sql is not a member of package spark
          import spark.sql
                 ^
   ```
   5. run the following code in the shell:
   ```
   
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.connect.client.SparkConnectClient
   import org.apache.spark.sql.Column
   import org.apache.spark.sql.functions.udf
   val ss = SparkSession.builder().client(SparkConnectClient.builder().port(15102).build()).build()
   def dummyUdf(x: Int): Int = x + 5
   val myUdf = udf(dummyUdf _)
   val df = ss.range(5).select(myUdf(Column("id")))
   val result = df.collectResult()
   println(result.length)
   result.toArray.zipWithIndex.foreach { case (v, idx) =>
     println(v.getInt(0))
   }
   ```
   
   Then the following error appears on the server side:
   
   ```
   23/02/02 18:44:00 ERROR SparkConnectService: Error during: execute
   java.lang.ClassNotFoundException: $line14.$read$$iw$$iw$$iw$$iw$$iw$$iw
        at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:72)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$$anon$1.resolveClass(Utils.scala:142)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1988)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:2119)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1657)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
        at org.apache.spark.util.Utils$.deserialize(Utils.scala:146)
        at org.apache.spark.sql.connect.planner.SparkConnectPlanner.transformScalarScalaUDF(SparkConnectPlanner.scala:855)
        at org.apache.spark.sql.connect.planner.SparkConnectPlanner.transformCommonInlineUserDefinedFunction(SparkConnectPlanner.scala:837)
        at org.apache.spark.sql.connect.planner.SparkConnectPlanner.transformExpression(SparkConnectPlanner.scala:748)
        at org.apache.spark.sql.connect.planner.SparkConnectPlanner.$anonfun$transformProject$1(SparkConnectPlanner.scala:698)
   ```
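
   The `ClassNotFoundException` for `$line14.$read$$iw$...` suggests the UDF, having been defined in the REPL, was serialized with a reference to a REPL-generated wrapper class that only exists in the client JVM. A minimal sketch of the round-trip the stack trace shows (plain `ObjectOutputStream`/`ObjectInputStream`, standing in for `Utils.serialize`/`Utils.deserialize`; the object and method names below are illustrative, not Spark's):

   ```scala
   import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

   // Java serialization writes only the *names* of the classes involved;
   // the reading JVM must be able to load each named class. For a UDF
   // defined in spark-shell, the stream names REPL wrapper classes like
   // $line14.$read$$iw$$iw..., which the Connect server cannot load.
   object UdfSerDeSketch {
     def serialize(o: AnyRef): Array[Byte] = {
       val bos = new ByteArrayOutputStream()
       val oos = new ObjectOutputStream(bos)
       try oos.writeObject(o) finally oos.close()
       bos.toByteArray
     }

     def deserialize(bytes: Array[Byte]): AnyRef = {
       val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
       try ois.readObject() finally ois.close()
     }

     def main(args: Array[String]): Unit = {
       // Analogous to dummyUdf in the repro; Scala function literals
       // compile to serializable lambdas.
       val dummyUdf: Int => Int = x => x + 5
       val g = deserialize(serialize(dummyUdf)).asInstanceOf[Int => Int]
       // This succeeds here only because the lambda's capturing class is
       // on this JVM's classpath; a server JVM without the REPL-generated
       // class fails at the same readObject call with ClassNotFoundException.
       println(g(1))
     }
   }
   ```

   In the same JVM the round-trip works; across JVMs it only works if the capturing class is also shipped to (or already present on) the server's classpath.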


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

