j7nhai opened a new issue, #7760:
URL: https://github.com/apache/incubator-gluten/issues/7760

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   Package the following class (note the package name) into a jar.
   
   ```
   package org.apache.spark.sql.hive.execution;
   
   import org.apache.hadoop.hive.ql.exec.UDF;
   
   public class UDFStringString extends UDF {
     public String evaluate(String s1, String s2) {
       return s1 + " java " + s2;
     }
   }
   ```
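
   For reference, a jar with this package layout could be built along these lines (a sketch; the hive-exec jar name and version are placeholder assumptions, not from the report):

   ```shell
   # Sketch of building the UDF jar; the hive-exec jar path/version is a
   # placeholder assumption, not taken from the report.
   javac -cp hive-exec-3.1.3.jar \
     org/apache/spark/sql/hive/execution/UDFStringString.java
   jar cf udf.jar \
     org/apache/spark/sql/hive/execution/UDFStringString.class
   ```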
   
   Then run the following SQL.
   
   ```
   CREATE TEMPORARY FUNCTION hive_string_string AS 'org.apache.spark.sql.hive.execution.UDFStringString';

   select hive_string_string("hello", "world");
   explain select hive_string_string("hello", "world");
   ```
   
   
   with the following configuration:
   ```
   spark.gluten.sql.columnar.backend.velox.udfLibraryPaths=file:///root/libmyudf.so
   ```
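
   Putting the pieces together, the setup can be reproduced along these lines (a sketch; the jar name and path `/root/udf.jar` are assumptions, while the library path is the one from the report):

   ```shell
   # Hypothetical one-shot reproduction; /root/udf.jar is an assumed name/path.
   spark-sql \
     --jars /root/udf.jar \
     --conf spark.gluten.sql.columnar.backend.velox.udfLibraryPaths=file:///root/libmyudf.so \
     -e 'CREATE TEMPORARY FUNCTION hive_string_string AS "org.apache.spark.sql.hive.execution.UDFStringString";
   SELECT hive_string_string("hello", "world");
   EXPLAIN SELECT hive_string_string("hello", "world");'
   ```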
   
   
   And its output is:
   
   ```
   hello java world
   Time taken: 3.392 seconds, Fetched 2 row(s)

   == Physical Plan ==
   VeloxColumnarToRow
   +- ^(2) ProjectExecTransformer [hello java world AS hive_string_string(hello, world)#16]
      +- ^(2) InputIteratorTransformer[fake_column#17]
         +- RowToVeloxColumnar
            +- *(1) Scan OneRowRelation[fake_column#17]
   ```
   
   This means the UDF actually ran on the JVM, since the output is "hello java world". The physical plan, however, shows it running natively (inside ProjectExecTransformer).
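
   One way to make the JVM-vs-native discrepancy visible from the query output alone is a diagnostic variant of the UDF that tags each result with the current JVM thread name (a hypothetical sketch; `UDFStringStringTraced` is not part of the report, and the Hive `UDF` base class is omitted so the snippet compiles standalone):

   ```java
   // Hypothetical diagnostic variant of the reported UDF. In the real jar it
   // would extend org.apache.hadoop.hive.ql.exec.UDF as in the report; the
   // base class is left out here so the sketch is self-contained.
   public class UDFStringStringTraced {
     // Tags the result with the JVM thread name: if query output carries the
     // tag, evaluate() demonstrably ran on the JVM rather than in native code.
     public String evaluate(String s1, String s2) {
       return s1 + " java " + s2
           + " [jvm-thread=" + Thread.currentThread().getName() + "]";
     }

     public static void main(String[] args) {
       System.out.println(new UDFStringStringTraced().evaluate("hello", "world"));
     }
   }
   ```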
   
   
   
   
   
   ### Spark version
   
   None
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   _No response_


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

