Kontinuation commented on issue #2397:
URL: https://github.com/apache/sedona/issues/2397#issuecomment-3530812778

   Implementing a 
[FunctionCatalog](https://github.com/apache/spark/blob/v4.0.1/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/FunctionCatalog.java)
 seems to be the way to go for scoping Sedona ST functions to a database 
instead of registering them in the global namespace. However, it is not easy to 
adapt Sedona's current ST function implementations to the 
[UnboundFunction/BoundFunction](https://github.com/apache/spark/tree/v4.0.1/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/functions)
 interfaces required by FunctionCatalog.
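
   To illustrate the gap, here is a much-simplified sketch of the two-step contract FunctionCatalog expects (the real interfaces live in `org.apache.spark.sql.connector.catalog.functions` and carry Spark `DataType`s; the interface and class names below are stand-ins, not the actual Spark API):

```java
// Simplified stand-ins for Spark's UnboundFunction/BoundFunction contract:
// the catalog first returns an unbound function, which must then be bound
// to concrete argument types before it can produce results.
interface Unbound {
    // Resolve a concrete implementation from the argument types.
    Bound bind(Class<?>[] argTypes);
}

interface Bound {
    Object produceResult(Object[] args);
}

// Hypothetical ST-style function showing the per-type binding step.
// Sedona's current functions are Catalyst expressions that handle types
// internally, so they don't decompose into this bind/execute split easily.
class CoordCountUnbound implements Unbound {
    public Bound bind(Class<?>[] argTypes) {
        if (argTypes.length != 1 || argTypes[0] != double[].class) {
            throw new IllegalArgumentException("expects one double[] argument");
        }
        // Placeholder computation standing in for real geometry logic.
        return args -> ((double[]) args[0]).length;
    }
}
```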
   
   Adding a customizable prefix to the function names seems to be a valid 
short-term solution. The prefix could be configured through a Spark 
configuration such as `spark.sedona.udf.prefix`. I have done something similar 
before to make Apache Sedona and GeoMesa Spark SQL coexist: 
https://www.geomesa.org/documentation/stable/user/spark/sparksql.html#using-geomesa-sparksql-with-apache-sedona.
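
   A minimal sketch of what that registration step could look like, assuming `spark.sedona.udf.prefix` as the config key (the key name and the map standing in for Spark's `FunctionRegistry` are both hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: register every Sedona function under a configurable
// prefix so the names don't collide with other extensions in the global
// namespace. The Map stands in for Spark's FunctionRegistry.
public class PrefixedRegistration {
    static final String PREFIX_CONF = "spark.sedona.udf.prefix";

    // Registers each function as prefix + name (e.g. "sedona_ST_Area").
    // An empty/absent prefix preserves today's behavior.
    static Map<String, Function<Object[], Object>> registerAll(
            Map<String, String> sparkConf,
            Map<String, Function<Object[], Object>> sedonaFunctions) {
        String prefix = sparkConf.getOrDefault(PREFIX_CONF, "");
        Map<String, Function<Object[], Object>> registry = new HashMap<>();
        sedonaFunctions.forEach((name, fn) -> registry.put(prefix + name, fn));
        return registry;
    }
}
```

With `spark.sedona.udf.prefix` set to `sedona_`, `ST_Area` would be looked up as `sedona_ST_Area`, which is essentially how the GeoMesa coexistence setup linked above disambiguates the two function sets.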

