alamb commented on PR #14392:
URL: https://github.com/apache/datafusion/pull/14392#issuecomment-2628522279

   > I'm wondering how the user should choose which function to use. For instance, the `to_timestamp` function may behave differently between Spark and non-Spark environments.
   > 
   > So the developer implements a Spark-compliant `to_timestamp` function in the crate, but when they run the query `select to_timestamp() from t1`, which implementation will be picked up? Should we introduce something like a `FUNCTION CATALOG` or similar to switch between implementations?
   
   What I was imagining was that, just like today, DataFusion comes with a set of "included" functions (mostly following PostgreSQL semantics). Nothing new is added to the core.
   
   For users who want to use the Spark functions, I am hoping we have something like:
   
   ```rust
   use datafusion_functions_spark::install_spark;
   ...
   // by default this SessionContext has all the same functions as it does today
   let mut ctx = SessionContext::new();
   // for users who want spark compatible functions, install the
   // spark compatibility functions, potentially overriding
   // some/all "built in functions", rewrites, etc
   install_spark(&mut ctx)?;
   // running queries now use spark compatible syntax
   ctx.sql(...)
   ```

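   To make the idea concrete, here is a minimal sketch of what `install_spark` could do, assuming the hypothetical `datafusion_functions_spark` crate exposes each Spark-compatible function as a `ScalarUDF`. Registering a UDF under a name that is already taken replaces the existing entry in the context's function registry, which is how the Spark versions would override the built-ins:

   ```rust
   use datafusion::error::Result;
   use datafusion::logical_expr::ScalarUDF;
   use datafusion::prelude::SessionContext;

   // Hypothetical constructor: the real crate would provide one of
   // these for each Spark-compatible function.
   fn spark_to_timestamp() -> ScalarUDF {
       todo!("Spark-compatible `to_timestamp` implementation")
   }

   // Sketch of `install_spark`: register each Spark-compatible
   // `ScalarUDF` with the context. `register_udf` replaces any function
   // already registered under the same name, so this shadows the
   // built-in `to_timestamp` with the Spark-compatible one.
   pub fn install_spark(ctx: &mut SessionContext) -> Result<()> {
       ctx.register_udf(spark_to_timestamp());
       // ... register the remaining functions, rewrites, etc.
       Ok(())
   }
   ```

   Because the registration is per `SessionContext` rather than a global switch, two contexts in the same process could resolve `to_timestamp` differently, which is one answer to the `FUNCTION CATALOG` question above.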
