Omega359 commented on issue #15914: URL: https://github.com/apache/datafusion/issues/15914#issuecomment-3432229706
> A question I have is how do we plan to support the `spark.sql.ansi.enabled` config? I've seen a few PRs for Spark functions that try to cater for this config, but we don't exactly have a way to set it, so it usually results in dead code (e.g. a hardcoded boolean that is set to false for now) or each function rolling its own way of toggling it (e.g. another argument to the function).

The config could live in [ConfigOptions](https://github.com/apache/datafusion/blob/114beec770dfc7f12e581a5e178c897104b96c70/datafusion/common/src/config.rs#L1089) and be read inside a function via [ScalarFunctionArgs](https://github.com/apache/datafusion/blob/114beec770dfc7f12e581a5e178c897104b96c70/datafusion/expr/src/udf.rs#L372). I plan at some point to submit a PR for an 'ansi'-type config in DataFusion, since my fork mainly exists to have that in `to_date` and `to_timestamp`.
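Roughly the shape that approach could take, as a sketch only: `SparkOptions`, the `spark.ansi_enabled` key, and `SparkToDate` are placeholder names (nothing like them exists upstream yet), and reading the options off `ScalarFunctionArgs` assumes the `config_options` field from the struct linked above.

```rust
// Rough sketch only. `SparkOptions`, the `spark.ansi_enabled` key, and
// `SparkToDate` are hypothetical names; the `config_options` field on
// `ScalarFunctionArgs` is assumed from the struct linked above.
use std::any::Any;

use datafusion_common::arrow::datatypes::DataType;
use datafusion_common::config::ConfigExtension;
use datafusion_common::{extensions_options, Result};
use datafusion_expr::{
    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,
};

// A user-defined config extension surfaced under the `spark.` prefix,
// e.g. `SET spark.ansi_enabled = true`.
extensions_options! {
    pub struct SparkOptions {
        /// Mirrors spark.sql.ansi.enabled (hypothetical key, not upstream yet)
        pub ansi_enabled: bool, default = false
    }
}

impl ConfigExtension for SparkOptions {
    const PREFIX: &'static str = "spark";
}

#[derive(Debug)]
pub struct SparkToDate {
    signature: Signature,
}

impl SparkToDate {
    pub fn new() -> Self {
        Self {
            signature: Signature::variadic_any(Volatility::Immutable),
        }
    }
}

impl ScalarUDFImpl for SparkToDate {
    fn as_any(&self) -> &dyn Any {
        self
    }

    fn name(&self) -> &str {
        "spark_to_date"
    }

    fn signature(&self) -> &Signature {
        &self.signature
    }

    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {
        Ok(DataType::Date32)
    }

    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {
        // Read the session-level flag; fall back to the non-ANSI behaviour if
        // the extension was never registered on the SessionConfig.
        let ansi_enabled = args
            .config_options
            .extensions
            .get::<SparkOptions>()
            .map(|opts| opts.ansi_enabled)
            .unwrap_or(false);

        if ansi_enabled {
            // ANSI mode: malformed input should surface as an error.
            todo!("strict parse: return Err on bad input")
        } else {
            // Legacy mode: malformed input becomes NULL.
            todo!("lenient parse: return NULL on bad input")
        }
    }
}
```

On the session side the extension would be registered with something like `SessionConfig::new().with_option_extension(SparkOptions::default())`, after which a `SET spark.ansi_enabled = true` would flow through to every invocation with no extra function argument or hardcoded boolean needed.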
