comphead commented on issue #5600: URL: https://github.com/apache/datafusion/issues/5600#issuecomment-2627708479
> > > With that in mind, a joint effort on something in the main DataFusion repo or a `datafusion-contrib` repo could both work, and we are open to either option.
> >
> > I am -1 on moving any Comet code to `datafusion-contrib`. Apache governance is important to my employer and I would not be able to contribute to any `datafusion-contrib` repo. Of course, others are welcome to go create something there.
>
> Perhaps as an alternative we could setup a datafusion-udfs (pick an appropriate name) under the apache umbrella and managed by datafusion pmc's where this could live? Just a thought.

Agreed, this idea has been floating around for some time: separate out a core plus a set of common built-in functions, and provide extension UDF crates for each specific use case (Spark, etc.).

One interesting challenge: if someone builds a cross-database use case and imports the same function from both `datafusion-spark-udfs-crate` and `datafusion-pg-udfs-crate`, how would the two be differentiated in a SQL statement for a cross-system read? That is a more complicated scenario, though, and maybe not even realistic.
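One way to keep same-named functions from colliding is for each dialect crate to register its UDFs under a distinct name prefix, since DataFusion's function registry is flat. Below is a minimal sketch of that idea using `create_udf` / `register_udf`; the crate layout, the prefix convention, and the `spark_char_length` function are all hypothetical, and the exact `create_udf` signature varies between DataFusion versions.

```rust
// Sketch only: a hypothetical per-dialect UDF crate registers its functions under a
// prefix so that Spark-flavoured and Postgres-flavoured implementations of the "same"
// function can coexist in one SessionContext.
use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Int64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::common::cast::as_string_array;
use datafusion::error::Result;
use datafusion::logical_expr::{create_udf, ColumnarValue, Volatility};
use datafusion::prelude::SessionContext;

/// Hypothetical Spark-style char_length: counts characters in a UTF-8 column.
fn spark_char_length(args: &[ColumnarValue]) -> Result<ColumnarValue> {
    let arrays = ColumnarValue::values_to_arrays(args)?;
    let input = as_string_array(&arrays[0])?;
    let lengths: Int64Array = input
        .iter()
        .map(|v| v.map(|s| s.chars().count() as i64))
        .collect();
    Ok(ColumnarValue::Array(Arc::new(lengths) as ArrayRef))
}

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // A hypothetical `datafusion-spark-udfs-crate` registration helper could do
    // something like this for each of its functions, while a `datafusion-pg-udfs-crate`
    // would register `pg_char_length`, and so on. Note: older DataFusion versions take
    // `Arc<DataType>` for the return type instead of `DataType`.
    ctx.register_udf(create_udf(
        "spark_char_length",
        vec![DataType::Utf8],
        DataType::Int64,
        Volatility::Immutable,
        Arc::new(spark_char_length),
    ));

    // The prefix makes it unambiguous which dialect's semantics the query asks for.
    let df = ctx.sql("SELECT spark_char_length('datafusion')").await?;
    df.show().await?;
    Ok(())
}
```

A schema- or catalog-qualified function namespace would be a cleaner way to express this, but plain name prefixes work with the flat function registry as it exists today.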