Hey all,

There have been ongoing discussions on this list and in Slack about
improving interoperability between Spark and Druid by creating Spark
connectors that can read from and write to Druid clusters. As these
discussions have begun to converge on a potential solution, I've opened a
proposal <https://github.com/apache/druid/issues/9780> laying out how we
can implement this functionality.
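
For anyone curious what this could look like from the Spark side, here's a
rough sketch of the kind of usage the connectors would enable. To be clear,
the "druid" format name and every option below are placeholders I'm using
for illustration only; the actual API is what's described in the proposal.

```scala
import org.apache.spark.sql.SparkSession

object DruidConnectorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("druid-connector-sketch")
      .getOrCreate()

    // Hypothetical read path: load a Druid datasource as a DataFrame.
    val events = spark.read
      .format("druid")                               // placeholder format name
      .option("druid.broker.host", "localhost:8082") // placeholder option
      .option("table", "wikipedia")                  // placeholder datasource
      .load()

    // Hypothetical write path: persist a DataFrame back to Druid.
    events.write
      .format("druid")                               // placeholder format name
      .option("druid.segment.granularity", "DAY")    // placeholder option
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```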

Thanks,
Julian
