Hi,

We are using the Calcite Druid adapter to query data stored in Druid. However, many operations are not pushed down to Druid, and the built-in enumerable execution engine becomes the bottleneck for the queries that are not pushed down. Since we also have use cases that join Druid data with outside data sources, using the Spark execution engine seems to be the way to go.
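For context, here is roughly how we connect today, plus the change we think we would make. This is only a minimal sketch: the model file name, the "wiki" table, and the "channel" column are placeholder names in the style of the Druid adapter docs, not our real schema, and I am assuming from the connect-string docs that the "spark" property is what switches the engine; please correct me if that is wrong.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class CalciteDruidQuery {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // Model file pointing at our Druid datasources (placeholder name).
    props.setProperty("model", "druid_model.json");
    // Assumption: enabling this would use Spark instead of the default
    // enumerable engine for the parts of the plan not pushed to Druid.
    // props.setProperty("spark", "true");
    try (Connection conn = DriverManager.getConnection("jdbc:calcite:", props);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "select \"channel\", count(*) from \"wiki\" group by \"channel\"")) {
      while (rs.next()) {
        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
      }
    }
  }
}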
Has anyone used the Spark adapter to make Spark the Calcite execution engine? How mature is the Spark adapter? Is there any documentation on how to use it? How does it compare to using Hive?

Thanks,
-JD
