I am curious about your use case. Are you not losing out on Calcite's
optimisations when you use Spark? Would it be possible for you to share a
general approach that lets us keep the optimisations done by Calcite while
using Spark on top of it?


On Fri, 4 Aug 2023 at 5:19 PM, P.F. ZHAN <[email protected]> wrote:

> Generally speaking, the SEARCH operator is very good, but when we use
> Calcite to optimize the logical plan and then use Spark to execute it,
> SEARCH is unsupported. So is there a more elegant way to disable the SEARCH
> operator? Or how can we convert the SEARCH operator to the IN operator
> before converting the Calcite logical plan to the Spark logical plan? If we
> do this, we need to consider Join / Filter; are there any other RelNodes to
> handle?
>
> Maybe it would be better for this optimization to be optional at present,
> since many query execution engines do not support this operator?
>
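One option worth noting: Calcite ships `RexUtil.expandSearch`, which rewrites
a SEARCH call back into the equivalent comparison/OR form, so engines that do
not understand SEARCH (or Sarg literals) can still consume the plan. Below is
a minimal sketch of applying it across the whole tree, so Filter, Join,
Project, etc. are all covered; it assumes Apache Calcite is on the classpath,
and the class name `SearchExpander` is just illustrative:

```java
import org.apache.calcite.rel.RelHomogeneousShuttle;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.rex.RexShuttle;
import org.apache.calcite.rex.RexUtil;
import org.apache.calcite.sql.SqlKind;

public final class SearchExpander {

  private SearchExpander() {}

  /**
   * Rewrites every SEARCH RexCall in the plan into the equivalent
   * AND/OR/comparison form before translating to another engine's plan.
   */
  public static RelNode expandSearches(RelNode plan, RexBuilder rexBuilder) {
    // Rewrites SEARCH calls in a single node's expressions.
    RexShuttle rexShuttle = new RexShuttle() {
      @Override public RexNode visitCall(RexCall call) {
        RexNode visited = super.visitCall(call); // rewrite operands first
        if (visited.getKind() == SqlKind.SEARCH) {
          return RexUtil.expandSearch(rexBuilder, null, visited);
        }
        return visited;
      }
    };
    // Walks the whole RelNode tree, applying the RexShuttle to each node's
    // expressions (Filter conditions, Join conditions, Project exprs, ...).
    return plan.accept(new RelHomogeneousShuttle() {
      @Override public RelNode visit(RelNode other) {
        RelNode node = super.visit(other); // rewrite inputs first
        return node.accept(rexShuttle);
      }
    });
  }
}
```

With this approach there is no need to enumerate Join / Filter by hand; the
shuttle visits every RelNode uniformly. Whether this round-trips all Sarg
cases (e.g. ranges, NULL semantics) the same way an IN list would is worth
verifying against your Spark translation layer.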
