To use Calcite on regular data (e.g. CSV files) you implement a table adapter 
(i.e. the TableFactory interface, and perhaps also SchemaFactory).
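A minimal sketch of such a factory, assuming Calcite's core schema API; the "file" operand key and the CsvTable class are hypothetical placeholders for whatever your own model JSON and Table implementation use:

```java
import java.util.Map;

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.Table;
import org.apache.calcite.schema.TableFactory;

public class CsvTableFactory implements TableFactory<Table> {
  @Override
  public Table create(SchemaPlus schema, String name,
      Map<String, Object> operand, RelDataType rowType) {
    // "file" is whatever operand key you define in your model JSON;
    // CsvTable is your own Table implementation (not shown here).
    String fileName = (String) operand.get("file");
    return new CsvTable(fileName, rowType);
  }
}
```

You would then reference the factory's class name from the "factory" attribute of a table entry in your model JSON.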

To use Calcite on streaming data, you also implement a table adapter, but the 
Table objects it produces must additionally implement the StreamableTable 
interface.

There are examples in StreamTest.

Spark is a different kind of problem. I would characterize Spark as an engine, 
not a data source. You can use the Spark adapter to generate code for Spark, 
but it’s fair to say that the Spark adapter targets an old version of 
Spark. (We’d love it if someone would help modernize our Spark adapter.)

Julian

> On May 23, 2016, at 11:00 PM, Albert <[email protected]> wrote:
> 
> Hi,
>    is there any document describing how to use `stream` on my own data ?
> the codes/docs suggests all those queries are running on special schema.
>    and similar case for `spark`.
>    any suggestions are appreciated. thanks.
> 
> 
> 
> -- 
> ~~~~~~~~~~~~~~~
> no mistakes
> ~~~~~~~~~~~~~~~~~~
