Nothing exists in Arrow core to do this. You will need to decide manually
how to batch and serialize data into and out of Kafka. The recent
discussion [1] on user@ about transferring data into and out of Redis
provides some pointers on how to do this.
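
For example, something along these lines could work. This is only a rough
sketch, not an official Arrow API: it assumes the confluent-kafka client,
a hypothetical topic name "arrow-batches", a local broker, and a made-up
schema. Arrow's IPC stream format is used to serialize one RecordBatch per
Kafka message:

import pyarrow as pa
from confluent_kafka import Producer

# Hypothetical topic and broker address; adjust for your setup.
TOPIC = "arrow-batches"

schema = pa.schema([("id", pa.int64()), ("value", pa.float64())])

def serialize_batch(batch):
    # Encode a single RecordBatch using Arrow's IPC stream format.
    sink = pa.BufferOutputStream()
    writer = pa.ipc.new_stream(sink, batch.schema)
    writer.write_batch(batch)
    writer.close()
    return sink.getvalue().to_pybytes()

def deserialize_payload(payload):
    # Decode one Kafka message payload back into a pyarrow Table.
    reader = pa.ipc.open_stream(payload)
    return reader.read_all()

# Producer side: one Kafka message per RecordBatch.
producer = Producer({"bootstrap.servers": "localhost:9092"})
batch = pa.record_batch(
    [pa.array([1, 2, 3]), pa.array([0.1, 0.2, 0.3])], schema=schema
)
producer.produce(TOPIC, value=serialize_batch(batch))
producer.flush()

On the consuming side you would call deserialize_payload() on each
message's value and concatenate the resulting tables. How many rows you
pack into each RecordBatch (and thus each Kafka message) is the main knob
you would have to tune yourself.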

Note that *fetch_pandas_all* is something that Snowflake developed on
their own (I imagine on top of the existing libraries).

[1]
https://lists.apache.org/x/thread.html/r949d6e477a0e4ed1807a5b305a3b79d045fa296cfbd1b66313463cc6@%3Cuser.arrow.apache.org%3E

On Tue, Jul 21, 2020 at 11:51 AM Mehul Batra <mehul.ba...@pb.com> wrote:

> Hi Arrow Community,
>
> Do we have any API to ingest and process Apache Kafka data fast using
> pyarrow/Python, just like we have *fetch_pandas_all* to ingest and
> process Snowflake data fast?
>
> Thanks,
>
> Mehul Batra