Thanks for sharing this, Erik!

It would be really convenient to have a Python option to do something
like that. Our ML team is mostly a Python shop, and we are also using
Kubeflow Pipelines to orchestrate our ML pipelines (mostly using their
Python SDK to author them).

Please let me know if you can think of any way we could do this with Python.
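In case it helps frame the question: the partitioned-read pattern from the JDBC blog could presumably be mimicked in plain Python, by splitting one query into per-range queries that each worker (e.g. a Beam ParDo) reads independently with snowflake-connector-python. This is only an illustrative sketch; the table and column names are placeholders, not anything from an actual API:

```python
# Illustrative sketch: split a Snowflake query into contiguous ID
# ranges so each range can be read in parallel (e.g. one range per
# Beam worker), mirroring the partitioned-JDBC pattern from the blog.
# Table/column names below are placeholders.

def split_query(table, id_column, min_id, max_id, num_splits):
    """Build one SELECT statement per contiguous ID range."""
    span = max_id - min_id + 1
    step = -(-span // num_splits)  # ceiling division
    queries = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + step - 1, max_id)
        queries.append(
            f"SELECT * FROM {table} "
            f"WHERE {id_column} BETWEEN {lo} AND {hi}"
        )
        lo = hi + 1
    return queries
```

Each resulting query string could then feed a DoFn that opens its own connection via snowflake-connector-python and yields rows, so the reads run in parallel across workers.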

Thanks so much!

On Mon, Jan 27, 2020 at 1:18 PM Erik Willsey <
[email protected]> wrote:

> You can use the JDBC driver.  Here's a blog that describes JDBC usage in
> general:
> https://nl.devoteam.com/en/blog-post/querying-jdbc-database-parallel-google-dataflow-apache-beam/
>
>
> On Mon, Jan 27, 2020 at 12:32 PM Alan Krumholz <[email protected]>
> wrote:
>
>> Hi,
>> We are using beam and (dataflow) at my company and would like to use it
>> to read and write data from snowflake.
>>
>> Does anybody know if there are any source/sink available for snowflake?
>>
>> if not, what would be the easiest way to create those? (maybe there is
>> something for sqlalchemy that we could leverage for that?)
>>
>>
>> Thanks so much!
>>
>> Alan
>>
>
