Hi Michaël,

I'm not a Dremio expert; I'm just trying to evaluate the potential of this
technology, what impact it could have on Spark, how the two could work
together, and how Spark could make even further use of Arrow internally
alongside its existing algorithms.

Dremio already has a fairly rich API set that lets you access metadata, run
SQL queries, or even create virtual datasets programmatically. It also ships
with a lot of predefined functions, and I imagine there will be more and more
in the future, e.g. machine-learning functions like the ones found in Azure
SQL Server, which lets you mix SQL and ML.  Access to Dremio is made through
JDBC, so we could imagine accessing virtual datasets from Spark and
dynamically creating new datasets from the API, connected to Parquet files
stored dynamically by Spark on HDFS, Azure Data Lake, or S3... Of course, a
tighter integration between the two would be even better, with a Spark
read/write connector to Dremio :)
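To make the JDBC idea concrete, here is a minimal sketch of how a Dremio virtual dataset might be read into Spark today, without a dedicated connector. The hostname, credentials, and dataset path are placeholders; only the JDBC URL format (`jdbc:dremio:direct=<host>:<port>`, default port 31010) and the driver class `com.dremio.jdbc.Driver` come from Dremio's documented JDBC access.

```python
# Hedged sketch: reading a Dremio virtual dataset into Spark over plain JDBC.
# All names below (host, space, dataset, user) are illustrative placeholders.

def dremio_jdbc_url(host: str, port: int = 31010) -> str:
    """Build a direct (non-ZooKeeper) Dremio JDBC URL."""
    return f"jdbc:dremio:direct={host}:{port}"

# With a SparkSession available and the Dremio JDBC driver jar on the
# classpath, the virtual dataset could then be loaded like any JDBC source:
#
# df = (spark.read.format("jdbc")
#       .option("url", dremio_jdbc_url("dremio-host"))
#       .option("driver", "com.dremio.jdbc.Driver")
#       .option("dbtable", '"my_space"."my_virtual_dataset"')
#       .option("user", "user")
#       .option("password", "secret")
#       .load())
```

This round-trips everything through JDBC row by row, which is exactly why a native Spark connector (ideally exchanging Arrow batches directly) would be a much better fit.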

regards
xavier



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
