Hi list,

I am trying to understand whether it makes sense to use Spark as an
enabling platform for Deep Learning.

My open questions to you are:

   - Do you use Apache Spark in your DL pipelines?
   - How do you use Spark for DL? Is it just a stand-alone stage in the
   workflow (e.g. a data preparation script), or is it more integrated?

I see a major advantage in using Spark as a unified entry point: for
example, you can easily abstract over data sources and leverage existing
team skills for data pre-processing and training. On the flip side, you
may hit limitations, such as which framework versions are supported.
What is your experience?

Thanks!
