Hi,

I have a use case for processing simple ETL-like jobs. The data volume is very
small (a few GB at most) and fits easily in my running Java application's
memory. I would like to take advantage of the Spark Dataset API, but I don't
need any Spark deployment (standalone / cluster). Can I embed Spark in an
existing Java application and still use it?

I have heard that Spark's local mode is intended only for testing. For small
data sets like this, can it still be used in production? Please advise on any
disadvantages.

Regards
Reddy


