Hi all,

I have a central app that currently kicks off old-style Hadoop M/R jobs
either on-demand or via a scheduling mechanism.

My intention is to gradually port this app over to using a Spark standalone
cluster. The data will remain on HDFS.

A couple of questions:

1. Is there a way to get Spark jobs to load their classes from jars that have
been pre-distributed to HDFS? I need to launch these jobs programmatically
from said application (I've pasted a rough sketch of what I mean below).

2. Is SparkContext meant to be used in multi-threaded use cases? That is, can
multiple independent jobs run concurrently using the same SparkContext, or
should I create a new one each time my app needs to run a job?
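
To make the above concrete, here is roughly what I'm picturing. This is only a
sketch: the master URL, HDFS paths and thread-pool size are made up, and I'm
assuming setJars accepts hdfs:// URLs and that one SparkContext can be shared
across threads, which is exactly what I'm asking about.

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import org.apache.spark.{SparkConf, SparkContext}

object JobLauncher {
  def main(args: Array[String]): Unit = {
    // Question 1: point the job at jars already sitting on HDFS
    // (master URL and jar path below are hypothetical).
    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")
      .setAppName("central-app")
      .setJars(Seq("hdfs://namenode:8020/apps/jobs/my-job.jar"))

    val sc = new SparkContext(conf)

    // Question 2: submit several independent jobs from different threads,
    // all sharing the single SparkContext created above.
    implicit val ec = ExecutionContext.fromExecutorService(
      Executors.newFixedThreadPool(4))

    val jobA = Future { sc.textFile("hdfs://namenode:8020/data/input-a").count() }
    val jobB = Future { sc.textFile("hdfs://namenode:8020/data/input-b").count() }

    // Wait for both jobs before shutting the context down.
    println("counts: " + Await.result(jobA, 1.hour) + ", " + Await.result(jobB, 1.hour))

    sc.stop()
    ec.shutdown()
  }
}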

Thanks,
Ishaaq


