Is there anything equivalent to Hadoop's Job
(org.apache.hadoop.mapreduce.Job) in Spark? Once I submit the Spark job, I
want to concurrently read the job status from my SparkListener interface
implementation methods. I am trying to find a way to wrap the Spark
submission in one thread and read the SparkListener interface
implementation methods in another thread.
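
Roughly, what I have in mind is something like the sketch below (Scala,
assuming the driver has direct access to the SparkContext; the local[2]
master and the RDD computation are placeholders for illustration only):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}

object JobStatusSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder master and app name, just to make the sketch self-contained.
    val sc = new SparkContext(
      new SparkConf().setAppName("job-status-sketch").setMaster("local[2]"))

    // Listener callbacks are invoked by Spark's internal event bus thread,
    // so they already run concurrently with the thread that submits the job.
    sc.addSparkListener(new SparkListener {
      override def onJobStart(jobStart: SparkListenerJobStart): Unit =
        println(s"job ${jobStart.jobId} started")
      override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
        println(s"job ${jobEnd.jobId} ended: ${jobEnd.jobResult}")
    })

    // Submit the actual work from a separate thread, as described above.
    val submitter = new Thread(new Runnable {
      def run(): Unit = {
        val total = sc.parallelize(1 to 1000000).map(_ * 2).reduce(_ + _)
        println(s"result = $total")
      }
    })
    submitter.start()
    submitter.join()
    sc.stop()
  }
}

Is something along these lines the intended way to track job status, or
does Spark expose a more direct job handle comparable to Hadoop's Job?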


