[ https://issues.apache.org/jira/browse/SPARK-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057824#comment-14057824 ]

Tathagata Das edited comment on SPARK-2345 at 7/11/14 12:39 AM:
----------------------------------------------------------------

I must be missing something. If you want to run a Spark job inside the 
DStream.foreach function, you can just call any RDD action (count, collect, 
first, saveAsXXXXFile, etc.) inside that foreach function. context.runJob does 
not need to be called *explicitly*. All of the existing RDD actions (including 
the most generic, rdd.foreachPartition) should be sufficient for most 
requirements.
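
For example, something like the following sketch (the stream name and its 
socket source are placeholders, not from this issue; foreachRDD is the 
current name of DStream.foreach) runs a distributed Spark job on every batch 
without ever touching runJob:

{code:scala}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ActionsInsideForeach {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ActionsInsideForeach")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Placeholder input; any DStream behaves the same way.
    val lines = ssc.socketTextStream("localhost", 9999)

    lines.foreachRDD { rdd =>
      // count() is an ordinary RDD action: it launches a Spark job
      // internally, so sparkContext.runJob never has to be called directly.
      println("Records in this batch: " + rdd.count())

      // foreachPartition is the most generic action: arbitrary logic runs
      // on the executors, one task per partition.
      rdd.foreachPartition { iter =>
        iter.foreach(println)
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
{code}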

Maybe adding a code example of what you intend to do would help us 
disambiguate this?


> ForEachDStream should have an option of running the foreachFunc on Spark
> ------------------------------------------------------------------------
>
>                 Key: SPARK-2345
>                 URL: https://issues.apache.org/jira/browse/SPARK-2345
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>            Reporter: Hari Shreedharan
>
> Today the generated Job simply calls the foreachFunc, but does not run it on 
> Spark itself using the sparkContext.runJob method.
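
For context on the description above: the built-in RDD actions are themselves 
implemented on top of SparkContext.runJob (rdd.foreachPartition(f) is 
essentially sc.runJob(rdd, (iter: Iterator[T]) => f(iter))), so calling any 
action inside the foreach function already runs on Spark. A minimal sketch, 
illustrative only and not from the issue:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object RunJobSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("RunJobSketch"))
    val rdd = sc.parallelize(1 to 100, numSlices = 4)

    // Calling runJob explicitly: one result per partition.
    val sums: Array[Int] = sc.runJob(rdd, (iter: Iterator[Int]) => iter.sum)
    println(sums.mkString(", "))

    // The same work expressed through ordinary actions.
    println(rdd.mapPartitions(iter => Iterator(iter.sum)).collect().mkString(", "))

    sc.stop()
  }
}
{code}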


