Hi,

On Thu, Sep 4, 2014 at 10:33 AM, Tathagata Das <tathagata.das1...@gmail.com> wrote:
> In the current state of Spark Streaming, creating separate Java processes,
> each having its own streaming context, is probably the best approach to
> dynamically adding and removing input sources. All of these should be
> able to use a YARN cluster for resource allocation.

So, for example, I would write a server application that accepts a command like "createNewInstance" and then calls spark-submit, pushing my actual application to the YARN cluster? Or could I use spark-jobserver?

Thanks
Tobias
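P.S. To make the first option concrete, here is a rough sketch of what such a server-side launcher could look like, spawning one driver per "createNewInstance" command by shelling out to spark-submit. The main class, jar path, and application argument are placeholders, not anything from an actual app:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: one Spark Streaming driver per input source, each launched
// as a separate process via spark-submit against a YARN cluster.
public class StreamingInstanceLauncher {

    // Build the spark-submit command line for a new streaming instance.
    // Class name, jar path, and the trailing argument are hypothetical.
    static List<String> buildCommand(String instanceId) {
        return Arrays.asList(
            "spark-submit",
            "--master", "yarn-cluster",              // driver runs inside YARN
            "--class", "com.example.MyStreamingApp", // placeholder main class
            "--name", "streaming-" + instanceId,     // distinguish instances in the YARN UI
            "/path/to/my-streaming-app.jar",         // placeholder application jar
            instanceId                               // tells the app which source to consume
        );
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildCommand("instance-42");
        System.out.println(String.join(" ", cmd));
        // To actually spawn the process (requires spark-submit on the PATH):
        // new ProcessBuilder(cmd).inheritIO().start();
    }
}
```

Removing an instance would then amount to killing the corresponding YARN application (e.g. via `yarn application -kill <appId>`), since each streaming context lives in its own process.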