All,

I am working on upgrading OODT to allow it to process streaming data
alongside traditional non-streaming jobs.  This means that some jobs need
to be run by the resource manager, while other jobs need to be submitted to
the stream-processing framework.  Therefore, processing needs to be forked
or multiplexed at some point in the life-cycle.

There are two places where this can be done: the workflow manager's
runners, and the resource manager.  Currently, I am working on building
workflow runners and doing the job-multiplexing there, because this cuts
out one superfluous step for the streaming jobs (namely, going to the
resource manager before being routed).
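
To make the idea concrete, here is a minimal sketch of what the
multiplexing decision in a workflow runner could look like.  The class,
method, and backend names below are illustrative assumptions, not OODT's
actual runner API:

```java
// Hypothetical sketch of job-multiplexing inside a workflow runner.
// JobType, route(), and the backend names are assumptions for
// illustration only; they are not part of OODT's real interfaces.
public class MultiplexingRunner {

    enum JobType { BATCH, STREAMING }

    // Decide which backend a job should go to based on its type.
    // Streaming jobs skip the resource manager entirely, which is
    // the "superfluous step" the runner-level approach avoids.
    static String route(JobType type) {
        switch (type) {
            case STREAMING:
                return "stream-processor";   // submitted directly
            case BATCH:
            default:
                return "resource-manager";   // traditional batch path
        }
    }

    public static void main(String[] args) {
        System.out.println(route(JobType.STREAMING)); // stream-processor
        System.out.println(route(JobType.BATCH));     // resource-manager
    }
}
```

The alternative (multiplexing inside the resource manager) would put the
same switch one step later, after every job has already been submitted
there.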

Does this approach make sense?  Any comments are welcome.

-Michael Starch
