Chris,

I did find the Runners in the trunk and used them for the prototype piece to submit to Spark. Specifically,
http://svn.apache.org/repos/asf/oodt/branches/wengine-branch/wengine/src/main/java/org/apache/oodt/cas/workflow/engine/runner/MappedMultiRunner.java
has yet to be ported over to the trunk, and I am wondering if there is a better solution than using a Runner to submit to multiple Runners.
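To make the question concrete, the pattern under discussion is a runner that holds a map of job types to underlying runners and dispatches each job accordingly. The sketch below is illustrative only: the interface, class, and method names, and the "type:name" job-id convention, are assumptions for the example, not the actual wengine or trunk API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a "mapped multi-runner": NOT the real OODT API.
public class MappedMultiRunnerSketch {

    /** Minimal stand-in for a workflow task runner. */
    interface Runner {
        String execute(String jobId);
    }

    /** Routes each job to a runner chosen by the job's declared type. */
    static class MappedMultiRunner implements Runner {
        private final Map<String, Runner> runnersByType = new HashMap<>();
        private final Runner defaultRunner;

        MappedMultiRunner(Runner defaultRunner) {
            this.defaultRunner = defaultRunner;
        }

        void register(String jobType, Runner runner) {
            runnersByType.put(jobType, runner);
        }

        @Override
        public String execute(String jobId) {
            // Hypothetical convention for this sketch: ids look like "type:name".
            String type = jobId.split(":", 2)[0];
            Runner target = runnersByType.getOrDefault(type, defaultRunner);
            return target.execute(jobId);
        }
    }

    public static void main(String[] args) {
        // Streaming jobs go straight to the stream engine; everything else
        // falls through to the default (resource-manager-backed) runner.
        MappedMultiRunner multi =
            new MappedMultiRunner(id -> "resource-manager ran " + id);
        multi.register("stream", id -> "stream engine ran " + id);

        System.out.println(multi.execute("stream:ingest"));
        System.out.println(multi.execute("batch:crawl"));
    }
}
```

The point of the design is that streaming jobs skip the resource-manager hop entirely, which is the "superfluous step" mentioned in the original question below.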
-Michael

On Wed, Aug 6, 2014 at 9:08 AM, Mattmann, Chris A (3980) <[email protected]> wrote:

> See OODT-215 and OODT-491. The functionality is not absent, it is present,
> and those issues are the current status.
>
> Sent from my iPhone
>
> > On Aug 6, 2014, at 8:53 AM, "Michael Starch" <[email protected]> wrote:
> >
> > Chris,
> >
> > I believe I am working on the trunk of OODT. The WEngine branch already
> > does what I need it to; however, the absence of this functionality in trunk
> > led me to wonder if this approach is undesired...hence this question.
> >
> > -Michael
> >
> >
> > On Wed, Aug 6, 2014 at 8:50 AM, Chris Mattmann <[email protected]> wrote:
> >
> >> Hi Mike,
> >>
> >> I think you are using the wengine branch of Apache OODT.
> >> That is unmaintained. I would sincerely urge you to get
> >> this working in trunk; that's where the developers are working
> >> right now.
> >>
> >> Cheers,
> >> Chris
> >>
> >> P.S. Let me think more about the below; I have some ideas there.
> >>
> >> ------------------------
> >> Chris Mattmann
> >> [email protected]
> >>
> >> -----Original Message-----
> >> From: Michael Starch <[email protected]>
> >> Reply-To: <[email protected]>
> >> Date: Wednesday, August 6, 2014 8:29 AM
> >> To: <[email protected]>
> >> Subject: Multiple Processing Paradigms at Once
> >>
> >>> All,
> >>>
> >>> I am working on upgrading OODT to allow it to process streaming data
> >>> alongside traditional non-streaming jobs. This means that some jobs need
> >>> to be run by the resource manager, and other jobs need to be submitted to
> >>> the stream-processing system. Therefore, processing needs to be forked or
> >>> multiplexed at some point in the life-cycle.
> >>>
> >>> There are two places where this can be done: workflow manager runners and
> >>> the resource manager.
> >>> Currently, I am working on building workflow runners and doing the
> >>> job-multiplexing there, because this cuts out one superfluous step for
> >>> the streaming jobs (namely, going to the resource manager before being
> >>> routed).
> >>>
> >>> Are there any comments on this approach, or does this approach make
> >>> sense?
> >>>
> >>> -Michael Starch
