Thanks for the pointer and article reference, Bruce!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Mattmann, Ph.D.
Chief Architect
Instrument Software and Science Data Systems Section (398)
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 168-519, Mailstop: 168-527
Email: [email protected]
WWW:  http://sunset.usc.edu/~mattmann/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adjunct Associate Professor, Computer Science Department
University of Southern California, Los Angeles, CA 90089 USA
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++






-----Original Message-----
From: Bruce Barkstrom <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, August 6, 2014 8:52 AM
To: "[email protected]" <[email protected]>
Subject: Re: Multiple Processing Paradigms at Once

>I don't have a deep enough insight into the OODT architecture
>to know for sure how that would work.  From an architectural
>point of view, it would be useful to have UML synchronization
>diagrams that show the threads of computation and the message
>passing that you need to undertake for this work.  It's a familiar
>problem in the embedded systems world (avionics, multi-computer
>automobile control systems with 80 computers per car).  A good
>reference on this kind of design work is Bruce Douglass' "Real-Time
>UML".  It might also be useful to find a copy of Klein, et al., 1993:
>"A Practitioner's Handbook for Real-Time Analysis: Guide to
>Rate Monotonic Analysis for Real-Time Systems", Kluwer.
>It isn't clear whether you want to do the design for the hard
>real-time kind of system (where the system fails if messages
>aren't received within a set deadline) or less stringent kinds of
>systems.  Exception handling gets to be a really "interesting"
>part of these kinds of designs.
>
>Bruce B.
>
>
>On Wed, Aug 6, 2014 at 11:29 AM, Michael Starch <[email protected]>
>wrote:
>
>> All,
>>
>> I am working on upgrading OODT to allow it to process streaming data
>> alongside traditional non-streaming jobs.  This means that some jobs
>> need to be run by the resource manager, and other jobs need to be
>> submitted to the stream-processing system.  Therefore, processing needs
>> to be forked or multiplexed at some point in the life-cycle.
>>
>> There are two places where this can be done: the workflow manager's
>> runners and the resource manager.  Currently, I am working on building
>> workflow runners and doing the job multiplexing there, because this
>> cuts out one superfluous step for streaming jobs (namely, going to the
>> resource manager before being routed).
>>
>> Are there any comments on this approach, or does this approach make
>> sense?
>>
>> -Michael Starch
>>
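The fork point Michael describes (batch jobs routed to the resource manager, streaming jobs sent directly to the stream-processing engine) could be sketched at the workflow-runner level roughly as follows. This is a minimal illustration only: the class and queue names are hypothetical and do not reflect actual OODT APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of multiplexing jobs at the workflow-runner level.
// Names (MultiplexingRunner, Job, the queues) are illustrative, not OODT APIs.
public class MultiplexingRunner {
    enum JobType { BATCH, STREAMING }

    static class Job {
        final String id;
        final JobType type;
        Job(String id, JobType type) { this.id = id; this.type = type; }
    }

    // Stand-ins for the two downstream destinations.
    final List<String> resourceManagerQueue = new ArrayList<>();
    final List<String> streamEngineQueue = new ArrayList<>();

    // The fork point: streaming jobs skip the resource-manager hop and go
    // straight to the stream-processing engine; batch jobs take the
    // traditional path through the resource manager.
    void submit(Job job) {
        if (job.type == JobType.STREAMING) {
            streamEngineQueue.add(job.id);
        } else {
            resourceManagerQueue.add(job.id);
        }
    }
}
```

Doing the split here, rather than in the resource manager, is what saves the extra routing step for streaming jobs; the trade-off is that the runner must now know about both back ends.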
