We have a legacy system with home-grown workflows defined in XPDL, running
across several dozen nodes. Resources are mapped in XML definition files,
and the availability of a resource for a given task is managed by a
custom-written job scheduler. Jobs report status via callback/JMS
messages, and job completion drives the next steps in the workflow.

Into this ecosystem we now need to bring some Hadoop/Spark jobs.
I am tentatively exploring Mesos to manage this disparate set of clusters.
How can I maintain a dynamic count of executors, and how can I provide
dynamic workflow orchestration to pull off the above architecture in the
Mesos world? Sorry for the noob question!
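For context, the kind of dynamic executor scaling I have in mind is Spark's dynamic allocation. A minimal sketch of how I assume a job would be submitted against a Mesos master (the master URL, executor bounds, and job file are placeholders, not our real setup):

```shell
# Hypothetical spark-submit against a Mesos master.
# Dynamic allocation lets Spark grow/shrink the executor count
# between the min/max bounds based on pending task load.
# Note: dynamic allocation also requires the external shuffle
# service to be running on each agent node.
spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  my_job.py
```

Is this the right mechanism, or does one hand executor management over to Mesos entirely?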
