Hi Dimuthu,

 

Thanks for sending this very thoughtful document. A couple of comments:

 

* Use of Kafka instead of RabbitMQ is interesting. Can you say more about how 
this approach can handle Kafka client failures? For RabbitMQ, for example, 
there is the simple “Work Queue” approach in which the broker pushes a task to 
a worker. The task remains in the queue until the worker sends an acknowledgement 
that the job has been handled, not just received. “Handled” may mean, for 
example, that the job has been submitted to an external batch scheduler over 
SSH, which may require some retries, etc. If the worker crashes before the 
job has been submitted, then the broker can resend the message to another 
worker. I’m wondering how your Kafka-based solution would handle the same 
issue. 
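
Concretely, the pattern I have in mind looks roughly like the sketch below. It is a toy in-memory simulation of at-least-once redelivery, not RabbitMQ’s actual API — the class and method names are illustrative only:

```python
import collections

class WorkQueue:
    """Toy broker: a delivered task stays 'in flight' until the worker
    acks it, and is requeued if the worker dies before acking."""

    def __init__(self):
        self.pending = collections.deque()
        self.in_flight = {}
        self._next_tag = 0

    def publish(self, task):
        self.pending.append(task)

    def deliver(self):
        # The broker pushes a task to a worker; the task is not dropped,
        # only moved to the in-flight set, keyed by a delivery tag.
        task = self.pending.popleft()
        self._next_tag += 1
        self.in_flight[self._next_tag] = task
        return self._next_tag, task

    def ack(self, tag):
        # Worker confirms the task was *handled* (e.g. submitted over SSH),
        # not merely received; only now is it removed for good.
        del self.in_flight[tag]

    def requeue_dead_worker(self, tag):
        # Worker crashed before acking: requeue so another worker gets it.
        self.pending.append(self.in_flight.pop(tag))

q = WorkQueue()
q.publish("submit-job-42")
tag, task = q.deliver()
# The worker crashes before sending the ack...
q.requeue_dead_worker(tag)
tag2, task2 = q.deliver()  # same task, redelivered to another worker
q.ack(tag2)
```

The question is what the equivalent mechanism would be in your Kafka design — presumably something involving when consumer offsets get committed.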

 

* A simpler but more common failure is communicating with external resources. A 
task executor may need to SSH to a remote resource, and that connection can fail 
(usually because the resource is slow to respond). How do you handle this case?
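
One common answer is a retry-with-backoff wrapper around the remote call. A minimal sketch — the helper name is illustrative, and the flaky function stands in for a real SSH submission:

```python
import time

def with_retries(op, attempts=3, base_delay=1.0,
                 retriable=(TimeoutError, ConnectionError)):
    """Run op(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except retriable:
            if attempt == attempts - 1:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)

# Illustrative stand-in for an SSH submission that succeeds on the 3rd try.
calls = {"n": 0}
def flaky_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("resource slow to respond")
    return "job-submitted"

result = with_retries(flaky_submit, base_delay=0.0)
# result == "job-submitted" after two transient failures
```

The interesting design question is what happens when retries are exhausted: does the task executor fail the experiment, or hand the task back to the queue for another executor?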

 

* Your design focuses on Airavata’s experiment execution handling. Airavata’s 
registry is another important component: this is where experiment objects get 
persistently stored. The registry stores metadata about both “live” experiments 
that are currently executing as well as archived experiments that have 
completed.

 

How would you extend your architecture to include the registry?

 

Marlon

 

 

From: "[email protected]" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Monday, October 30, 2017 at 10:45 AM
To: "[email protected]" <[email protected]>
Subject: Linked Container Services for Apache Airavata Components - Phase 2 - 
Initial Prototype

 

Hi All, 

 

Based on the analysis in Phase 1, over the past two weeks I have been working on 
implementing a task execution workflow following the microservices deployment 
pattern, with Kubernetes as the deployment platform. 

 

Please find attached a design document that explains the components and the 
messaging interactions between them. Based on that design, I have implemented 
the following components:

 

1. A set of microservices that compose the workflow

2. A simple Web Console to deploy and monitor workflows on the framework

 

I used Kafka as the primary messaging medium among the microservices due to its 
simplicity and powerful features like partitions and consumer groups.
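
To illustrate what consumer groups buy us: each partition of a topic is owned by exactly one consumer in a group, so adding workers divides the load without duplicate processing. A toy round-robin assignment sketch (not the actual Kafka client API — Kafka’s built-in assignors work similarly in spirit):

```python
def assign_partitions(partitions, consumers):
    """Round-robin a topic's partitions over a consumer group:
    each partition is owned by exactly one consumer, so every
    message is processed once per group, however many workers run."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 6 partitions spread over a group of 3 task executors.
a = assign_partitions(list(range(6)), ["worker-1", "worker-2", "worker-3"])
# a == {"worker-1": [0, 3], "worker-2": [1, 4], "worker-3": [2, 5]}
```

If a worker drops out of the group, its partitions are rebalanced onto the remaining workers, which is how the framework keeps consuming tasks through executor failures.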

 

I have attached a user guide so that you can install and try this on your local 
machine. The source code for each component can be found at [1].

 

Please share your ideas and suggestions.

 

Thanks

Dimuthu

 

[1] 
https://github.com/DImuthuUpe/airavata/tree/master/sandbox/airavata-kubernetes

[2] 
https://docs.google.com/document/d/1R1xrmuPldHiWVDn4xNVay9Vnxn9FODQZXtF55JxJpSY/edit?usp=sharing

[3] 
https://docs.google.com/document/d/1A5eRIZiuUj4ShZVMS0NdAxjAxtOTZXculaYDCZ7IMQ8/edit?usp=sharing
