Hi All,

With the experience of adapting Thrift data models for Airavata over the past 
couple of years, it's time for us to revisit them. The most persistent 
criticism has been that the data models are complex. In addition, the data 
models and architecture evolved in parallel, and the implementations did not 
always match the intended models. To address these issues, let's first discuss 
the minimal required data models.

We need to conform the models to the general principle of an Experiment 
deriving into a Process or a Workflow. For a single application, a Process can 
be derived directly from the Experiment details. For workflows, multiple 
Processes are created. Executing a Process leads to the creation of multiple 
Tasks. A Task is a generic type that is enacted at run time based on a generic 
execution sequence: environment setup, input data staging, application 
execution and monitoring, output data staging, and environment cleanup.
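To make the hierarchy concrete, here is a minimal Thrift IDL sketch of the Experiment/Process/Task relationship described above. All struct names, field names, and numbering here are illustrative assumptions, not the actual draft definitions; please refer to the linked IDLs for the real ones.

```thrift
// Illustrative sketch only; the draft IDLs in the repository are authoritative.

// Tasks follow the generic execution sequence.
enum TaskType {
  ENV_SETUP,
  DATA_INPUT_STAGING,
  APPLICATION_EXECUTION,
  DATA_OUTPUT_STAGING,
  ENV_CLEANUP
}

struct Task {
  1: required string taskId,
  2: required TaskType taskType,
  3: optional string taskDetail
}

// Executing a Process creates multiple Tasks.
struct Process {
  1: required string processId,
  2: optional list<Task> tasks
}

// A single-application Experiment derives one Process;
// a workflow Experiment derives multiple Processes.
struct Experiment {
  1: required string experimentId,
  2: required string experimentName,
  3: optional list<Process> processes
}
```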

Please review the initial draft:
https://github.com/apache/airavata/tree/master/thrift-interface-descriptions/airavata-data-models

Assume lazy consensus and update the models; let's iteratively review and 
update these Thrift IDLs. We don't yet need to dive into code generation until 
these are close to final.

@Supun, maybe you can start thinking about the database representation of these 
models; assume the details will change but the general structure should remain.

Cheers,
Suresh
