+1 for the feature as such, and I also agree that in this case the devil is in the implementation details. Please be careful about changing any semantics of For each, or of workflow-level vs. gfac-level parallelism in general. I would rather see new system components defined for this purpose than overload workflow definitions and handle parallelism implicitly.
Suresh

On Sep 11, 2012, at 8:47 PM, Lahiru Gunathilake <glah...@gmail.com> wrote:

> This sounds like an awesome feature. Can you please start a discussion
> on how you are going to implement these features in xbaya and gfac core?
>
> I think the most important part is the implementation details.
>
> Lahiru
>
> On Thu, Sep 6, 2012 at 10:55 AM, Raminderjeet Singh <
> raminderjsi...@gmail.com> wrote:
>
>> Hi Dev,
>>
>> I came across a few requirements for For each. I would like to discuss
>> these before I create a few JIRA tasks.
>>
>> 1. Currently For each takes the array list and runs one job per array
>> element, no matter how large the array is. On grid resources it may not
>> be optimal to run a lot of jobs in parallel, and at times we can also
>> run into Airavata software limits. My idea is to make For each
>> configurable with a limit on the number of jobs run in parallel. We can
>> split the jobs into batches based on the limit defined by the workflow
>> composer (see the batching sketch appended after the thread).
>>
>> 2. There can be a case where a few jobs fail during submission
>> (connection failure or another middleware failure) while the rest of
>> the jobs submit fine. We need to find a way to handle such failures. In
>> the case of a connection failure we can retry the job submission, but
>> if resubmission fails, we need to either cancel the rest of the jobs or
>> decide to accept partial results.
>>
>> 3. Handle partial application failures. If I ran 20 jobs and only one
>> did not produce results, and the gateway is OK with the results of the
>> 19 jobs, it can run the 1 remaining job as a separate process. We can
>> add an attribute to the contract specifying whether the check is strict
>> or not, along with a few other attributes based on advice.
>>
>> 4. Show some information about the number of jobs
>> running/completed/waiting on the node.
>>
>> Thanks
>> Raminder
>
>
> --
> System Analyst Programmer
> PTI Lab
> Indiana University
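
For concreteness, point 1 above amounts to bounding the fan-out of For
each. Below is a minimal sketch of the batching idea in plain Java,
assuming a hypothetical JobSubmitter interface as a stand-in for a GFac
submission call; none of these names are actual Airavata APIs.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ForEachBatcher {

    // Hypothetical stand-in for a single job submission; not an
    // actual GFac API.
    interface JobSubmitter {
        String submit(String input) throws Exception;
    }

    // Run one job per array element, but never more than
    // maxParallelJobs at a time. The fixed-size pool is what enforces
    // the parallelism limit set by the workflow composer.
    static List<String> runForEach(List<String> inputs,
                                   int maxParallelJobs,
                                   JobSubmitter submitter) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(maxParallelJobs);
        List<Future<String>> futures = new ArrayList<>();
        for (String input : inputs) {
            futures.add(pool.submit(() -> submitter.submit(input)));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // rethrows any submission failure
        }
        return results;
    }
}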
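
Points 2 and 3 can be read together as a retry-then-decide policy:
retry transient submission failures, and let a strict/lenient flag on
the node's contract decide whether a persistent failure aborts the
whole For each or yields partial results. A sketch under the same
assumptions (illustrative names only, not an Airavata API):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

public class PartialResultPolicy {

    // Retry a single submission a few times before giving up; a
    // transient connection failure (point 2) is often gone on the
    // next attempt.
    static <T> T submitWithRetry(Callable<T> job, int maxRetries)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return job.call();
            } catch (Exception e) {
                last = e; // e.g. a connection failure; retry
            }
        }
        throw last; // resubmission kept failing
    }

    // Collect results under a strict or lenient contract (point 3):
    // strict = any persistent failure fails the whole node; lenient =
    // failed elements are skipped and the caller gets partial results,
    // leaving the failed job to be rerun as a separate process.
    static <T> List<T> collect(List<Callable<T>> jobs, boolean strict,
                               int maxRetries) throws Exception {
        List<T> results = new ArrayList<>();
        for (Callable<T> job : jobs) {
            try {
                results.add(submitWithRetry(job, maxRetries));
            } catch (Exception e) {
                if (strict) {
                    throw e; // strict contract: abort the For each
                }
                // lenient contract: accept partial results
            }
        }
        return results;
    }
}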