I am of two minds on the "configure experiment" method. On the one hand,
most of the gateways we are taking use cases from already have a local
persistence mechanism for this, so we don't have a driving use case. And
I'm sure there will be implementation subtleties. On the other hand, it
would be a good feature to provide for new gateways. Telling them to go
implement a DB for this themselves would be bad practice, especially when
we should have the experience to do it correctly.
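
To make the idea concrete, here is a minimal sketch of what server-side "configure" semantics could look like, with the registry holding the partially built experiment between client sessions. All names here (ExperimentStore, configure_experiment, etc.) are hypothetical illustrations, not the actual Airavata API:

```python
class ExperimentStore:
    """Stands in for a server-side registry holding experiments by id,
    so portals need no intermediate persistence of their own."""

    def __init__(self):
        self._experiments = {}
        self._next_id = 0

    def create_experiment(self, name):
        exp_id = "exp-%d" % self._next_id
        self._next_id += 1
        self._experiments[exp_id] = {
            "name": name, "inputs": {}, "scheduling": {}, "launched": False,
        }
        return exp_id

    def configure_experiment(self, exp_id, inputs=None, scheduling=None):
        """Merge partial configuration into the stored experiment;
        may be called repeatedly across long user sessions."""
        exp = self._experiments[exp_id]
        if exp["launched"]:
            raise ValueError("cannot configure a launched experiment")
        if inputs:
            exp["inputs"].update(inputs)
        if scheduling:
            exp["scheduling"].update(scheduling)

    def launch_experiment(self, exp_id):
        exp = self._experiments[exp_id]
        exp["launched"] = True
        return exp


store = ExperimentStore()
eid = store.create_experiment("amber-run")
store.configure_experiment(eid, inputs={"topology": "prmtop"})    # first session
store.configure_experiment(eid, scheduling={"host": "stampede"})  # later session
exp = store.launch_experiment(eid)
```

The point of the sketch is only that each configure call merges into server-side state, so the client can load the object up slowly and launch when ready.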

The AMBER portal could be a good use case. I think this is currently on
the "nice to have" list.
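
On the clone question below, the override semantics Suresh describes (anything not explicitly passed stays as in the original) could be sketched like this. The function name and the dict-based experiment shape are illustrative assumptions, not the real API:

```python
import copy


def clone_experiment(original, overrides=None):
    """Deep-copy an experiment; fields present in `overrides` replace the
    originals, and everything else carries over unchanged. Nested sections
    such as inputs are merged key-by-key rather than replaced wholesale."""
    clone = copy.deepcopy(original)
    for key, value in (overrides or {}).items():
        if isinstance(clone.get(key), dict) and isinstance(value, dict):
            clone[key].update(value)  # merge nested section, e.g. inputs
        else:
            clone[key] = value
    return clone


original = {
    "name": "run-1",
    "inputs": {"steps": 1000, "temp": 300},
    "host": "trestles",
}
rerun = clone_experiment(original, {"host": "stampede", "inputs": {"temp": 310}})
# rerun keeps steps=1000 but runs on the new host with temp=310
```

This matches the CIPRES-style case of 20-30 inputs where only one or two values change between runs: the user passes just the deltas.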


Marlon

On 1/19/14 10:58 AM, Suresh Marru wrote:
> I see Amila's point, and it can be argued that the Airavata client can 
> fetch an experiment, modify what is needed, and re-submit it as a new 
> experiment.
>
> But I agree with Saminda: if an experiment has dozens of inputs and, say, 
> only a parameter or the scheduling info needs to be changed, cloning makes 
> it useful. The challenge, though, is how to communicate what needs to be 
> changed. Should we assume anything not explicitly passed remains as in the 
> original experiment, and that the values passed are overridden? 
>
> I think the word clone seems fine and also aligns with the Java Clone 
> interpretation [1].
>
> This brings up another question: should there be only create, launch, clone, 
> and terminate experiments, or should we also have a configure experiment? The 
> purpose of configure is to let the client load up the object gradually, as it 
> gets the information, and only launch it when it is ready. That way portals 
> need not have intermediate persistence for these objects, and users can 
> build an experiment over long sessions. Thoughts?
>
> Suresh
> [1] - http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#clone()
>
> On Jan 17, 2014, at 2:05 PM, Saminda Wijeratne <[email protected]> wrote:
>
>> IMO (correct me if I'm wrong), an experiment will not define new 
>> descriptors but rather point to existing descriptor(s).
>>
>> Experiment = Application + Input value(s) for application + Configuration 
>> data for managing job
>>
>> Application = Service Descriptor + Host Descriptor + Application Descriptor
>>
>> Thus an experiment involves quite a lot of data that needs to be 
>> specified, so it is easier to make a copy of it than to ask the user to 
>> specify all of the data again when there are only a few changes compared 
>> to the original experiment. Perhaps the confusion here is the word 
>> "clone"?
>>
>>
>> On Fri, Jan 17, 2014 at 10:20 AM, Amila Jayasekara <[email protected]> 
>> wrote:
>> This seems like adding a new experiment definition (i.e., new descriptors).
>> As far as I understood, this should be handled at the UI layer (?). For the 
>> backend it will just be new descriptor definitions (?).
>> Maybe I am missing something.
>>
>> - AJ
>>
>>
>> On Fri, Jan 17, 2014 at 1:15 PM, Saminda Wijeratne <[email protected]> 
>> wrote:
>> This was in accordance with the CIPRES use-case scenario, where users want 
>> to rerun their tasks with a subset of slightly different 
>> parameters/inputs. This is particularly useful for them because their 
>> tasks often include 20-30 parameters.
>>
>>
>> On Fri, Jan 17, 2014 at 6:49 AM, Sachith Withana <[email protected]> wrote:
>> Hi Amila,
>>
>> The use of the word "cloning" is misleading.
>>
>> Saminda suggested that we would need to run the application on a different 
>> host (based on the user's intuition of host availability/efficiency) while 
>> keeping all the other variables constant (input changes are also allowed). 
>> As an example: if a job keeps failing on one host, the user should be 
>> allowed to submit the job to another host. 
>>
>> We should come up with a different name for this scenario. 
>>
>>
>> On Thu, Jan 16, 2014 at 11:36 PM, Amila Jayasekara <[email protected]> 
>> wrote:
>>
>>
>>
>> On Thu, Jan 16, 2014 at 10:58 AM, Sachith Withana <[email protected]> 
>> wrote:
>> Hi All,
>>
>> This is the summary of the meeting we had Wednesday( 01/16/14) on the 
>> Orchestrator.
>>
>> Orchestrator Overview
>> I introduced the Orchestrator; the presentation is attached.
>>
>> Adding Job Cloning capability to the Orchestrator API
>> Saminda suggested that we should have a way to clone an existing job and 
>> run it with different inputs, on a different host, or both. Here's the 
>> JIRA for that [1].
>>
>> I didn't quite understand what cloning does. Once the descriptors are set 
>> up, we can run an experiment with different inputs as many times as we 
>> want. So what is the actual need for cloning?
>>
>> Thanks
>> Thejaka Amila
>>  
>>
>> Gfac embedded vs Gfac as a service
>> We have implemented the embedded Gfac and decided to use it for now. 
>> Gfac as a service is a long-term goal. Until the Orchestrator is 
>> complete, we will use the embedded Gfac. 
>>
>> Job statuses for the Orchestrator and the Gfac
>> We need to come up with multi-level job statuses: user-level, 
>> Orchestrator-level, and Gfac-level. The mapping between them is also open 
>> for discussion. We didn't come to a conclusion on the matter and will 
>> discuss this topic in an upcoming meeting. 
>>
>>
>> [1] https://issues.apache.org/jira/browse/AIRAVATA-989
>>
>> -- 
>> Thanks,
>> Sachith Withana
>>
>>
>>
>>
>>
>> -- 
>> Thanks,
>> Sachith Withana
>>
>>
>>
>>
