Sudhakar,

What you are asking for is not a direct MFT use case. It's more like an NFS
mount of a remote file system onto a local one. MFT mainly focuses on
handling the data transfer path, not syncing data between two endpoints in
real time.

Thanks
Dimuthu
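
For reference, the workflow Sudhakar describes in the quoted message below could be sketched roughly as follows. All function and variable names here are hypothetical illustrations, not actual Airavata MFT APIs; this only shows the sequence of steps (register the working directory at launch, expose it read-only while the job runs, deregister it when the experiment ends):

```python
# Hypothetical sketch of the proposed workflow. None of these names are
# real Airavata or MFT APIs; they only illustrate the register ->
# read-only access -> deregister lifecycle described in the thread.

registered_dirs = {}  # stand-in for an MFT endpoint registry

def register_remote_dir(experiment_id, remote_path):
    """Pretend to register the experiment's remote working directory
    as a read-only MFT-accessible disk."""
    registered_dirs[experiment_id] = {"path": remote_path, "mode": "readonly"}

def deregister_remote_dir(experiment_id):
    """Revoke access once the experiment ends."""
    registered_dirs.pop(experiment_id, None)

def run_experiment(experiment_id, remote_path):
    # 1. User launches a job; Airavata defines the working directory
    #    dynamically and registers it as an accessible remote disk.
    register_remote_dir(experiment_id, remote_path)
    # 2. While the job runs, users can read/download its contents.
    can_read = registered_dirs[experiment_id]["mode"] == "readonly"
    # 3. Experiment ends: remove access, then continue with the
    #    remaining Helix tasks.
    deregister_remote_dir(experiment_id)
    return can_read

print(run_experiment("exp-001", "/scratch/exp-001"))  # True
```

The sketch keeps the access window scoped to the experiment's lifetime, which is the key difference from a persistent NFS-style mount.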

On Thu, Mar 26, 2020 at 12:29 PM Pamidighantam, Sudhakar <pamid...@iu.edu>
wrote:

> Dimuthu:
>
>
>
> Yes, the working directory on remote HPC cluster.
>
>
>
> The workflow may look like this..
>
>
>
> The user launches a job..
>
> The remote working directory, dynamically defined by Airavata during the
> launch of the experiment, is registered as an accessible remote disk
>
> The contents are made available read-only for users to read/download
>
> The directory is removed from access when the experiment ends
>
> Continue with the rest of the Helix tasks
>
> …
>
>
>
>
>
> Thanks,
>
> Sudhakar.
>
>
>
> *From: *DImuthu Upeksha <dimuthu.upeks...@gmail.com>
> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
> *Date: *Thursday, March 26, 2020 at 12:23 PM
> *To: *Airavata Dev <dev@airavata.apache.org>
> *Subject: *Re: MFT and data access for running jobs
>
>
>
> Sudhakar,
>
> I’m not sure whether I grasped your point about this remote working
> directory correctly. Are you talking about the working directory of the
> cluster? Could you please explain the workflow in more detail?
>
> Thanks
> Dimuthu
>
>
>
> On Thu, Mar 26, 2020 at 10:21 AM Pamidighantam, Sudhakar <pamid...@iu.edu>
> wrote:
>
> Dimuthu:
>
>
>
> When MFT becomes available, would there be a way to define the remote
> working directory as a device that provides access to the data there?
>
> As you know, this has been a long-standing need, particularly for
> long-running jobs.
>
>
>
> Thanks,
>
> Sudhakar.
>
>
>
>