vivekshresta commented on pull request #260:
URL: https://github.com/apache/airavata/pull/260#issuecomment-682286257
Hi @machristie,
Thanks for reviewing the code.
- From our last conversation on the dev mailing list, I assumed we agreed that it makes sense for Airavata to worry about the storage limit, since in the future Airavata can choose between multiple StoragePreferences, or fall back to the storage preference mentioned in the GatewayResourceProfile or UserStoragePreference (which is about to be deprecated) when the storage preference id given by the gateway is invalid. That said, the gateway could also achieve these functionalities fairly easily.
I did want the validation to happen internally, but the problem I faced was that, during the experiment creation phase in Airavata, the experiment model does not carry any data about the StoragePreference in which the experiment is being created. Changing the createExperiment() method to accept another parameter would mean changes across all the gateways. From my previous discussions with the team, I also learned that in the future, similar to choosing compute preferences during experiment creation, we're going to develop new functionality where the user gets to choose the StoragePreference in which they are going to create the experiment. With that in mind, I created a new API that any gateway can invoke if it chooses to use this feature.
However, I just verified that by calling '_set_storage_id_and_data_dir(experiment)' before creating an experiment, I can set the storageId and experiment data directory internally, removing the need to pass the storageId explicitly. Basically, the public API can now become an internal API (the first sketch at the end of this comment shows roughly what I mean). I will make those changes soon.
- I did consider this. The problems with that approach are:
  1. We would check the size limit only after the experiment creation is done.
  2. Once we know the size limit is exceeded, Helix needs to communicate back to the APIServer to delete the created experiment entries and, if needed, the experiment directory.

  Considering these points, and after discussing with Dimuthu, I thought this might be the better approach when we use the 'StorageResourceAdaptor', though it does seem to complicate things.
Even if I remove the new public API and integrate the check into createExperiment(), this approach would still consume the APIServer's resources (though we're using pooled connections instead of creating a new SSH connection every time; the second sketch at the end of this comment illustrates the kind of check involved). Does it make sense to just stick with the original approach, i.e., the gateway worrying about the storage quotas?
Also, could you please elaborate a little on the transient network failure scenario in Helix?
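To make the first point concrete, here is a minimal sketch of what I mean by turning the public API into an internal one: createExperiment() resolves the storage preference and derives the experiment data directory itself, along the lines of what '_set_storage_id_and_data_dir(experiment)' does today. All class and method names below are hypothetical placeholders, not the actual Airavata/Thrift types.

```java
// Sketch only: resolve storageId and experimentDataDir inside createExperiment()
// instead of exposing a separate public API. Names are hypothetical placeholders.
import java.nio.file.Paths;
import java.util.Map;
import java.util.Optional;

public class ExperimentCreationSketch {

    // Stand-in for the real ExperimentModel; only the fields relevant to this sketch.
    static class ExperimentModel {
        String experimentId;
        String gatewayId;
        String storageId;          // resolved internally, no extra createExperiment() parameter
        String experimentDataDir;  // resolved internally as well
    }

    // Stand-in for the registered storage preferences (e.g. from the GatewayResourceProfile).
    private final Map<String, String> storageIdToRootDir;
    private final String defaultStorageId;

    ExperimentCreationSketch(Map<String, String> storageIdToRootDir, String defaultStorageId) {
        this.storageIdToRootDir = storageIdToRootDir;
        this.defaultStorageId = defaultStorageId;
    }

    // Equivalent of calling _set_storage_id_and_data_dir(experiment) before persisting:
    // fall back to the gateway's default preference when the requested id is invalid,
    // then derive the experiment data directory from the chosen storage root.
    void setStorageIdAndDataDir(ExperimentModel experiment, String requestedStorageId) {
        String storageId = Optional.ofNullable(requestedStorageId)
                .filter(storageIdToRootDir::containsKey)
                .orElse(defaultStorageId);
        experiment.storageId = storageId;
        experiment.experimentDataDir = Paths.get(
                storageIdToRootDir.get(storageId),
                experiment.gatewayId,
                experiment.experimentId).toString();
    }

    String createExperiment(ExperimentModel experiment, String requestedStorageId) {
        setStorageIdAndDataDir(experiment, requestedStorageId);
        // ... persist the experiment through the registry as usual ...
        return experiment.experimentId;
    }
}
```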
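And to illustrate the resource cost mentioned in the second point, here is a rough sketch of the kind of pre-creation quota check the APIServer would run over a pooled SSH session. It uses JSch and a plain 'du -sb' purely as stand-ins; it is not the actual 'StorageResourceAdaptor' code. Because the check happens before anything is persisted, Helix never has to roll back experiment entries.

```java
// Sketch only: check the gateway's storage usage over an already-pooled SSH session
// before createExperiment() persists anything. Not the real StorageResourceAdaptor.
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StorageQuotaCheckSketch {

    // Returns the current size (in bytes) of the gateway's data directory on the
    // storage resource, using a session borrowed from the existing SSH session pool.
    static long usedBytes(Session pooledSession, String gatewayDataDir)
            throws JSchException, IOException {
        ChannelExec channel = (ChannelExec) pooledSession.openChannel("exec");
        try {
            channel.setCommand("du -sb " + gatewayDataDir);
            channel.setInputStream(null);
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(channel.getInputStream(), StandardCharsets.UTF_8));
            channel.connect();
            String line = reader.readLine();   // e.g. "123456789   /data/gateway-x"
            if (line == null) {
                throw new IOException("du produced no output for " + gatewayDataDir);
            }
            return Long.parseLong(line.split("\\s+")[0]);
        } finally {
            channel.disconnect();              // channel is closed; the session goes back to the pool
        }
    }

    // Called from createExperiment() before any experiment entries are created.
    static void enforceQuota(Session pooledSession, String gatewayDataDir, long quotaBytes)
            throws JSchException, IOException {
        if (usedBytes(pooledSession, gatewayDataDir) >= quotaBytes) {
            throw new IllegalStateException(
                    "Storage quota exceeded for " + gatewayDataDir + "; experiment not created");
        }
    }
}
```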