Re: Understanding components in Airavata

2017-12-15 Thread DImuthu Upeksha
Managed to fix this by adding the job manager commands in the client code:

// Use a FORK resource job manager for local execution and point it at /bin/
ResourceJobManager resourceJobManager = RegisterSampleApplicationsUtils
        .createResourceJobManager(ResourceJobManagerType.FORK, null, null, null);
resourceJobManager.setJobManagerBinPath("/bin/");

// Register "sh" as the SUBMISSION command so getSubmitCommand can build the submit line
Map<JobManagerCommand, String> jobManagerCommandStringMap = new HashMap<>();
jobManagerCommandStringMap.put(JobManagerCommand.SUBMISSION, "sh");
resourceJobManager.setJobManagerCommands(jobManagerCommandStringMap);

LOCALSubmission submission = new LOCALSubmission();
submission.setSecurityProtocol(SecurityProtocol.LOCAL);
submission.setResourceJobManager(resourceJobManager);
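
With /bin/ as the job manager bin path and "sh" registered for SUBMISSION, ForkJobConfiguration.getSubmitCommand (quoted below) should resolve to roughly "/bin/sh <workingDirectory>/<scriptName>", which is why the NullPointerException below no longer occurs. That is my own reading of the string concatenation in getSubmitCommand, so treat the exact command string as an assumption.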

Thanks

Dimuthu


On Fri, Dec 15, 2017 at 10:36 PM, DImuthu Upeksha <dimuthu.upeks...@gmail.com> wrote:

> Hi Suresh,
>
> I'm getting the following error when launching the experiment:
>
> [ERROR] Thread Thread[pool-41-thread-13,5,main] died
> java.lang.NullPointerException: null
> at org.apache.airavata.gfac.impl.job.ForkJobConfiguration.getSubmitCommand(ForkJobConfiguration.java:85)
> at org.apache.airavata.gfac.impl.LocalRemoteCluster.submitBatchJob(LocalRemoteCluster.java:60)
> at org.apache.airavata.gfac.impl.task.LocalJobSubmissionTask.execute(LocalJobSubmissionTask.java:89)
> at org.apache.airavata.gfac.impl.GFacEngineImpl.executeTask(GFacEngineImpl.java:814)
> at org.apache.airavata.gfac.impl.GFacEngineImpl.executeJobSubmission(GFacEngineImpl.java:510)
> at org.apache.airavata.gfac.impl.GFacEngineImpl.executeTaskListFrom(GFacEngineImpl.java:386)
> at org.apache.airavata.gfac.impl.GFacEngineImpl.executeProcess(GFacEngineImpl.java:286)
> at org.apache.airavata.gfac.impl.GFacWorker.executeProcess(GFacWorker.java:227)
> at org.apache.airavata.gfac.impl.GFacWorker.run(GFacWorker.java:86)
> at org.apache.airavata.common.logging.MDCUtil.lambda$wrapWithMDC$0(MDCUtil.java:40)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> The code point that throws the NPE is:
>
> @Override
> public RawCommandInfo getSubmitCommand(String workingDirectory, String forkFilePath) {
>     return new RawCommandInfo(this.installedPath +
>             jobManagerCommands.get(JobManagerCommand.SUBMISSION).trim() + " " +
>             workingDirectory + File.separator +
>             FilenameUtils.getName(forkFilePath));
> }
>
> When I debugged, there was no job manager command registered for the SUBMISSION type.
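>
> As an aside, a guard along these lines (just a sketch against the snippet above, not a patch to the actual class) would surface the misconfiguration more clearly than an NPE:
>
> // Hypothetical guard: fail with a descriptive error if no SUBMISSION command is registered
> String submitCommand = jobManagerCommands == null
>         ? null : jobManagerCommands.get(JobManagerCommand.SUBMISSION);
> if (submitCommand == null) {
>     throw new IllegalStateException(
>             "No SUBMISSION job manager command configured for this resource job manager");
> }
> return new RawCommandInfo(this.installedPath + submitCommand.trim() + " " +
>         workingDirectory + File.separator + FilenameUtils.getName(forkFilePath));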
>
> This is the client code that I'm working on [1]. Please let me know if I
> have to add any other configuration when creating these resources.
>
> [1] https://gist.github.com/DImuthuUpe/3e31f1a5b64cf258bb6129ee848d1991
>
> Thanks
> Dimuthu
>
> On Fri, Dec 15, 2017 at 10:19 PM, Suresh Marru wrote:
>
>> Hi Dimuthu,
>>
>> Since Airavata supports different types of computational resources, from
>> simple local executions to batch systems to clouds, the Resource Job Manager
>> specifies the type of job submission executed on a given resource, the
>> protocol used, and so forth [2].
>>
>> The job manager commands are particularly relevant to batch schedulers
>> [1] on HPC clusters [3]. These wrap the basic commands used to interact with
>> the scheduler to queue jobs, check job statuses, cancel jobs, and so forth.
>> Different schedulers use different commands, and these structs help capture
>> them.
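>>
>> For a concrete illustration, a SLURM-style resource could be captured roughly like this. It is only a sketch: the bin path and the enum constants other than SUBMISSION (JOB_MONITORING, DELETION) are assumptions based on the compute resource model, not tested code.
>>
>> // Hypothetical SLURM resource job manager (enum constants and bin path assumed)
>> ResourceJobManager slurmJobManager = RegisterSampleApplicationsUtils
>>         .createResourceJobManager(ResourceJobManagerType.SLURM, null, null, null);
>> slurmJobManager.setJobManagerBinPath("/usr/bin/");   // assumed install location of the SLURM tools
>>
>> Map<JobManagerCommand, String> slurmCommands = new HashMap<>();
>> slurmCommands.put(JobManagerCommand.SUBMISSION, "sbatch");      // queue a job
>> slurmCommands.put(JobManagerCommand.JOB_MONITORING, "squeue");  // check job status
>> slurmCommands.put(JobManagerCommand.DELETION, "scancel");       // cancel a job
>> slurmJobManager.setJobManagerCommands(slurmCommands);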
>>
>> Not sure if this is what you are looking for. If you have any particular
>> questions related to these within GFac, maybe I can point you to those
>> implementations and elaborate.
>>
>> Suresh
>>
>> [1] - https://github.com/apache/airavata/blob/master/thrift-interface-descriptions/data-models/resource-catalog-models/compute_resource_model.thrift#L85-L92
>> [2] - https://github.com/apache/airavata/blob/master/thrift-interface-descriptions/data-models/resource-catalog-models/compute_resource_model.thrift#L113-L118
>> [3] - https://en.wikipedia.org/wiki/Job_scheduler#Batch_queuing_for_HPC_clusters
>>
>>
>> On Dec 15, 2017, at 11:31 AM, DImuthu Upeksha wrote:
>>
>> Hi Folks,
>>
>> While I was trying to update the Java sample clients for Airavata,
>> I came across some areas that were unclear. I'll use this thread to
>> get them clarified.
>>
>> What is the role of Resource Job Manager and Job Manager Command?
>>
>> Thanks
>> Dimuthu
>>
>>
>>
>

