[jira] [Commented] (AIRAVATA-465) ability to interact with outputs from individual tasks while executing

2014-07-30 Thread Pamidighantam, Sudhakar V (JIRA)

[ https://issues.apache.org/jira/browse/AIRAVATA-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080260#comment-14080260 ]

Pamidighantam, Sudhakar V commented on AIRAVATA-465:


I will be out of office until Aug 15, 2014 with limited email access. Please 
contact h...@ncsa.illinois.edu if this requires immediate attention. For 
SEAGrid/GridChem related issues please use the consulting portal at 
https://www.gridchem.org/consult/

Have a pleasant Summer 2014.

Thanks,
Sudhakar.


 ability to interact with outputs from individual tasks while executing
 --

 Key: AIRAVATA-465
 URL: https://issues.apache.org/jira/browse/AIRAVATA-465
 Project: Airavata
  Issue Type: New Feature
  Components: Workflow Tracking
Affects Versions: 0.13
Reporter: Sudhakar Pamidighantam
Assignee: Lahiru Gunathilake
 Fix For: 0.15 


 It will be useful to interact with the output of an individual task in a 
 workflow while the task is executing. Particularly for long-running (hours) 
 tasks, some intermediate status echoes will be very useful for the user to be 
 assured that the task is making the right progress. In case of unsatisfactory 
 progress, the user may want to stop/pause/kill the workflow and resubmit with 
 modified inputs. If there is a way to modify the inputs and the task rereads 
 them at the next cycle (step) during the execution, then the workflow can be 
 steered in the right (desired) direction by the user.
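
 As a rough sketch of the steering idea described above, the following standalone Java
 snippet tails an intermediate output file and re-reads a steering file between checks;
 the file names and the polling interval are assumptions for illustration, not part of
 Airavata.

 import java.io.IOException;
 import java.io.InputStream;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.List;
 import java.util.Properties;

 // Hypothetical sketch: poll a task's intermediate output and re-read a steering
 // file so a long-running task can pick up modified inputs at its next cycle.
 public class SteeringWatcher {
     public static void main(String[] args) throws IOException, InterruptedException {
         Path stdout = Paths.get("task_workdir/intermediate.out");      // assumed location
         Path steering = Paths.get("task_workdir/steering.properties"); // assumed location
         long lastSize = 0;
         while (true) {
             if (Files.exists(stdout) && Files.size(stdout) > lastSize) {
                 // Echo the latest line as an intermediate status update for the user.
                 List<String> lines = Files.readAllLines(stdout);
                 if (!lines.isEmpty()) {
                     System.out.println("progress: " + lines.get(lines.size() - 1));
                 }
                 lastSize = Files.size(stdout);
             }
             if (Files.exists(steering)) {
                 Properties p = new Properties();
                 try (InputStream in = Files.newInputStream(steering)) {
                     p.load(in);
                 }
                 // A cooperating task would re-read these values at its next cycle (step).
                 System.out.println("steering inputs: " + p);
             }
             Thread.sleep(30_000); // assumed polling interval
         }
     }
 }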





RE: Separate Thrift services- Code restructure

2014-10-28 Thread Pamidighantam, Sudhakar V
I am starting to look at Airavata with a background of having used an old 
snapshot in the ParamChem project.
Is there a place I can go to learn the current organization of the development?

Thanks,
Sudhakar.

From: Shameera Rathnayaka [mailto:shameerai...@gmail.com]
Sent: Tuesday, October 28, 2014 3:56 PM
To: dev
Subject: Re: Separate Thrift services- Code restructure

+1 for merging Workflow service with Orchestrator;
we do not get any advantage by keeping those two as separate services.

Thanks,
Shameera.

On Tue, Oct 28, 2014 at 4:35 PM, Raminderjeet Singh 
raminderjsi...@gmail.com wrote:
I need to move the workflow server/client out of the API server to remove extra 
dependencies on the workflow model. I am going to move the workflow server and client 
to the orchestrator server and can get rid of the server part as the next step.

Thanks
Raminder

On Tue, Oct 28, 2014 at 3:49 PM, Suresh Marru 
sma...@apache.org wrote:
+ 1.

I think we can leave out the workflow service; it's probably best to embed 
it with the orchestrator, since there is so much overlap. So that leaves 3 services:

API Server - Client
Orchestrator Server - Client
GFac Server - Client

Suresh

On Oct 28, 2014, at 3:32 PM, Raminder Singh 
raminderjsi...@gmail.com wrote:

 Hi All,

 I am fixing AIRAVATA-1471 to create separate distributions for all the Thrift 
 services in Airavata so that we can run them all in separate JVMs and dockerize 
 (www.docker.com) the servers. In this exercise, I found we don't have client 
 stubs for several components in separate artifacts: the Orchestrator client is 
 part of the Orchestrator service, the GFac client is part of orchestrator-core, 
 and the Workflow server and client are part of the Airavata API server and 
 client. To be consistent with the API server and reduce the Maven dependency 
 tree, I am going to create an airavata-<component>-stubs package and add each 
 component's client to that project. I need to move the code and change 
 dependencies, etc. Please let me know if there are any objections. If not, I 
 will go ahead tomorrow, make the changes, and commit them after testing.

 Thanks
 Raminder




--
Best Regards,
Shameera Rathnayaka.

email: shameera AT apache.org, shameerainfo AT gmail.com
Blog : http://shameerarathnayaka.blogspot.com/


Re: access to wiki

2014-11-24 Thread Pamidighantam, Sudhakar V
Yes, it is pamidigs. 

Thanks,
Sudhakar.
On Nov 24, 2014, at 1:53 PM, Marlon Pierce marpi...@iu.edu wrote:

 Hi Sudhakar, did you create a wiki account?  Please send me your username and 
 I'll give you the required permissions.
 
 Thanks--
 
 Marlon
 
 On 11/24/14, 2:41 PM, Pamidighantam, Sudhakar V wrote:
 I would like to deposit some scripts toward generalization in app catalog 
 and gfac design/implementation and would need access to the wiki.
 Please let me know when this is ready.
 
 Thanks,
 Sudhakar.
 



Re: access to wiki

2014-11-24 Thread Pamidighantam, Sudhakar V
Thanks Marlon.

I will try and add these scripts under use cases as child pages.
If there is a more appropriate place, let me know. 
Thanks,
Sudhakar.

On Nov 24, 2014, at 2:19 PM, Marlon Pierce marpi...@iu.edu wrote:

 You should now have required permissions. Let me know if you have problems.
 
 Marlon
 
 On 11/24/14, 3:14 PM, Pamidighantam, Sudhakar V wrote:
 Yes. it is pamidigs.
 
 Thanks,
 Sudhakar.
 On Nov 24, 2014, at 1:53 PM, Marlon Pierce marpi...@iu.edu wrote:
 
 Hi Sudhakar, did you create a wiki account?  Please send me your username 
 and I'll give you the required permissions.
 
 Thanks--
 
 Marlon
 
 On 11/24/14, 2:41 PM, Pamidighantam, Sudhakar V wrote:
 I would like to deposit some scripts toward generalization in app catalog 
 and gfac design/implementation and would need access to the wiki.
 Please let me know when this is ready.
 
 Thanks,
 Sudhakar.
 



Fwd: [sgg-l] XBaya Quick Start Tutorial is in Airavata wiki

2014-12-01 Thread Pamidighantam, Sudhakar V


Begin forwarded message:

From: Marlon Pierce marpi...@iu.edu
Subject: Re: [sgg-l] XBaya Quick Start Tutorial is in Airavata wiki
Date: December 1, 2014 at 10:14:52 AM CST
To: sg...@list.indiana.edu
Reply-To: sg...@list.indiana.edu

Please take this to dev@airavata. Looks like

* Some services (Derby) were not properly shut down from a previous test, and/or

* Not all services were started correctly (zookeeper)

Marlon

On 12/1/14, 11:05 AM, Pamidighantam, Sudhakar V wrote:
The version was from just before the break. I get the following errors…

 (Apache Derby 10.9.1.0 - (1344872) ,Apache Derby Network Client JDBC Driver 
10.9.1.0 - (1344872)).
file:/Users/spamidig/Applications/XBayaDocTest/airavata/apache-airavata-server-0.14-SNAPSHOT/bin/airavata-server.properties
Mon Dec 01 09:48:19 CST 2014 : Could not listen on port 1527 on host 0.0.0.0:
 java.net.BindException: Address already in use
[INFO] Database already created for App Catalog!
[INFO] Starting Airavata API Server on Port 8930
[

and


[INFO] Initiating client connection, connectString=localhost:2181 
sessionTimeout=6000 
watcher=org.apache.airavata.api.server.AiravataAPIServer@2b27cc70
[INFO] Opening socket connection to server /0:0:0:0:0:0:0:1:2181
[WARN] Session 0x0 for server null, unexpected error, closing socket connection 
and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)
[INFO] Opening socket connection to server /fe80:0:0:0:0:0:0:1%1:2181
[ERROR] KeeperErrorCode = ConnectionLoss for /experiment-catalog
Mon Dec 01 09:48:19 CST 2014 : Apache Derby Network Server - 10.9.1.0 - 
(1344872) shutdown
[ERROR] Server Start Error:
AiravataSystemException(airavataErrorType:INTERNAL_ERROR)
at 
org.apache.airavata.api.server.AiravataAPIServer.storeServerConfig(AiravataAPIServer.java:233)
at 
org.apache.airavata.api.server.AiravataAPIServer.startAiravataServer(AiravataAPIServer.java:111)
at 
org.apache.airavata.api.server.AiravataAPIServer.start(AiravataAPIServer.java:256)
at org.apache.airavata.server.ServerMain.startAllServers(ServerMain.java:297)
at org.apache.airavata.server.ServerMain.performServerStart(ServerMain.java:146)
at org.apache.airavata.server.ServerMain.main(ServerMain.java:129)
[INFO] connected to rabbitmq: amqp://guest@127.0.0.1:5672/ for 
airavata_rabbitmq_exchange
[INFO] setting basic.qos / prefetch count to 64 for airavata_rabbitmq_exchange
[INFO] Initiating client connection, connectString=localhost:2181 
sessionTimeout=6000 
watcher=org.apache.airavata.orchestrator.server.OrchestratorServerHandler@d71adc2
[INFO] Opening socket connection to server /0:0:0:0:0:0:0:1:2181
[WARN] Session 0x0 for server null, unexpected error, closing socket connection 
and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041)
[INFO] Opening socket connection to server /127.0.0.1:2181
[WARN] Session 0x0 for server null, unexpected error, closing socket connection 
and attempting reconnect


Is there a way to correct this?

Thanks,
Sudhakar.



On Dec 1, 2014, at 9:51 AM, Eroma Abeysinghe eroma.abeysin...@gmail.com wrote:

Couldn't launch Airavata server? Did you take the latest from git?

No problem at all.
Let's edit as we go on, and thanks again for the comments and input.

On Mon, Dec 1, 2014 at 10:46 AM, Pamidighantam, Sudhakar V spami...@illinois.edu wrote:
I am still working on this, as I could not launch the server… More edits are 
possible.

Thanks,
Sudhakar.

On Dec 1, 2014, at 9:17 AM, Eroma Abeysinghe eroma.abeysin...@gmail.com wrote:

Hi,

Thank you Sudhakar and Shameera for reviewing the quick start documentation.
I modified the tutorial based on your comments and now it's at

https://cwiki.apache.org/confluence/display/AIRAVATA/XBAYA+Quick-Start+Tutorial

I ran the quick-start tutorial steps and was able to get results using the 'Add' 
application myself.
--
Thank You,
Best Regards,
Eroma




--
Thank You,
Best Regards,
Eroma



Re: [sgg-l] XBaya Quick Start Tutorial is in Airavata wiki

2014-12-01 Thread Pamidighantam, Sudhakar V
I am not sure if that alone is the issue. I see these errors in both places. 
Could we have a chat to resolve this later this afternoon?

Thanks,
Sudhakar.
On Dec 1, 2014, at 12:08 PM, Shameera Rathnayaka shameerai...@gmail.com wrote:

Hi Sudhakar,

I also face the same issue with my home network, where localhost doesn't 
correctly map to 127.0.0.1. When we build with tests, Derby starts with 127.0.0.1 
while the test server properties file has it as localhost. That is why this happens. 
The interesting thing is that when I build the source with tests at the lab it works 
without an issue.

Thanks,
Shameera.


Re: Improvements to Experiment input data model in order to support Gaussian application

2014-12-08 Thread Pamidighantam, Sudhakar V
Chathuri:
Thanks for these suggestions. One question I have is whether we should look at 
some of the input files in the set of applications currently under testing to 
come up with these requirements.
There may be additional requirements in some of the inputs. Of course we can 
also incrementally update the data structures as we test these applications in 
more depth, but I feel a significant number of application cases should be 
accommodated with each update. We may target these for RC 0.15 and, depending 
on the time available, we can look at at least a few more applications.

Comments?

Thanks,
Sudhakar.
On Dec 8, 2014, at 9:22 AM, Chathuri Wimalasena 
kamalas...@gmail.com wrote:

Hi Devs,

We are trying to add the Gaussian application using airavata-appcatalog. While 
doing that, we face some limitations of the current design.

In Gaussian there are several input files; some input files should be used when 
the job run command is generated, but some are not. Those which are not 
involved with the job run command also need to be staged to the working 
directory. Such flags are not supported in the current design.

Another interesting feature in Gaussian is that, in the input file, we can specify 
values for memory, CPU and similar options. If the input file includes those 
parameters, we need to give priority to those values instead of the values 
specified in the request.

To support these features, we need to slightly modify our Thrift IDLs, 
especially the InputDataObjectType struct.

Current struct is below.

struct InputDataObjectType {
1: required string name,
2: optional string value,
3: optional DataType type,
4: optional string applicationArgument,
5: optional bool standardInput = 0,
6: optional string userFriendlyDescription,
7: optional string metaData
}

In order to support the 1st requirement, we introduce 2 enums.

enum InputValidityType{
REQUIRED,
OPTIONAL
}

enum CommandLineType{
INCLUSIVE,
EXCLUSIVE
}

Please excuse the names. You are welcome to suggest better ones.

To support the 2nd requirement, we change the metaData field to a map keyed by 
another enum, where we define all the metadata types it can have.

enum InputMetadataType {
MEMORY,
CPU
}

So the new InputDataObjectType would be as below.

struct InputDataObjectType {
1: required string name,
2: optional string value,
3: optional DataType type,
4: optional string applicationArgument,
5: optional bool standardInput = 0,
6: optional string userFriendlyDescription,
7: optional map<InputMetadataType, string> metaData,
8: optional InputValidityType inputValid;
9: optional CommandLineType addedToCommandLine;
10: optional bool dataStaged = 0;
}
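
As a concrete illustration of how these flags could be used, here is a self-contained
Java sketch with local stand-in types that mirror the proposed struct and enums (it is
not generated Thrift code or an existing Airavata class): an auxiliary file is staged
to the working directory but excluded from the command line, and the main input's
memory/CPU values are carried in the metaData map.

import java.util.EnumMap;
import java.util.Map;

// Stand-in Java mirror of the proposed Thrift struct and enums, used only to
// illustrate how the new fields would be populated (names follow the proposal
// above, not a generated or released Airavata class).
public class GaussianInputExample {
    enum InputValidityType { REQUIRED, OPTIONAL }
    enum CommandLineType { INCLUSIVE, EXCLUSIVE }
    enum InputMetadataType { MEMORY, CPU }

    static class InputDataObjectType {
        String name;
        String value;
        InputValidityType inputValid;
        CommandLineType addedToCommandLine;
        boolean dataStaged;
        Map<InputMetadataType, String> metaData = new EnumMap<>(InputMetadataType.class);
    }

    public static void main(String[] args) {
        // An auxiliary file: must be staged, but never appears in the run command.
        InputDataObjectType auxFile = new InputDataObjectType();
        auxFile.name = "water.chk";
        auxFile.value = "/tmp/uploads/water.chk";                // assumed staging source
        auxFile.inputValid = InputValidityType.OPTIONAL;
        auxFile.addedToCommandLine = CommandLineType.EXCLUSIVE;  // keep off the command line
        auxFile.dataStaged = true;                               // but do stage it to the work dir

        // The main input carries %mem/%nprocshared, surfaced through the metaData map.
        InputDataObjectType mainInput = new InputDataObjectType();
        mainInput.name = "water.com";
        mainInput.inputValid = InputValidityType.REQUIRED;
        mainInput.addedToCommandLine = CommandLineType.INCLUSIVE;
        mainInput.metaData.put(InputMetadataType.MEMORY, "500MB");
        mainInput.metaData.put(InputMetadataType.CPU, "1");

        System.out.println(auxFile.name + " staged=" + auxFile.dataStaged
                + " onCommandLine=" + auxFile.addedToCommandLine);
        System.out.println(mainInput.name + " metaData=" + mainInput.metaData);
    }
}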

Suggestions are welcome.

Thanks,
Chathuri




Re: Improvements to Experiment input data model in order to support Gaussian application

2014-12-08 Thread Pamidighantam, Sudhakar V
I would suggest that we look at several quantum chemistry applications which 
have slight variations on the theme.  We have NWChem, Gamess, and Molpro 
examples to look at. I can send some input files and/or have a session to go 
over the relevant sections. We can do this later today. 

Thanks,
Sudhakar. 


On Dec 8, 2014, at 10:23 AM, Marlon Pierce marpi...@iu.edu wrote:

 The more examples, the better.  I'd like to find the right balance between 
 understanding the problem space and making incremental progress.
 
 Marlon
 



Re: Improvements to Experiment input data model in order to support Gaussian application

2014-12-11 Thread Pamidighantam, Sudhakar V
We cannot expect users or applications to change behavior for Airavata. It is 
up to us to enable applications and users as they are now.
As you have seen, several applications have system input parameters inside a 
master input file; they are used by the application and are also required for 
scheduling. As I was suggesting, the memory for scheduling should be higher than 
what is expected by the application.
Similarly, the time for scheduling should also be higher than what is given in the 
input, to accommodate cleanup and other post-processing as well.
Some schedulers allow soft and hard limits (admins may or may not enable them), 
and we can think of these pairs of system parameters as soft and hard memory and 
time limits.
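
Purely to illustrate the padding idea, a small Java sketch that derives the scheduler
("hard") requests from the application's own ("soft") values; the 20% and 30-minute
headroom figures are arbitrary assumptions, not a proposed policy.

// Illustrative only: pad the application's own resource requests (soft limits)
// to produce the values handed to the scheduler (hard limits).
public class SchedulingHeadroom {
    // Memory requested from the scheduler: application value plus ~20% headroom.
    static long schedulerMemoryMb(long appMemoryMb) {
        return (long) Math.ceil(appMemoryMb * 1.2);   // assumed 20% headroom
    }

    // Wall time requested from the scheduler: application estimate plus cleanup time.
    static int schedulerWallMinutes(int appWallMinutes) {
        return appWallMinutes + 30;                   // assumed 30 min for cleanup/post-processing
    }

    public static void main(String[] args) {
        long appMem = 500;   // e.g. %mem=500MB from a Gaussian input
        int appTime = 120;   // e.g. 2 hours estimated by the user
        System.out.println("soft/hard memory (MB): " + appMem + " / " + schedulerMemoryMb(appMem));
        System.out.println("soft/hard wall time (min): " + appTime + " / " + schedulerWallMinutes(appTime));
    }
}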

Thanks,
Sudhakar.


On Dec 11, 2014, at 2:02 AM, Shameera Rathnayaka shameerai...@gmail.com wrote:

Hi Amila,

According to my understanding, what this handler does is read the user-given 
configuration at run time. I have no idea whether this will affect qsub or aprun or 
other parameters. It would be better if someone explained it to me too.

We already have a way to provide these configuration parameters with the 
experiment itself by defining ComputeResourceScheduling. But there are some use 
cases, like Gaussian, where users provide these configurations within the 
input file itself. IMO here we have two options: either we ask Gaussian users to 
adapt to the Airavata way, though those configurations in the input file are still 
required by the Gaussian application (I guess; correct me if I am wrong here), 
or we use Airavata extension points to support this scenario. The handler 
addresses the second option.

Thanks,
Shameera.

On Thu, Dec 11, 2014 at 12:00 AM, Amila Jayasekara thejaka.am...@gmail.com wrote:
Also, regarding the handler that Shameera is working on ...
I guess that handler is going to change mainly qsub parameters or aprun 
parameters (correct me if I am wrong). I think it would be more useful to write 
a handler which changes any parameter in qsub, aprun or mpiexec.

Implementation-wise, I would imagine there is an abstract handler with a 
concrete implementation for each job scheduling command.

Thanks
-Amila

On Wed, Dec 10, 2014 at 9:17 AM, Marlon Pierce marpi...@iu.edu wrote:
+1 for more generalization.

We are collecting more raw material for chemistry application use cases at 
https://cwiki.apache.org/confluence/display/AIRAVATA/Use+Cases. We'll review 
them (and bio apps that we also collected previously) in a wiki document to see 
if our API mappings are correct.

Preliminarily, we see the command line arguments don't contain the full list of 
input and output files.  Additional required inputs may be passed via control 
files, environment variables, etc.  Examples include data libraries for basis 
functions, names of checkpoint files, names of output files, and so forth.  So 
we need a way to say the application may take 4 inputs, but only 1 is needed to 
construct a valid command line, for example.

On the other hand, I don't think we need the InputMetadataType that Chathuri 
introduces below. This overlaps with what is already in the compute resource 
description fields.


Marlon


On 12/8/14, 10:17 PM, Amila Jayasekara wrote:
Hi Chathuri,

I do not know anything about Gaussian, so it's kind of hard for me to
understand what exactly the structures you introduced mean and
why exactly you need those structures.

A more important question is how to come up with a more abstract and
generic Thrift IDL so that you don't need to change it every time we add a
new application. Going through many example applications is certainly a
good way to understand broad requirements and helps to abstract out many
features.

Thanks
-Thejaka


Re: DataCat Project Progress

2014-12-11 Thread Pamidighantam, Sudhakar V
Supun:
I support these goals. I am available for you to engage with, and if there is anything I 
can do to expedite the project please let me know. Even if you think I cannot, 
do please ask anyway.
I have written the original parsers in Perl myself and have directed others 
when the CUP/JFlex system was put in place. I have generated and modified the 
CUP/JFlex code before, so I am familiar with how it works. I will look at the 
paper and may suggest additions.

I cannot test the system now by adding more data to see if it can parse the 
new data. How can we get to that point? This is a critical first step for me 
before I can ask friendly users to test this further. The current state is a 
prototype only and is not interesting enough for any of our users. Unless we 
can add parsing of more data, and more salient data, and create products, it is 
difficult to engage end users in any meaningful way.

Perhaps if this is deployed somewhere in Indiana it may be easier to move 
forward. If you need more data please let me know where I should locate it for 
you to access.

Thanks,
Sudhakar.


On Dec 11, 2014, at 4:11 AM, Supun Nakandala supun.nakand...@gmail.com wrote:

Hi All,

We had the mid-evaluation of the project last Tuesday and the following 
concerns were raised.

  1.  The lack of visibility of the overall solution in the project 
demonstration.
  2.  The ability to come up with a solution where a scientist who does not have 
a background in computer science can create new parsers (metadata extraction 
logic).

The project was demonstrated using the web interface that we developed. For the 
final evaluation we expect to demonstrate the system using the Laravel PHP 
Reference Gateway running on a production server, and show how a new data 
product that gets generated will be identified, indexed, and made available 
for searching; we hope this will handle the first issue.

We also had a meeting with Dr. Dilum, our internal supervisor, where we 
identified things that can be done from now until 15th January, the expected 
project completion date:

  1.  Do a proper performance test and publish a paper before the final marks for 
the project are finalized (marks will be finalized by the end of March).
  2.  Get more parsers working, so that Sudhakar can ask more users to use 
the system. This will help to get more feedback on the system and have real-world 
usage.
  3.  Implement support for provenance-aware workflow execution in Airavata 
using our system.

We have written a draft paper, which I have attached herewith. We showed this 
to Dr. Srinath and Dr. Dilum and they suggested that we do proper performance 
testing (the one already done is not up to the expected standards). Given 
the available time we need to prioritize our work and select a set of tasks 
that is doable and has the most impact. What do you all think?

Draft Paper: 
https://docs.google.com/document/d/1PLfST6hLygQpsr4RlgiDoffmDEwMOWbmb1WZ0uKTtd8/edit#heading=h.6fjqfavj2nov

Literature Review: 
https://drive.google.com/file/d/0B0cLF-CLa59oaXRBazF1aURvQTg/view?usp=sharing

Supun



Re: DataCat Project Progress

2014-12-11 Thread Pamidighantam, Sudhakar V
Suresh is traveling. What issues are you facing? I have root access on this 
system.
Let me see if I can help there.

Thanks,
Sudhakar.

On Dec 11, 2014, at 10:29 AM, Supun Nakandala supun.nakand...@gmail.com wrote:

Hi Sudhakar,

Thank you very much for your support. Suresh gave us a 
machine (gridchem.uits.iu.edu) in IU to deploy the server. But unfortunately we 
are having some SSH issues when logging in to that server. I will contact Suresh, 
get the issue fixed, and deploy an instance of the server there so that you can 
configure and test the parsers yourself.





--
Thank you
Supun Nakandala
Dept. Computer Science and Engineering
University of Moratuwa



Re: [VOTE] Apache Airavata release 0.14 - RC1

2014-12-22 Thread Pamidighantam, Sudhakar V
However, I saw this in the console, in spite of the apparent success. It should be 
fixed in RC2.

Thanks,
Sudhakar.

[ERROR] could not un-bind queue: amq.gen--oDDp43V6d-D4PIFiqGKDg for exchange 
airavata_rabbitmq_exchange
[WARN] Failed to find the subscriber for experiment id: 
_test_d61a9754-e060-4ea5-8217-be84e69a3c6e_test_d61a9754-e060-4ea5-8217-be84e69a3c6e.*_amq.gen--oDDp43V6d-D4PIFiqGKDg
org.apache.airavata.common.exception.AiravataException: could not un-bind 
queue: amq.gen--oDDp43V6d-D4PIFiqGKDg for exchange airavata_rabbitmq_exchange
at 
org.apache.airavata.messaging.core.impl.RabbitMQConsumer.stopListen(RabbitMQConsumer.java:216)
at org.apache.airavata.xbaya.messaging.Monitor.unsubscribe(Monitor.java:243)
at 
org.apache.airavata.xbaya.messaging.Monitor$NotificationMessageHandler.onMessage(Monitor.java:211)
at 
org.apache.airavata.messaging.core.impl.RabbitMQConsumer$2.handleDelivery(RabbitMQConsumer.java:188)
at 
com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:140)
at 
com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:85)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:106)
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:102)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:124)
at com.rabbitmq.client.impl.ChannelN.queueDelete(ChannelN.java:815)
at com.rabbitmq.client.impl.ChannelN.queueDelete(ChannelN.java:61)
at 
org.apache.airavata.messaging.core.impl.RabbitMQConsumer.stopListen(RabbitMQConsumer.java:212)
... 8 more
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol 
method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - 
queue 'amq.gen--oDDp43V6d-D4PIFiqGKDg' in vhost '/' in use, class-id=50, 
method-id=40)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at 
com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at 
com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:343)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:216)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118)
... 11 more
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol 
method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - 
queue 'amq.gen--oDDp43V6d-D4PIFiqGKDg' in vhost '/' in use, class-id=50, 
method-id=40)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:478)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:315)
at 
com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:144)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:91)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:550)
... 1 more

On Dec 22, 2014, at 4:50 PM, Pamidighantam, Sudhakar V spami...@illinois.edu wrote:

Followed the instructions in the Quick-Start tutorial and everything works fine.

+1 for the next part of the process.

Thanks,
Sudhakar.
On Dec 22, 2014, at 3:17 PM, Marlon Pierce marpi...@iu.edu wrote:

Let's keep this vote thread open for 72 more hours unless there are objections.

Marlon

On 12/16/14, 3:02 PM, Chathuri Wimalasena wrote:
Apache Airavata PMC is pleased to call for a vote on the following
Apache Airavata 0.14 release candidate artifacts:

Detailed change log/release notes: 
https://git-wip-us.apache.org/repos/asf?p=airavata.git;a=blob_plain;f=RELEASE_NOTES;hb=refs/tags/airavata-0.14

All Release Artifacts: 
https://dist.apache.org/repos/dist/dev/airavata/0.14/RC1/

PGP release keys (signed using 65541DBC): 
https://dist.apache.org/repos/dist/release/airavata/KEYS

Specific URLs:
GIT source tag:
https://git-wip-us.apache.org/repos/asf?p=airavata.git;a=shortlog;h=refs/tags/airavata-0.14

Source release:
https://dist.apache.org/repos/dist/dev/airavata/0.14/RC1/airavata-0.14-source-release.zip

Binary Artifacts:

Airavata Server:

https://dist.apache.org/repos/dist/dev/airavata/0.14/RC1/apache-airavata-server-0.14-bin.zip
https://dist.apache.org/repos/dist/dev/airavata/0.14/RC1/apache-airavata-server-0.14-bin.tar.gz

Re: Saving the content of the STDOUT and STDERR to database

2015-01-09 Thread Pamidighantam, Sudhakar V
If the STDOUT becomes large it is not good to store it in a database table. 
Perhaps the database can contain a pointer (URI) to the file.
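
A minimal Java sketch of that approach: write the STDOUT content to a file and keep
only its URI for the database record (the paths and the id are made up for
illustration; this is not the actual DATA_TRANSFER_DETAIL handling).

import java.io.IOException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: persist STDOUT to a file and store only a URI pointer in the database row.
public class StdoutPointerExample {
    public static void main(String[] args) throws IOException {
        String experimentId = "exp-123"; // hypothetical experiment id
        String stdoutContent = "application run ... potentially huge output with special characters";

        // Stand-in for the gateway's storage area; a real deployment would use a fixed location.
        Path outDir = Files.createTempDirectory("airavata-" + experimentId);
        Path stdoutFile = outDir.resolve("stdout.txt");
        Files.write(stdoutFile, stdoutContent.getBytes(StandardCharsets.UTF_8));

        // Only this short URI string would go into the database column,
        // instead of the raw STDOUT text itself.
        URI pointer = stdoutFile.toUri();
        System.out.println("store in DB: " + pointer);
    }
}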

Sudhakar.
On Jan 9, 2015, at 5:01 PM, Chathuri Wimalasena kamalas...@gmail.com wrote:

Hi All,

At the moment, we are saving the content of the STDOUT and STDERR into the 
DATA_TRANSFER_DETAIL table in the database, and when retrieving the whole 
experiment object, the DataTransferDetail object is also included. Since we are no 
longer using wrapper scripts for different applications (instead using the 
module itself), most of the applications now write their output to STDOUT. 
This output might contain special characters which can ruin the JSON response 
at the client side. We face this issue with Gamess.

Now we treat STDOUT and STDERR as normal outputs which will be available to 
users at the end of the experiment. Due to that, IMO, we no longer need to save 
the content of those files to the database. If we still want to save them for some 
other reason, we should save them as files, not as string content. This change 
will need some database table data type modifications.

Feel free to provide your input.

Thanks..
Chathuri




Re: [jira] [Commented] (AIRAVATA-1635) [GSoC] Integrate Airavata Java Client SDK with GridChem Client

2015-03-26 Thread Pamidighantam, Sudhakar V
Suresh:

Could you please add Dimuthu under project ID 596 using the “add new user” 
function under the Management menu in the consulting portal www.gridchem.org/consult.

Thanks,
Sudhakar.
On Mar 26, 2015, at 11:16 AM, Dimuthu Upeksha (JIRA) j...@apache.org wrote:

 
 [ https://issues.apache.org/jira/browse/AIRAVATA-1635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382132#comment-14382132 ]
 
 Dimuthu Upeksha commented on AIRAVATA-1635:
 ---
 
 Suresh/Sudhakar,
 Can I have access to a working system of GridChem as we have discussed 
 earlier? Then I'll be able to get familiar with its use cases.
 
 [GSoC] Integrate Airavata Java Client SDK with GridChem Client 
 ---
 
Key: AIRAVATA-1635
URL: https://issues.apache.org/jira/browse/AIRAVATA-1635
Project: Airavata
 Issue Type: Epic
   Reporter: Suresh Marru
 Labels: gsoc, gsoc2015, mentor
 
 GridChem is a Science Gateway that enables users to run computational experiments 
 on multiple supercomputing resources. Currently GridChem, a Java Swing based 
 Web Start client [1], uses an Axis2 based Middleware Service [2] which brokers 
 user actions into computational jobs. 
 This project needs to understand the Client [1] and port it to use Apache 
 Airavata java client SDK. The project has following components:
 * Integrate GridChem client with Airavata User Store (implemented by WSO2 
 Identity Server)
 * Integrate with Airavata API for application executions.
 * Integrate with Atlassian JIRA + Confluence for user error reporting and 
 status notifications.
 [1] - https://github.com/SciGaP/GridChem-Client
 [2] - https://github.com/SciGaP/GridChem-Middleware-Service
 
 
 
 



Re: [jira] [Commented] (AIRAVATA-1635) [GSoC] Integrate Airavata Java Client SDK with GridChem Client

2015-03-27 Thread Pamidighantam, Sudhakar V
Dimuthu:
This is a standard format for an application named Gaussian. This file is 
parsed for various job requirements at the middleware and resource stages for 
proper job submission. There are several applications, each of which has its 
own standard way of setting the job parameters. These files are also used by 
the applications to set variables internally when they execute.
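
As a rough illustration of that parsing step, a standalone Java sketch that pulls the
Link 0 %-directives (such as %mem and %nprocshared) out of a Gaussian-style input like
the one quoted below; it is only a sketch of the idea, not the GridChem or Airavata
parser.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: extract job requirements from the %-directive header of a Gaussian-style input.
public class GaussianDirectiveParser {
    static Map<String, String> parseDirectives(String input) {
        Map<String, String> directives = new LinkedHashMap<>();
        for (String line : input.split("\n")) {
            line = line.trim();
            if (line.startsWith("%") && line.contains("=")) {
                String[] kv = line.substring(1).split("=", 2);
                directives.put(kv[0].trim().toLowerCase(), kv[1].trim());
            }
        }
        return directives;
    }

    public static void main(String[] args) {
        String input = "%chk=water.chk\n%nprocshared=1\n%mem=500MB\n"
                + "#P RHF/6-31g* opt pop=reg\n\nGaussian Test Job 00\n";
        Map<String, String> d = parseDirectives(input);
        // e.g. the middleware could map mem -> scheduler memory, nprocshared -> cores.
        System.out.println("memory request: " + d.get("mem"));
        System.out.println("cores request:  " + d.get("nprocshared"));
        System.out.println("checkpoint:     " + d.get("chk"));
    }
}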


Sudhakar.


On Mar 27, 2015, at 4:05 AM, Dimuthu Upeksha (JIRA) j...@apache.org wrote:

 
 [ https://issues.apache.org/jira/browse/AIRAVATA-1635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383553#comment-14383553 ]
 
 Dimuthu Upeksha commented on AIRAVATA-1635:
 ---
 
 I tried the GridChem client by putting some jobs on the servers and getting 
 outputs. The input files that we push have a format like this.
 
 %chk=water.chk
 %nprocshared=1
 %mem=500MB
 #P RHF/6-31g* opt pop=reg gfinput gfprint iop(6/7=3) SCF=direct 
 
 Gaussian Test Job 00
 Water with archiving
 
 0 1
 O
 H 1 0.96
 H 1 0.96 2 109.471221
 
 Is this format some standard way of passing jobs or a format specific to 
 GridChem?
 Does the client directly pass this file to the middleware, or parse it and pass only 
 the necessary data?
 
 [GSoC] Integrate Airavata Java Client SDK with GridChem Client 
 ---
 
Key: AIRAVATA-1635
URL: https://issues.apache.org/jira/browse/AIRAVATA-1635
Project: Airavata
 Issue Type: Epic
   Reporter: Suresh Marru
 Labels: gsoc, gsoc2015, mentor
 
 GridChem is a Science Gateway that enables users to run computational experiments 
 on multiple supercomputing resources. Currently GridChem, a Java Swing based 
 Web Start client [1], uses an Axis2 based Middleware Service [2] which brokers 
 user actions into computational jobs. 
 This project needs to understand the Client [1] and port it to use Apache 
 Airavata java client SDK. The project has following components:
 * Integrate GridChem client with Airavata User Store (implemented by WSO2 
 Identity Server)
 * Integrate with Airavata API for application executions.
 * Integrate with Atlassian JIRA + Confluence for user error reporting and 
 status notifications.
 [1] - https://github.com/SciGaP/GridChem-Client
 [2] - https://github.com/SciGaP/GridChem-Middleware-Service
 
 
 
 



Re: [jira] [Commented] (AIRAVATA-1635) [GSoC] Integrate Airavata Java Client SDK with GridChem Client

2015-03-28 Thread Pamidighantam, Sudhakar V
In either case the file is not changed on the client side, so that is not of any 
significance. The value of the migration to Airavata is in reducing the maintenance of 
requirement-defining code in multiple locations (such as the Airavata server and 
the HPC host, etc.) to one location, while managing all the relevant requirements 
and validation. Some of the validation can/should be done at the client itself, 
and this also needs strengthening.

But your project will focus on three modular steps, each of which could be 
independent but influences the others:
1. authentication migration to the Airavata Identity service 
2. migration of application execution to Airavata job/workflow management 
3. migration of the ticketing and notification system to Jira 

The ticketing system requires authentication, so they are coupled. Authentication is 
coupled to user registration. Of course job executions require authentication 
as well. So this should be the first one to look at.

So we should look at the issues carefully and think about how a production system can 
be migrated with no, or only minimal, disruption.

Sudhakar.
On Mar 27, 2015, at 11:05 PM, DImuthu Upeksha dimuthu.upeks...@gmail.com 
wrote:

 Hi Sudhakar,
 
 So if we port this use case to Airavata, we don't have to change the file at 
 the client side. The only requirement is to pass this file to Airavata through its 
 API. Am I correct?
 
 
 
 
 
 
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka



Re: [API] Assigning Computational Resources

2015-04-19 Thread Pamidighantam, Sudhakar V
Just to clarify further, there are two kinds of projects … one is a research 
project under which many experiments can be collected. On the organization-of-users 
side, there may be a project under which many users can be grouped, led by a PI 
(principal investigator). Perhaps this distinction should be maintained when a 
Gateway is considered. Each user may have multiple research projects. Each PI 
may have several collaborators using the project (resource) allocation. All 
these need to be tracked for the Gateway, the PI, and the individual users.

Thanks,
Sudhakar.
On Apr 19, 2015, at 4:55 AM, DImuthu Upeksha dimuthu.upeks...@gmail.com wrote:

 Hi Eroma,
 
 Yes, it's clear. Thanks for the clarification.
 
 Thanks 
 Dimuthu
 
 On Sun, Apr 19, 2015 at 9:34 AM, Eroma Abeysinghe 
 eroma.abeysin...@gmail.com wrote:
 Hi Dimuthu,
 
 A project is a grouping for a collection of experiments, meaning we can select 
 a project when creating an experiment. A project can have one or many 
 experiments grouped together. A compute resource is the supercomputer on which 
 application experiments are executed (applications are deployed on a compute 
 resource).
 Currently in Airavata a compute resource and a project do not have a 
 direct link, and neither can we assign projects to a particular resource.
 
 Others, please correct the above if it is not accurate.
 Hope this helps.
 
 Thanks,
 Best Regards,
 Eroma
 
 On Sat, Apr 18, 2015 at 11:03 PM, DImuthu Upeksha 
 dimuthu.upeks...@gmail.com wrote:
 Hi all,
 
 Can we assign/get computational resources for a project using Airavata API?
 There are methods like getAllComputeResourceNames and getComputeResource 
 which enable us to get them, but they do not specify a particular project. What 
 is the concept behind computational resources? Are they available for all 
 projects, or can we specify particular resources for specific projects?
 
 Thanks
 Dimuthu
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka
 
 
 
 -- 
 Thank You,
 Best Regards,
 Eroma
 
 
 
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka



Re: [API] Assigning Computational Resources

2015-04-19 Thread Pamidighantam, Sudhakar V
Well, it definitely depends on the allocation available. The (user) project 
allocation should not be 0 or negative for an experiment to run. As experiments 
are run the allocation gets consumed, and the PI typically has to renew the 
allocation when it is fully consumed (or low, or expires at a particular date), 
and the Gateway needs to approve the renewal and grant a new allocation (amount).
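
A toy Java sketch of that allocation lifecycle, just to make the bookkeeping concrete
(the units, thresholds, and class shape are made up, not an existing Airavata
structure): the balance is charged as experiments run, launches are refused once it
reaches zero or the allocation expires, and a renewal resets it.

import java.time.LocalDate;

// Toy model of a project allocation: charged per experiment, renewable by the PI.
public class ProjectAllocation {
    private double remainingServiceUnits;
    private LocalDate expires;

    ProjectAllocation(double serviceUnits, LocalDate expires) {
        this.remainingServiceUnits = serviceUnits;
        this.expires = expires;
    }

    boolean canLaunchExperiment() {
        return remainingServiceUnits > 0 && LocalDate.now().isBefore(expires);
    }

    void charge(double serviceUnits) {
        remainingServiceUnits -= serviceUnits;   // may go slightly negative on the last job
    }

    // Renewal approved by the gateway: new amount and a new expiry date.
    void renew(double serviceUnits, LocalDate newExpiry) {
        remainingServiceUnits = serviceUnits;
        expires = newExpiry;
    }

    public static void main(String[] args) {
        ProjectAllocation alloc = new ProjectAllocation(100.0, LocalDate.now().plusMonths(6));
        alloc.charge(40.0);
        alloc.charge(70.0);
        System.out.println("can launch another experiment? " + alloc.canLaunchExperiment());
        alloc.renew(500.0, LocalDate.now().plusYears(1));
        System.out.println("after renewal: " + alloc.canLaunchExperiment());
    }
}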

Sudhakar.
On Apr 19, 2015, at 8:42 AM, DImuthu Upeksha dimuthu.upeks...@gmail.com wrote:

 Hi Sudhakar,
 
 What is the effect on the allocation of computational resources in these two 
 types of projects? I think, according to Eroma's clarification, it does not 
 depend on high-level project management.
 
 Thanks,
 Dimuthu
 
 
 
 
 
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka



Re: [API] Assigning Computational Resources

2015-04-20 Thread Pamidighantam, Sudhakar V
Yes, we can consider projects to be created by users but activated by Gateway 
Admins, with resources assigned to the project; the group can be created then 
(or, by default, the project ID could be the group ID). Once active, the PI should be 
able to add users. 

Sudhakar.
On Apr 20, 2015, at 8:02 AM, Suresh Marru sma...@apache.org wrote:

 Hi Sudhakar,
 
 This is a good use case. Can we then consider that projects are created and managed 
 by users? And to manage allocations, how about we use the group concept? So 
 as soon as an allocation is approved, a group will be created and the PI can 
 manage the users and the group-wide allocation. 
 
 Suresh
 
 On Apr 19, 2015, at 11:53 AM, Pamidighantam, Sudhakar V 
 spami...@illinois.edu wrote:
 
 Well it definitely depends on the allocation available. The (user) project 
 allocation should not be 0 or negative for an experiment to run. As 
 experiments are run the allocation gets consumed and the  PI typically has 
 to renew an allocation when it is fully consumed ( or low or expires at a 
 particular date) and the Gateway needs to approve renewal and grant new 
 allocation (amount). 
 
 Sudhakar.
 On Apr 19, 2015, at 8:42 AM, DImuthu Upeksha dimuthu.upeks...@gmail.com 
 wrote:
 
 Hi Sudhakar,
 
 What is the effect for the allocation of computational resources in these 
 two types of projects? I think according to Eroma's clarification, it does 
 not depend on high level project management.
 
 Thanks,
 Dimuthu
 
 On Sun, Apr 19, 2015 at 6:21 PM, Pamidighantam, Sudhakar V 
 spami...@illinois.edu wrote:
 Just to clarify further, there are two kinds of projects: one is a research 
 project under which many experiments can be collected. On the organization of 
 users side, there may be a project under which many users can be grouped, led 
 by a PI (principal investigator). Perhaps this distinction should be 
 maintained when a Gateway is considered. Each user may have multiple research 
 projects. Each PI may have several collaborators using the project (resource) 
 allocation. All these need to be tracked for the Gateway, the PI and the 
 individual users.
 
 Thanks,
 Sudhakar.
 
 On Apr 19, 2015, at 4:55 AM, DImuthu Upeksha dimuthu.upeks...@gmail.com 
 wrote:
 
 Hi Eroma,
 
 Yes, it's clear. Thanks for the clarification.
 
 Thanks 
 Dimuthu
 
 On Sun, Apr 19, 2015 at 9:34 AM, Eroma Abeysinghe 
 eroma.abeysin...@gmail.com wrote:
 Hi Dimuthu,
 
 Project is a grouping for a collection of experiments, meaning we can 
 select a project when creating an experiment. A project can have one or 
 many experiments grouped together. A compute resource is the supercomputer 
 on which application experiments are executed (applications are deployed on 
 a compute resource).
 Currently in Airavata a compute resource and a project do not have a direct 
 link, nor can we specify projects for a particular resource.
 
 Others please correct if above is not accurate.
 Hope this helps.
 
 Thanks,
 Best Regards,
 Eroma
 
 On Sat, Apr 18, 2015 at 11:03 PM, DImuthu Upeksha 
 dimuthu.upeks...@gmail.com wrote:
 Hi all,
 
 Can we assign/get computational resources for a project using Airavata API?
 There are methods like getAllComputeResourceNames and getComputeResource 
 which enable us to get them, but they do not specify a particular project. 
 What is the concept behind computational resources? Are they available for 
 all projects, or can we specify particular resources for specific projects?
 
 Thanks
 Dimuthu
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka
 
 
 
 -- 
 Thank You,
 Best Regards,
 Eroma
 
 
 
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka
 
 
 
 
 -- 
 Regards
 W.Dimuthu Upeksha
 Undergraduate
 Department of Computer Science And Engineering
 University of Moratuwa, Sri Lanka
 
 



Re: Register a remote computational resource steps

2015-04-30 Thread Pamidighantam, Sudhakar V
Suresh:
I think we need to have a local deployment of this test PHP interface and 
instructions to use that instance. 
I will talk to Eroma about creating these in a document similar to 
XBAYA-Quick-Start tutorial. 

This way anybody can deploy a local PHP instance, register resources, and test 
the execution. 
I believe the test PHP gateway code is already in git. Could you please verify 
and comment on this plan?

Thanks,
Sudhakar.
On Apr 30, 2015, at 8:51 AM, Suresh Marru sma...@apache.org wrote:

 Hi Alfredo,
 
 We will be happy to walk through these steps using a test PHP interface which 
 consumes the same API you are programming against. 
 
 http://dev.test-drive.airavata.org/portal/pga/public/
 
 Navigating through the working PHP code in this repo might help understand 
 the API sequences better.
 
 Suresh
 
 On Apr 30, 2015, at 9:30 AM, Chathuri Wimalasena kamalas...@gmail.com 
 wrote:
 
 Hi, 
 
 You do not need to have an Airavata server running on the remote instance; 
 the local instance is sufficient. Here are the steps you should follow in 
 order to use your application: 
 Register the compute resource
 Register the gatewayResource preference for the gateway profile 
 Register an application module
 Register the application interface
 Register the application deployment
 I assume you went through all the above steps. From each register method of 
 the API, you will get an ID in return. When you create the experiment object, 
 you need to give the application interface id as the "applicationId" and the 
 compute resource id as the "resourceHostId" of the 
 ComputationalResourceScheduling object. A sample experiment object looks like 
 this: 
 
 
 Experiment simpleExperiment =
         ExperimentModelUtil.createSimpleExperiment(projectID, testUser,
                 "TestFR_Ultrascan_Experiment", "Ultrascan Experiment run",
                 appId, applicationInputs);
 simpleExperiment.setExperimentOutputs(appOutputs);
 // hostId below is the compute resource id returned when registering the host
 ComputationalResourceScheduling scheduling =
         ExperimentModelUtil.createComputationResourceScheduling(hostId, 4, 1, 1,
                 "normal", 30, 0, 1, null);
 UserConfigurationData userConfigurationData = new UserConfigurationData();
 userConfigurationData.setAiravataAutoSchedule(false);
 userConfigurationData.setOverrideManualScheduledParams(false);
 userConfigurationData.setComputationalResourceScheduling(scheduling);
 simpleExperiment.setUserConfigurationData(userConfigurationData);
 experimentId = airavata.createExperiment(gatewayId, simpleExperiment);
 
 Hope this helps.
 
 Thanks..
 Chathuri
 
 
 On Wed, Apr 29, 2015 at 8:39 AM, SmashRod Alfredo smash...@hotmail.it 
 wrote:
 Hi Everyone,
 I need some details on the steps necessary in order to register a remote 
 computational resource on Airavata.
 
 The first question is really dumb, but I can't find a detailed answer or 
 explanation anywhere: is it necessary to have an airavata-server instance 
 running on both the local machine (where the computational resource is 
 registered) and the remote machine?
 
 
 I've tried to register an internal computational resource without success, 
 doing the following:
 - Register the computational resources (following the provided samples)
 computeResourceID = registerComputeHost(remoteMachineDomainName,
         "RTSRV SSC Machine", ResourceJobManagerType.FORK, "push", "/usr/bin",
         SecurityProtocol.SSH_KEYS, 22, null);
 System.out.println("Resource Id is " + computeResourceID);
 
 - Register the GatewayResourceProfile
 
 ComputeResourcePreference rtsrvComputateResourcePreference =
         RegisterSampleApplicationsUtils.createComputeResourcePreference(
                 computeResourceID, null, false, null,
                 JobSubmissionProtocol.SSH, DataMovementProtocol.SCP, "/tmp");

 GatewayResourceProfile gatewayResourceProfile = new GatewayResourceProfile();
 gatewayResourceProfile.setGatewayID(DEFAULT_GATEWAY);
 gatewayResourceProfile.setGatewayName(DEFAULT_GATEWAY);
 gatewayResourceProfile.addToComputeResourcePreferences(rtsrvComputateResourcePreference);

 String gatewayProfile =
         airavataClient.registerGatewayResourceProfile(gatewayResourceProfile);
 System.out.println("Gateway Profile is registered with Id " + gatewayProfile);
 writeIdPropertyFile("Gateway Profile ID", gatewayProfile, propertyFile);
 
 - Register an application deployment of the remote host
 
 When I try to execute a workflow using the defined application on the 
 registered remote machine, I get the following error:
 Computational resource scheduling is not configured for host ..

 Is something missing on my remote machine (an airavata-server running, or some 
 kind of services?), or is something not properly configured on the local 
 machine (some properties file?)?
 
 Thanks for the explanation
 
 Alfredo
 
 



Re: chessisNumber?

2015-06-18 Thread Pamidighantam, Sudhakar V
I have not heard of such a requirement in LSF. Can somebody explain how this is 
used?

Sudhakar.
On Jun 18, 2015, at 7:56 AM, Pierce, Marlon <marpi...@iu.edu> wrote:

Is this chassis number?

From: Chathuri Wimalasena <kamalas...@gmail.com>
Reply-To: dev@airavata.apache.org
Date: Thursday, June 18, 2015 at 8:53 AM
To: dev@airavata.apache.org
Subject: Re: chessisNumber?

Hi Suresh,

It was added when we add LSF job manager. As I remember, it was required for 
that.

Thanks..
Chathuri

On Thu, Jun 18, 2015 at 8:38 AM, Suresh Marru <sma...@apache.org> wrote:
Hi All,

What is chessisNumber in the scheduling model [1]?

Suresh

[1] - https://github.com/apache/airavata/blob/master/thrift-interface-descriptions/airavata-api/scheduling_model.thrift




Re: chessisNumber?

2015-06-18 Thread Pamidighantam, Sudhakar V
Thanks Chaturi:

OK. this is to specify a particular host in a cluster. This option is used to 
test some thing in one host before going to production for all hosts or if one 
host has special resources that are needed.

Sudhakar.
On Jun 18, 2015, at 8:55 AM, Chathuri Wimalasena <kamalas...@gmail.com> wrote:

According to LSF xslt file, chassis name is specified as #BSUB -m c. May be 
Lahiru can give more insight.

Thanks..
Chathuri

On Thu, Jun 18, 2015 at 9:16 AM, Pamidighantam, Sudhakar V <spami...@illinois.edu> wrote:
I have not heard such a requirement in LSF. Can somebody explain how this is 
used.

Sudhakar.

On Jun 18, 2015, at 7:56 AM, Pierce, Marlon <marpi...@iu.edu> wrote:

Is this chassis number?

From: Chathuri Wimalasena <kamalas...@gmail.com>
Reply-To: dev@airavata.apache.org
Date: Thursday, June 18, 2015 at 8:53 AM
To: dev@airavata.apache.org
Subject: Re: chessisNumber?

Hi Suresh,

It was added when we add LSF job manager. As I remember, it was required for 
that.

Thanks..
Chathuri

On Thu, Jun 18, 2015 at 8:38 AM, Suresh Marru <sma...@apache.org> wrote:
Hi All,

What is chessisNumber in the scheduling model [1]?

Suresh

[1] - https://github.com/apache/airavata/blob/master/thrift-interface-descriptions/airavata-api/scheduling_model.thrift






Re: [GSoC] Hangout - Airavata and PGA Overview

2015-05-28 Thread Pamidighantam, Sudhakar V
8 CT is fine for me.

Sudhakar.
On May 28, 2015, at 7:05 AM, Suresh Marru <sma...@apache.org> wrote:

Hi All,

Will 9 am eastern time (6.30 pm IST) tomorrow (Friday 29th May) work for every 
one for a google hangout?

After a brief overview of Airavata and PGA, lets all of us do hand-on tutorials 
to make sure we understand the basic concepts to work with Airavata. It will be 
useful if you can finish all of the three tutorials before the hangout session 
- 
https://cwiki.apache.org/confluence/display/AIRAVATA/Airavata+Quick-Start+Tutorials

Suresh



Re: launching a job through Airavata to Mesos cluster

2015-10-28 Thread Pamidighantam, Sudhakar V
Pankaj:

If Mesos is a scheduler, perhaps Airavata could be enhanced to submit jobs 
using the Mesos scheduler (similar to the SLURM/PBS/LSF schedulers). Is this 
what you are referring to?

Thanks,
Sudhakar.
On Oct 28, 2015, at 11:11 AM, Pierce, Marlon 
> wrote:

I’ll add: if submitting a job to (for example) a SLURM queuing system, we need 
to create the correct SLURM submission script and submit it by executing the 
correct command line operation (sbatch).

From: Marlon Pierce >
Reply-To: Airavata Dev >
Date: Wednesday, October 28, 2015 at 12:08 PM
To: Airavata Dev >, 
Suresh Marru >, Pankaj Saha 
>
Subject: Re: launching a job through Airavata to Mesos cluster

Hi Pankaj, can you say more about what you mean by “launch a dockerized job”?

Marlon


From: Pankaj Saha >
Reply-To: Airavata Dev >
Date: Wednesday, October 28, 2015 at 11:56 AM
To: Suresh Marru >
Cc: Airavata Dev >
Subject: Re: launching a job through Airavata to Mesos cluster

Hi Suresh,

My initial understanding is, I have to launch a dockerized job through Airavata 
which will be run in the Mesos cluster.  I was looking for the code which 
submits jobs and wanted to make changes such a way that it can submit docker 
containers to Mesos/Marathon cluster.

I can use 0.15 branch and I have no idea about data transfer protocol and job 
submission protocols that Shameera has mentioned. I may want to submit jobs by 
submitting a JSON through command line or any other way that you guys feel is 
more appropriate.

I can talk to Prof. Madhu and let you know more on the requirement.

Thanks
Pankaj







On Wed, Oct 28, 2015 at 11:23 AM, Suresh Marru 
> wrote:
Pankaj can you clarify the following:

Do you want an Airavata instance to run some dockerized applications scheduled 
by Mesos? Or do you just need a client which will connect to Airavata hosted 
and managed by Mesos/Marathon?

Suresh

On Oct 28, 2015, at 10:50 AM, Shameera Rathnayaka 
> wrote:

Hi Pankaj,

Wich version of Airavata you are working on?  what is the data transfer 
protocol? What is the job submission protocol?

Short answer:  if you are using Airavata 0.15 then you need to write new 
Provider implementation to submit the request to Mesos/Marathon cluster.  But 
if you are using Airavata 16.0 which is current master, then you need to write 
JobSubmissionTask implementation. Either case you can go through the existing 
implementations, for Provider implementation see  SSHProvider  and 
JobSubmissionTask implementation see SSHJobSubmissionTask.

If I get the answers to my questions then i can provide exactly what you need 
to do. BTW we have cleaned our internal architecture in Airavata 16.0, as a 
developer you would find it easy to work with Airavata 16.0 that Airavata 15.0. 
But notice master is not yet stable as Airavata 15.0.

Regards,
Shameera.


On Tue, Oct 27, 2015 at 1:53 PM Pankaj Saha 
> wrote:
Hello Shameera,
I am working on jet-stream project, where I have to find out a way to submit a 
job in mesos/marathon cluster through Airavata client. I don't have much idea 
from where to start looking into. Can you please give some clue so that I can 
start working and making changes to java code for the same.

Thanks
Pankaj

--
Shameera Rathnayaka





Re: Airavata installation to submit a job in Mesos/Maraton cluster

2015-12-05 Thread Pamidighantam, Sudhakar V
Pankaj;
Is your Mesos cluster registered in the PGA as a compute resource? Are the 
hello world application catalog entries made? Can you do this in the PGA using 
gw56 (I assume you have admin privileges)? If not, somebody can do this for you.

Thanks,
Sudhakar.
On Dec 5, 2015, at 1:26 PM, Pankaj Saha 
> wrote:

Hi Supun,
I have pointed to 'airavata-server' => 
'gw56.iu.xsede.org',
 to start working with.As you have already provided all the required previleges 
to my new userid, I am able to see all computer resources and applications in 
it. So it was not a problem with PGA but Airavata server installation in my 
local machine.

Now can anyone please guide me for launching a hello world application through 
this portal so that it can launch a job in a mesos cluster. I know that I can 
not use the 
"gw56.iu.xsede.org"
 server, as I want to launch a job in my local mesos/marathon cluster. But if I 
get some clue how to start with I can possible fix my local Airavata server 
installation and will try to launch a job in local resource.

Thanks
Pankaj

On Fri, Dec 4, 2015 at 5:41 PM, Supun Nakandala 
> wrote:
We haven't done any changes to 0.15 branches recently. So if it worked for you 
previously it should still work without any issue. I suggest that you first try 
to setup your pga to work with SciGaP hosted airavata and then try to connect 
it to your own airavata. But we don't have any active deployments of 0.15. In 
that case you may want to use master branch

On Fri, Dec 4, 2015 at 5:36 PM, Pankaj Saha 
> wrote:

While running server, no error messages. But while creating projects its throws 
the error message which I have posted.

On 04-Dec-2015 4:55 PM, "Supun Nakandala" 
> wrote:
do you get any airavata error messages?

On Fri, Dec 4, 2015 at 4:51 PM, Pankaj Saha 
> wrote:
I am using 15 branch
yes I could not even create a project.

On Fri, Dec 4, 2015 at 4:49 PM, Supun Nakandala 
> wrote:
Which Airavata/PGA version are you using?

Looking at your error message I see even the project is also not set.

On Fri, Dec 4, 2015 at 12:16 PM, Pankaj Saha 
> wrote:
Hello Devs,
I have a requirement to launch a job (simple hello world kind of) on 
mesos/marathon cluster through Airavata client (PGA preferable ). I have tried 
installing a local Aiaravata PGA with hosted Airavata server on my machine. Now 
the problem is with PGA installation, its not allowing me create or explore any 
project/experiment.
Its saying:
Required field 'gatewayId' was not present! Struct: 
createProject_args(gatewayId:null, project:null)

I always used the default  gateway-Id which is "default" and the same is 
mentioned in the server configuration properties file too. This problem I have 
never faced earlier and Airavata installation used to be very simple.

Second challenge is to run a job through Airavata client, so that it can launch 
a job in a Mesos cluster through Marathon. So Airavata should be able to invoke 
a rest API based client of Marathon to do the job.
I seek help to resolve the first issue and then need to find a way for 
launching jobs.


Thanks
Pankaj





--
Thank you
Supun Nakandala
Dept. Computer Science and Engineering
University of Moratuwa




--
Thank you
Supun Nakandala
Dept. Computer Science and Engineering
University of Moratuwa



--
Thank you
Supun Nakandala
Dept. Computer Science and Engineering
University of Moratuwa




Re: [VOTE] Apache Airavata Release 0.15 - RC1

2015-12-07 Thread Pamidighantam, Sudhakar V
+1 for the Apache Airavata 0.15 release.

Thanks,
Sudhakar.
On Dec 4, 2015, at 5:19 AM, Suresh Marru  wrote:

> Apache Airavata PMC is pleased to call for a vote on the following Apache 
> Airavata 0.15 release candidate artifacts:
> 
> Detailed change log/release notes:
> 
> https://git-wip-us.apache.org/repos/asf?p=airavata.git;a=blob_plain;f=RELEASE_NOTES;hb=refs/tags/airavata-0.15
> 
> All Release Artifacts:
> 
> https://dist.apache.org/repos/dist/dev/airavata/0.15/RC1/
> 
> PGP release keys (signed using 617DDBAD):
> 
> https://dist.apache.org/repos/dist/release/airavata/KEYS
> 
> Specific URL's:
> 
> GIT source tag:
> https://git-wip-us.apache.org/repos/asf?p=airavata.git;a=shortlog;h=refs/tags/airavata-0.15
> 
> Source release:
> https://dist.apache.org/repos/dist/dev/airavata/0.15/RC1/airavata-0.15-source-release.zip
> 
> Binary Artifacts:
> 
> Airavata Server:
> https://dist.apache.org/repos/dist/dev/airavata/0.15/RC1/apache-airavata-server-0.15-bin.tar.gz
> https://dist.apache.org/repos/dist/dev/airavata/0.15/RC1/apache-airavata-server-0.15-bin.zip
> 
> Maven staging repo:
> https://repository.apache.org/content/repositories/orgapacheairavata-1006/
> 
> Please verify the artifacts and vote. The vote will be open for atleast 72 
> hours.
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 



Re: Required field 'gatewayId' was not present!

2015-11-28 Thread Pamidighantam, Sudhakar V
Is it possible to provide a default gatewayID for a given hosted Airavata 
service to avoid this problem? Of course, if somebody wants to define it a 
particular way, then the default can be changed to whatever the gateway wants 
to set it to.

Thanks,
Sudhakar.
On Nov 28, 2015, at 1:35 PM, Pankaj Saha 
> wrote:

Hi Supun,
I am getting this error when trying to create any project or experiment or 
trying to browse them.
the error message is:
Required field 'gatewayId' was not present! Struct: 
createProject_args(gatewayId:null, project:null)
I am using a local hosted airavata server and below is my pga_config details


   /* 'airavata-server' => 
'gw56.iu.xsede.org',*/
'airavata-server' => 'localhost',
'airavata-port' => '8930',
'airavata-timeout' => '100',
'gateway-id' => 'default',
'server-allowed-file-size' => 64,
'experiment-data-dir' => '/../experimentData',
'experiment-data-absolute-path' => '/var/www/experimentData',
'ssh-user' => 'root',

Can you please help me finding the problem witjh my installation?

Thanks
Pankaj




Re: Airavata 0.16 Release Planning

2016-06-03 Thread Pamidighantam, Sudhakar V
None. +1 for the release.

Thanks,
Sudhakar.
On Jun 3, 2016, at 6:55 AM, Suresh Marru 
> wrote:

I gated the release earlier but this task was long done. Any objections to move 
forward with 0.16 release?

Suresh

On Mar 30, 2016, at 2:53 PM, Suresh Marru 
> wrote:

To contradict my proposal, I would like to work on 
https://issues.apache.org/jira/browse/AIRAVATA-1945
 before requesting feature freeze. Should not take long.

Suresh

On Mar 28, 2016, at 11:29 AM, Pierce, Marlon 
> wrote:

Do we have any outstanding tasks that need to be wrapped up and committed to 
dev?

From: Shameera Rathnayaka 
>
Reply-To: "dev@airavata.apache.org" 
>
Date: Monday, March 28, 2016 at 11:20 AM
To: Airavata Dev >
Subject: Re: Airavata 0.16 Release Planning

+1

On Mon, Mar 28, 2016 at 10:50 AM Suresh Marru 
> wrote:
Hi All,

Before we go too far, how about we call a feature freeze and stat working on 
0.16 release? Unless any one is in the middle of a development activity, how 
about we target end of the week to start working on it?

Suresh
--
Shameera Rathnayaka





Re: Airavata 0.16 Release Planning

2016-03-28 Thread Pamidighantam, Sudhakar V
+1

Sudhakar.
On Mar 28, 2016, at 10:20 AM, Shameera Rathnayaka 
> wrote:

+1

On Mon, Mar 28, 2016 at 10:50 AM Suresh Marru 
> wrote:
Hi All,

Before we go too far, how about we call a feature freeze and stat working on 
0.16 release? Unless any one is in the middle of a development activity, how 
about we target end of the week to start working on it?

Suresh
--
Shameera Rathnayaka



Re: Refactor JIRA Component list

2016-08-04 Thread Pamidighantam, Sudhakar V
Could you name them devop_tools?

Thanks,
Sudhakar.
On Aug 4, 2016, at 10:58 AM, Shameera Rathnayaka 
> wrote:

Hi Amila,

Thanks for your feedback, agree with your 1 and 2 thoughts, let's remove those 
two components.

What I mean by dev-tools is Ansible deployment scripts, Docker Images and etc 
... . For an example If there any issue with Airavata ansible scripts then it 
should go under this dev-tools components.

Thanks,
Shameera.

On Thu, Aug 4, 2016 at 11:42 AM Amila Jayasekara 
> wrote:
Some feedback as follows;

1. Unit-test should not be a separate component. Every component must have unit 
tests and issues with unit tests must go into the respective component.
2. The release should also not be a separate component. Every component should 
have a release version, and issues should go into that release version.
3. I am not very clear why we have a separate component called dev-tools. To 
me, it seems dev-tools are not part of airavata, so issues with dev-tools 
should go to (report) the relevant project. Also, if the issue is related to 
configurations, it should go to the respective component. e.g.,- distribution.

Thanks
-Thejaka


On Mon, Jul 25, 2016 at 5:21 PM, Shameera Rathnayaka 
> wrote:
Hi Devs,

Airavata 0.16 is hot off the press, and now latest code refers as 
0.17-SNAPSHOT.  Let's do some refactoring to the JIRA component list. Some of 
them are outdated and some components are not in the list. Let's decide what is 
the best component list to be available in JIRA.

Here is my initial suggestion :

Api-Server - All API server related issues goes under this component
Orchestrator - All Orchestrator related issues
Monitoring - All monitoring related issues
Messaging - All messaging related issues
GFac - All Task execution related issues
Registry - All app catalog, exp catalog, credential store, replica catalog 
related related issues.
Cloud - All Cloud related issues goes here.
Security - All security related issues.
Workflow - All workflow related issues.
Integration-Test- All Integration test related issues
Unit-Test - All unit test related issues.
Client - All client related issues ex: Desktop client, python client(jupyter) 
etc ...
PGA - All PGA related issues.
Web-Site - All web site related issues.
Documents - All cwiki and document related issues.
Release - Any issue with release goes under this.
Labs - All research level issues goes under this.
Dev-Tools - All development tools (docker, ansible etc ...) related issues goes 
under this.

Feedback is most welcomed.

Thanks,
Shameera.



--
Shameera Rathnayaka



Re: Spark context diagrams

2017-05-26 Thread Pamidighantam, Sudhakar V
Apoorv:

Can you create these diagrams with Creately or some other software and annotate 
them better?

It is a bit difficult for old eyes to read them.

Thanks,
Sudhakar.

On May 26, 2017, at 11:25 AM, Apoorv Palkar 
> wrote:

Hey, I've been working on the Spark details and posted 2 diagrams on Google 
Docs in the link below. Hopefully I can get in the groove and have it be 
working with/as the potential orchestrator.



https://docs.google.com/document/d/1kjIBC0ianDVJlSuPs8FanCTO8ili1VETA5xKeFqo1gY/edit?usp=sharing





Re: Using Docker images to run Thrift

2017-06-07 Thread Pamidighantam, Sudhakar V
+1, anything to reduce manual installation and the corresponding issues would 
be welcome. There should be testing to ensure this went through correctly and 
to suggest practical ways to correct any shortcomings.

Thanks,
Sudhakar.
On Jun 7, 2017, at 3:02 PM, Christie, Marcus Aaron 
> wrote:

Dev,

After running into difficulties getting Thrift to build on my laptop I started 
exploring the possibility of using Docker images to run Thrift.  I’ve created a 
pull request of my changes here: 
https://github.com/apache/airavata/pull/112

One question: I opted to just switch the scripts to using Docker, but I thought 
perhaps that could be a command line flag whether to use Docker or not.  My 
hope is that using Docker images to run Thrift will be a lot more convenient 
than requiring developers to install Thrift.

Your feedback is welcome.

Thanks,

Marcus



Re: Welcome Marcus Christie as Airavata PMC member

2017-11-17 Thread Pamidighantam, Sudhakar V
Congratulations to Marcus, and welcome to the PMC.

Thanks,
Sudhakar.
> On Nov 17, 2017, at 2:11 PM, Suresh Marru  wrote:
> 
> Hi All,
> 
> The Project Management Committee (PMC) for Apache Airavata has asked Marcus 
> Christie to become a PMC member based on his contributions to the project. We 
> are pleased to announce that he has accepted.
> 
> As you know, Marcus has been stewarding Airavata already as a committer and 
> being a PMC member will enable him to assist with the management and to guide 
> the direction of the project as well.
> 
> Please join me in welcoming Marcus to Airavata PMC
> 
> Cheers,
> Suresh
> (On Behalf of Apache Airavata PMC)



Re: Running Airavata in standalone mode

2017-12-08 Thread Pamidighantam, Sudhakar V
I believe these kinds of errors are seen many times when the deployment 
instructions are not followed. I think it is time to capture them and suggest 
actionable solutions, or refer back to the deployment instructions.

Thanks,
Sudhakar.
On Dec 8, 2017, at 7:40 AM, DImuthu Upeksha 
> wrote:

Hi Suresh/ Marcus,

I'm trying to run Airavata in standalone mode by running ServerMain class in 
airavata-standalone-server module using Idea. However I came across this [1] 
error trail. Now I'm working on investigating them one by one and please 
suggest what I can do to fix them if you have come across with same issues 
before. In the mean time, if you can send me a sample 
airavata-server.properties file and the dump of a database, it will be really 
helpful.

[1] https://gist.github.com/DImuthuUpe/dffd0275022dcd0242b1ad1bd762298a

Thanks
Dimuthu



Re: Getting error while trying to clone experiment

2017-12-27 Thread Pamidighantam, Sudhakar V
Cloning requires access to the data (inputs, etc.) from the previous experiment 
for reuse, and that data is usually at the service host. Perhaps permissions 
are the issue in your case.

Sudhakar.
On Dec 27, 2017, at 1:52 PM, Saurabh Agrawal 
> wrote:

Hi all,
Got this exception while trying to clone an experiment on SEAGrid using my 
Python script.

(ENV) [js-17-195] sa0412 ~/test/airavata-jupyter-notebook-client-->python3.6 
test.py
Traceback (most recent call last):
  File "test.py", line 26, in 
main()
  File "test.py", line 23, in main
username, existing_exp_id, new_exp_name, new_exp_proj_id)
  File "/home/sa0412/test/airavata-jupyter-notebook-client/api.py", line 116, 
in clone_experiment
cloned_exp_id = airavataClient.cloneExperiment(auth_token, existing_exp_id, 
new_exp_name, new_exp_proj_id)
  File 
"/home/sa0412/test/airavata-jupyter-notebook-client/apache/airavata/api/Airavata.py",
 line 6244, in cloneExperiment
return self.recv_cloneExperiment()
  File 
"/home/sa0412/test/airavata-jupyter-notebook-client/apache/airavata/api/Airavata.py",
 line 6277, in recv_cloneExperiment
raise result.ase
apache.airavata.api.error.ttypes.AiravataSystemException: 
AiravataSystemException(airavataErrorType=2, message='Error while cloning the 
experiment with existing configuration. More info : Error while getting the 
experiment. More info : User does not have permission to access this resource')

The following statement clones an experiment:
airavataClient.cloneExperiment(auth_token, existing_exp_id, new_exp_name, 
new_exp_proj_id)

BTW, I am able to launch an experiment using the following statement:
airavata_client.launchExperiment(auth_token, experiment_id, gateway_id)

Please suggest.

Thanks in advance,
Saurabh Agrawal



Re: Getting error while trying to clone experiment

2017-12-27 Thread Pamidighantam, Sudhakar V
Saurabh:

Perhaps some deeper debugging is needed to see which resource it is trying to 
access when the permission problem occurs.
Do you have access to the server logs?

I am not sure how the non URI input fields are saved for cloning.

Sudhakar.
On Dec 27, 2017, at 3:09 PM, Saurabh Agrawal 
<agras...@umail.iu.edu<mailto:agras...@umail.iu.edu>> wrote:

Hi Sudhakar,

Thanks for replying.
Actually, my previous experiment was an echo experiment, and it only had a 
string input (Val1) and no file or other input.
Please suggest if it is required to provide permission on a string.
Also, what is the best way to provide permission on a string?

On Wed, Dec 27, 2017 at 2:39 PM, Pamidighantam, Sudhakar V 
<spami...@illinois.edu<mailto:spami...@illinois.edu>> wrote:
Cloning requires access to the data (inputs etc) from the previous experiment 
for reuse that are usually at the service host. Perhaps the permissions may be 
the issue in your case.

Sudhakar.

On Dec 27, 2017, at 1:52 PM, Saurabh Agrawal 
<agras...@umail.iu.edu<mailto:agras...@umail.iu.edu>> wrote:

Hi all,
Got this exception while trying to clone an experiment on SEAGrid using my 
Python script.

(ENV) [js-17-195] sa0412 ~/test/airavata-jupyter-notebook-client-->python3.6 
test.py
Traceback (most recent call last):
  File "test.py", line 26, in 
main()
  File "test.py", line 23, in main
username, existing_exp_id, new_exp_name, new_exp_proj_id)
  File "/home/sa0412/test/airavata-jupyter-notebook-client/api.py", line 116, 
in clone_experiment
cloned_exp_id = airavataClient.cloneExperiment(auth_token, existing_exp_id, 
new_exp_name, new_exp_proj_id)
  File 
"/home/sa0412/test/airavata-jupyter-notebook-client/apache/airavata/api/Airavata.py",
 line 6244, in cloneExperiment
return self.recv_cloneExperiment()
  File 
"/home/sa0412/test/airavata-jupyter-notebook-client/apache/airavata/api/Airavata.py",
 line 6277, in recv_cloneExperiment
raise result.ase
apache.airavata.api.error.ttypes.AiravataSystemException: 
AiravataSystemException(airavataErrorType=2, message='Error while cloning the 
experiment with existing configuration. More info : Error while getting the 
experiment. More info : User does not have permission to access this resource')

The following statement clones an experiment:
airavataClient.cloneExperiment(auth_token, existing_exp_id, new_exp_name, 
new_exp_proj_id)

BTW, I am able to launch an experiment using the following statement:
airavata_client.launchExperiment(auth_token, experiment_id, gateway_id)

Please suggest.

Thanks in advance,
Saurabh Agrawal




--
Thanks,
Saurabh Agrawal



Re: [ANNOUNCE] Welcome Dimuthu Upeksha as Airavata PMC member and committer

2018-03-06 Thread Pamidighantam, Sudhakar V
Congratulations and welcome to the PMC, Dimuthu.

Sudhakar.
> On Mar 6, 2018, at 10:32 AM, Suresh Marru  wrote:
> 
> Hi All,
> 
> The Project Management Committee (PMC) for Apache Airavata has asked Dimuthu 
> Upeksha to become a committer and PMC member based on his contributions to 
> the project. We are pleased to announce that he has accepted.
> 
> Being a committer enables easier contribution to the project since there is 
> no need to go via the patch submission process. This should enable better 
> productivity. Being a PMC member enables assistance with the management and 
> to guide the direction of the project.
> 
> Please join me in welcoming Dimuthu to Airavata.
> 
> Suresh
> (On Behalf of Apache Airavata PMC)



Re: Metascheduler work

2018-12-17 Thread Pamidighantam, Sudhakar V
I thought Pankaj presented a way to implement MPI executions through Mesos more 
recently.
He may want to comment on this.

Thanks,
Sudhakar.

From: Marlon Pierce 
Reply-To: 
Date: Monday, December 17, 2018 at 8:52 AM
To: "dev@airavata.apache.org" 
Subject: Re: Metascheduler work

Hi Dimuthu,

This is something we should re-evaluate. Mangirish Wangle looked at Mesos 
integration with Airavata back in 2016, but he ultimately ran into many 
difficulties, including getting MPI jobs to work, if I recall correctly.

Marlon


From: "dimuthu.upeks...@gmail.com" 
Reply-To: dev 
Date: Sunday, December 16, 2018 at 7:30 AM
To: dev 
Subject: Metascheduler work

Hi Folks,

I found this [1] mail thread and the JIRA ticket [2] which have discussed about 
coming up with an Airavata specific job scheduler. At the end of the 
discussion, seems like an approach based on Mesos has been chosen to tryout. Is 
there any other discussion/ documents regarding this topic? Has anyone worked 
on this and if so, where are the code / design documents?

[1] 
https://markmail.org/message/tdae5y3togyq4duv#query:+page:1+mid:tdae5y3togyq4duv+state:results
[2] https://issues.apache.org/jira/browse/AIRAVATA-1436

Thanks
Dimuthu


Re: Unused modules

2018-11-30 Thread Pamidighantam, Sudhakar V
What is the estimated timeline for enforceable allocation management to be 
available in Airavata, 2019, 2020?

Thanks,
Sudhakar.

From: DImuthu Upeksha 
Reply-To: "dev@airavata.apache.org" 
Date: Friday, November 30, 2018 at 8:30 AM
To: "dev@airavata.apache.org" 
Subject: Re: Unused modules

Hi Suresh,

+1 for removing gfac modules as well

Dimuthu

On Fri, Nov 30, 2018 at 6:32 PM Apache Airavata 
mailto:smarru.apa...@gmail.com>> wrote:
+1 to remove all of them. While you are at it, should we also remove gfac 
modules from develop and staging branches?

Suresh


On Nov 30, 2018, at 6:44 AM, DImuthu Upeksha 
mailto:dimuthu.upeks...@gmail.com>> wrote:
Hi Folks,

I can see that some modules [1] are no longer being used or actively developed.

allocation-manager
cloud
compute-account-provisioning
configuration
db-event-manager
integration-tests
monitoring
security
workflow
workflow-model
xbaya
xbaya-gui

I'm suggesting to remove these unused modules as they affect the build time and 
the clarity of the code. Any objections / suggestions?

[1] https://github.com/apache/airavata/tree/staging/modules

Thanks
Dimuthu



Re: Error installing Airavata+RabbitMQ+Zookeeper

2019-03-11 Thread Pamidighantam, Sudhakar V
Can you try a VM with more memory (not disk space), say 8 GB or so?

Thanks,
Sudhakar.

From: "Achanta, Sai Rohith" 
Reply-To: "dev@airavata.apache.org" 
Date: Monday, March 11, 2019 at 10:20 PM
To: "dev@airavata.apache.org" 
Subject: Error installing Airavata+RabbitMQ+Zookeeper

Hi team,

I’m trying to setup Airavata on my laptop by following this 
link.
 I’m successful till Keycloak configuration. When I’m trying to install 
“Airavata+RabbitMQ+Zookeeper”, I’m facing memory issue with VM when the maven 
build is running. I tried to increase the disk space of VM, but still I see the 
error.

I have attached the screenshot to this e-mail.
Can anyone help me, please.

Thanks and regards,
Sai Rohith Achanta.



Some Data requirements

2019-05-15 Thread Pamidighantam, Sudhakar V
Please see some data needs we are seeing in the current gateways. Some of these 
are handled but several require additional development, integration and 
operational changes.

Specific use cases could/should be documented as well. This may not cover all 
unmet needs, and others are encouraged to add to this as we embark on providing 
first-class data management in Apache Airavata.

Airavata Data Requirements


A. Data Ingestion

  Data for input can be of different types and hierarchies, starting from 1. 
individual parameters, 2. name lists in a simple/small file (in KB), typically 
instructions for the execution, 3. data files which could be large (up to 20 
GB), 4. Directories containing multiple files each (100s GB).

These files can come in many forms  ascii, binary, compressed ( zip, tar) etc..

There may be data from data bases that need to be extracted and presented 
potentially for the user to choose or modify and use further (Ex. Supercrtbl) 
in an experiment as input.

The data from a previous execution (result/restart data) may need to be used to 
restart an experiment along with modified inputs (this routinely happens in 
SEAGrid). In such cases a way to refer to the previous job/experiment and/or 
its data locality is needed.

In the case of workflows with multiple tasks, the independent input data for 
different tasks in the workflow may have to be uploaded upfront and thus needs 
to be labelled appropriately.

In the case of job arrays, data for each of the independent tasks may be 
presented in different hierarchies as folders or compressed sets.

There is a use case where an input segment/field  may have multiple 
files/parameters (file arrays, parameter arrays) associated with that while 
others may have different types (AMPGateway BSR3 application stage1).

Some data may be pre-staged on the remote HPC system (Future water) or brought 
from third party locations/services (Box, Data Repos, Instruments) and 
associated with the experiment.

The web, session and other timeouts need to be tuned for making sure all the 
needed data is transferred in usable condition.

B. Data Validation and Handling

There needs to be a way to validate that all the inputs needed/required are 
available before an experiment/workflow is scheduled. Files transferred should 
be checked for completeness by checksum or other validation (see the sketch 
below). The data needs to be uploaded and organized appropriately for the 
execution on the remote and even intermediate staging areas. If the data needs 
to be staged from third-party locations, or pre-staged data needs to be used, a 
way to verify the data accessibility needs to be provided. Restart data can be 
checked to confirm it contains the right data for a restart. The remote hosts 
may have quotas, and the validation should consider whether there is sufficient 
space to move the data before scheduling the experiment.
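
Purely as an illustration of the checksum idea (the file path and expected 
digest below are made-up placeholders, not anything Airavata-specific), a 
staged file could be verified against a digest recorded before the transfer:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class TransferChecksum {

    // Streams the staged file through SHA-256 (so multi-GB files are fine) and
    // compares the result with the digest recorded before the transfer started.
    public static boolean matches(Path stagedFile, String expectedHex) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(stagedFile)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                sha256.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha256.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString().equalsIgnoreCase(expectedHex);
    }

    public static void main(String[] args) throws Exception {
        // Both values are placeholders for whatever the gateway records at upload time.
        boolean ok = matches(Path.of("/tmp/experimentData/input.tar.gz"),
                "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855");
        System.out.println(ok ? "transfer complete" : "transfer incomplete or corrupted");
    }
}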

C. Data Processing
In some cases, data need to be processed before used in an experiment. 
Uncompressing a zip/tar need to be handled. In some cases,  specific 
preprocessing routines may need to run for the data to be prepared. For some 
cases the data need to be organized for learning by machines. A way to extend 
the extraction of critical attributes from the inputs, experiments and results 
for learning may be very useful.

D. Data Dissemination

Data needs to be provided for the users to monitor personally, or for automatic 
validators and parsers. Once the experiment completes, the parsers should be 
able to pick up the data and complete a post-processing step. Data could be 
large (10s of GB), and a failsafe way to provide this output data will be 
needed. Data may need to be compressed and organized for the additional 
post-processing steps. Users need a way to extract (output) data from multiple 
experiments in bulk to process it through external programs and scripts. This 
requires a way to select a set of experiments and extract their logs/outputs, 
with sufficient warning regarding the size of the resulting download.

E. Data Storage

Data need to be stored for immediate consumption and potential reuse in the 
gateway/or other systems.

F. Data Archive and retrieval

Data need to be archived to tertiary storage device so the primary storage 
service is reused for newer data/experiments. But a way to retrieve the data 
from archival when needed should be in place.

G. Data deletion/hiding
Some data (erroneous, unwanted) needs to be deleted so it does not interfere 
with new experiments or processing. A way to hide/delete based on user choice 
would be useful to provide. Sometimes restart data gets corrupted if a fixed 
checkpoint file is specified, and this needs to be deleted or replaced with the 
immediately previous good copy.

Thanks for your attention.
Sudhakar.


Re: SMILES Proto Schema

2022-06-20 Thread Pamidighantam, Sudhakar V
Bhavesh:


  1.  We need to pick a primary key. The SMILES string could be a good one, but 
it does not absolutely have to be. Also, it is sometimes difficult to 
auto-generate SMILES strings for molecules. We can choose another one, such as 
the name, but then we need to use the name consistently and uniquely across the 
data models.
  2.  The filtering should get all the records, and we can have pagination to 
control how many are shown. Some options in the filters could include ranges, 
for example Absorption Max between 500 and 560 nm (a rough sketch of such a 
filter follows below).
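
For illustration only (the record fields, units and page size here are my 
assumptions, not the agreed proto schema), filtering SMILES-keyed records by an 
absorption-maximum range with simple pagination could look roughly like this:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MoleculeFilterSketch {

    // A stand-in for whatever record the proto schema ends up defining.
    record Molecule(String smiles, String name, double absorptionMaxNm) {}

    // All molecules keyed by their SMILES string (the candidate primary key).
    private final Map<String, Molecule> bySmiles = new LinkedHashMap<>();

    void add(Molecule m) {
        bySmiles.put(m.smiles(), m);
    }

    // Range filter plus pagination; page numbering starts at 0.
    List<Molecule> absorptionMaxBetween(double minNm, double maxNm, int page, int pageSize) {
        return bySmiles.values().stream()
                .filter(m -> m.absorptionMaxNm() >= minNm && m.absorptionMaxNm() <= maxNm)
                .skip((long) page * pageSize)
                .limit(pageSize)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        MoleculeFilterSketch store = new MoleculeFilterSketch();
        // Absorption values below are made up purely for the demo.
        store.add(new Molecule("c1ccccc1", "benzene", 255));
        store.add(new Molecule("CCO", "ethanol", 210));
        store.add(new Molecule("C1=CC2=CC=CC=C2C=C1", "naphthalene", 540));
        // "Absorption Max between 500 and 560 nm", first page of 25 results
        System.out.println(store.absorptionMaxBetween(500, 560, 0, 25));
    }
}

The same min/max and page parameters could be carried in the gRPC request 
message if we go that route.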

Thanks,
Sudhakar.
From: Bhavesh Asanabada 
Date: Monday, June 20, 2022 at 9:16 AM
To: dev@airavata.apache.org 
Subject: Re: SMILES Proto Schema
Hi Sudhakar,

I have a few doubts;

  1.  I don't find the primary key (the SMILES string) in other proto files. Do 
I need to include it and access the data with the SMILES string?
  2.  In the filtering options, Are there any predefined thresholds for the 
quantities?
I also request to have a meeting to confirm with my SMILES query code.

Thanks & Regards
Bhavesh Asanabada


Re: Regarding Airavata Seagrid Rich Client

2022-07-20 Thread Pamidighantam, Sudhakar V
Aishwarya:

Has any code for this been committed to git yet? Could you please point to the 
repo?

Thanks,
Sudhakar.

From: Aishwarya Sinhasane 
Date: Tuesday, June 21, 2022 at 9:21 PM
To: dev@airavata.apache.org 
Subject: Regarding Airavata Seagrid Rich Client
Hello Everyone,

I tried to make a login module in electronJS which is working properly for 
login with credentials as well as CILogon.

The electronJS application loads the django airavata portal so we can access 
all the modules that are already present in the django portal.

I discussed with sudhakar about including other modules such as Application 
Editors and Molecule Editors in application. We came to the conclusion that to 
include these modules in the main menu of the application. Users can access 
these before login and can create molecules and applications using editors and 
once it's ready to create an experiment the user needs to login to the system. 
Also users can login and can access editors.

Currently, I am developing the frontend for the molecule editor nanocad. Also I 
am trying to understand the logic of the molecule editor developed in JavaFX.

If anyone has other suggestions please let me know.

Screenshots are attached below for your reference.


Re: SSH access to the SEAGrid database.

2022-07-13 Thread Pamidighantam, Sudhakar V
Bhavesh:

I could log into bhav...@gridchem.uits.iu.edu fine with my ssh key.

Either your key is bad or something else is wrong. Can you send me another ssh 
public key? Please also send the output of:
ssh -vv bhav...@gridchem.uits.iu.edu
Screen shots are not very useful to debug this.

Thanks,
Sudhakar.

From: Bhavesh Asanabada 
Date: Tuesday, July 12, 2022 at 10:32 PM
To: dev@airavata.apache.org 
Subject: SSH access to the SEAGrid database.
Hi Sudhakar,

In the last meeting, I shared the SSH key to connect to the SEAGrid database 
for the reference models. I tried to connect with the provided routes but it 
throws an error as permission is denied. Could you please confirm the access I 
have?

Thanks
Bhavesh Asanabada


Re: Gaussian16 experiment failure.

2022-07-13 Thread Pamidighantam, Sudhakar V
Usually this happens when there is a network issue. Please retry with a 
different compute resource if Expanse is not reachable for some reason.

Thanks,
Sudhakar.

From: Bhavesh Asanabada 
Date: Wednesday, July 13, 2022 at 12:30 AM
To: dev@airavata.apache.org 
Subject: Gaussian16 experiment failure.
Hi,

Today, I tried to process a random experiment on 
seagrid.org.
 On processing the input file (neopentanediol.inp) in Gaussian16 and launching 
the experiment, it shows the status as a failure. But with the same input file, 
Aishwarya can run the experiment successfully. Please refer to the attached 
file for the experiment failure log and do needful.

Application Configuration Used:

  *   Allocation - Default
  *   Compute Resource - Expanse
Error response
Failed to setup environment of task TASK_b504ee89-7857-418d-af28-c55eb1ca7156

Thanks & Regards
Bhavesh Asanabada


Re: Regarding Airavata Seagrid Rich Client

2022-07-28 Thread Pamidighantam, Sudhakar V
Where should I check for this path?

Thanks,
Sudhakar.

From: Aishwarya Sinhasane 
Date: Thursday, July 28, 2022 at 3:20 PM
To: dev@airavata.apache.org 
Subject: Re: Regarding Airavata Seagrid Rich Client
Hello Sudhakar

It is working for me. This error is mainly because of path difference in OS. 
Can you please try giving path as per your OS requirements. I will make it 
dynamic today n will push the code again.

Thanks & Regards
Aishwarya Sinhasane

On Thu, Jul 28, 2022, 1:01 PM Aishwarya Sinhasane 
mailto:aishwaryasinhas...@gmail.com>> wrote:
Yes sure Sudhakar. I will try to fix it.

Thank you
Aishwarya Sinhasane

On Thu, Jul 28, 2022, 12:57 PM Pamidighantam, Sudhakar V 
mailto:spami...@illinois.edu>> wrote:

Aishwarya:



I am getting this error message when I try to load the JSMol Editor. Can this 
be addressed.



Thanks,

Sudhakar.



npm start



> electron-quick-start@1.0.0 start

> electron .



(node:10220) electron: Failed to load URL: 
file:///Users/spamidig/Library/Applications/airavata-sandbox/gsoc2022/seagrid-rich-client/C:/Users/aishw/gsoc/seagrid-client-electron/airavata-sandbox/gsoc2022/seagrid-rich-client/ui/samplemol.html
 with error: ERR_FILE_NOT_FOUND

(Use `Electron --trace-warnings ...` to show where the warning was created)




From: Aishwarya Sinhasane 
mailto:aishwaryasinhas...@gmail.com>>
Date: Thursday, July 21, 2022 at 3:49 AM
To: dev@airavata.apache.org<mailto:dev@airavata.apache.org> 
mailto:dev@airavata.apache.org>>
Subject: Re: Regarding Airavata Seagrid Rich Client
Hello Sudhakar,

Yes I have committed the code and below is my repository and recent PR:

https://github.com/aishwaryasinhasane/airavata-sandbox/tree/master/gsoc2022/seagrid-rich-client
https://github.com/apache/airavata-sandbox/pull/83

Thanks and Regards
Aishwarya Sinhasane


On Wed, 20 Jul 2022 at 15:33, Pamidighantam, Sudhakar V 
mailto:spami...@illinois.edu>> wrote:
Aishwarya:

Has a code been committed about this to git yet? Could you please point to the 
repo.

Thanks,
Sudhakar.

From: Aishwarya Sinhasane 
mailto:aishwaryasinhas...@gmail.com>>
Date: Tuesday, June 21, 2022 at 9:21 PM
To: dev@airavata.apache.org<mailto:dev@airavata.apache.org> 
mailto:dev@airavata.apache.org>>
Subject: Regarding Airavata Seagrid Rich Client
Hello Everyone,

I tried to make a login module in electronJS which is working properly for 
login with credentials as well as CILogon.

The electronJS application loads the django airavata portal so we can access 
all the modules that are already present in the django portal.

I discussed with sudhakar about including other modules such as Application 
Editors and Molecule Editors in application. We came to the conclusion that to 
include these modules in the main menu of the application. Users can access 
these before login and can create molecules and applications using editors and 
once it's ready to create an experiment the user needs to login to the system. 
Also users can login and can access editors.

Currently, I am developing the frontend for the molecule editor nanocad. Also I 
am trying to understand the logic of the molecule editor developed in JavaFX.

If anyone has other suggestions please let me know.

Screenshots are attached below for your reference.


Re: SMILES Proto Schema

2022-07-28 Thread Pamidighantam, Sudhakar V
Bhavesh:



When I tried to run the Java package, I got this error.



Do you know what could be missing?



Thanks,

Sudhakar.



[INFO] Changes detected - recompiling the module!

[INFO] Compiling 38 source files to 
/Users/spamidig/Library/Applications/smilesdb/airavata-sandbox/gsoc2022/smilesdb/Server/target/classes

[INFO] 

[INFO] BUILD FAILURE

[INFO] 

[INFO] Total time:  16.819 s

[INFO] Finished at: 2022-07-28T16:20:45-04:00

[INFO] 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) 
on project Server: Fatal error compiling: error: invalid target release: 18 -> 
[Help 1]

[ERROR]

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR]

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException


From: Bhavesh Asanabada 
Date: Thursday, June 16, 2022 at 12:16 PM
To: dev@airavata.apache.org 
Subject: SMILES Proto Schema
Hi Sudhakar,
I’ve done with the sample gRPC implementation. As a next step towards the GSoC 
goal, I’m structuring the proto buffer files considering the files you have 
shared before (attached with this mail).

Proceeding with the files, there are a few missing data types in the 
molecule.proto file which I could not figure out with the variables mentioned. 
Can you please confirm the data types? and suggest the additional changes you 
need in the schema.

Thanks
Bhavesh Asanabada


Re: Regarding Airavata Seagrid Rich Client

2022-07-28 Thread Pamidighantam, Sudhakar V
Aishwarya:



I am getting this error message when I try to load the JSMol Editor. Can this 
be addressed.



Thanks,

Sudhakar.



npm start



> electron-quick-start@1.0.0 start

> electron .



(node:10220) electron: Failed to load URL: 
file:///Users/spamidig/Library/Applications/airavata-sandbox/gsoc2022/seagrid-rich-client/C:/Users/aishw/gsoc/seagrid-client-electron/airavata-sandbox/gsoc2022/seagrid-rich-client/ui/samplemol.html
 with error: ERR_FILE_NOT_FOUND

(Use `Electron --trace-warnings ...` to show where the warning was created)




From: Aishwarya Sinhasane 
Date: Thursday, July 21, 2022 at 3:49 AM
To: dev@airavata.apache.org 
Subject: Re: Regarding Airavata Seagrid Rich Client
Hello Sudhakar,

Yes I have committed the code and below is my repository and recent PR:

https://github.com/aishwaryasinhasane/airavata-sandbox/tree/master/gsoc2022/seagrid-rich-client
https://github.com/apache/airavata-sandbox/pull/83

Thanks and Regards
Aishwarya Sinhasane


On Wed, 20 Jul 2022 at 15:33, Pamidighantam, Sudhakar V 
mailto:spami...@illinois.edu>> wrote:
Aishwarya:

Has a code been committed about this to git yet? Could you please point to the 
repo.

Thanks,
Sudhakar.

From: Aishwarya Sinhasane 
mailto:aishwaryasinhas...@gmail.com>>
Date: Tuesday, June 21, 2022 at 9:21 PM
To: dev@airavata.apache.org<mailto:dev@airavata.apache.org> 
mailto:dev@airavata.apache.org>>
Subject: Regarding Airavata Seagrid Rich Client
Hello Everyone,

I tried to make a login module in electronJS which is working properly for 
login with credentials as well as CILogon.

The electronJS application loads the django airavata portal so we can access 
all the modules that are already present in the django portal.

I discussed with sudhakar about including other modules such as Application 
Editors and Molecule Editors in application. We came to the conclusion that to 
include these modules in the main menu of the application. Users can access 
these before login and can create molecules and applications using editors and 
once it's ready to create an experiment the user needs to login to the system. 
Also users can login and can access editors.

Currently, I am developing the frontend for the molecule editor nanocad. Also I 
am trying to understand the logic of the molecule editor developed in JavaFX.

If anyone has other suggestions please let me know.

Screenshots are attached below for your reference.


Re: Regarding Pre and Post job commands in Airavata

2023-01-05 Thread Pamidighantam, Sudhakar V
Dimuthu:

For gateways with specific applications, the application or executable needs to 
be transferred along with the user inputs to the Condor pool, which may require 
unpacking a tarball or other mechanisms and identifying the actual executable 
via a driver script. The driver script is the current executable application in 
the gateway, but the actual executable or a container (the tarball above) also 
needs to be transferred before the execution can be set up. These steps require 
a pre-job script or a way to add additional transfer_input_files instructions 
at the connecting HTCondor host.

Of course, DAGs are quite diverse and can be used for various other use cases, 
such as parameter sweeps with different inputs (which again could be packaged 
in tarballs and need extraction after staging), etc.

Thanks,
Sudhakar.

From: DImuthu Upeksha 
Reply-To: "dev@airavata.apache.org" 
Date: Thursday, January 5, 2023 at 10:14 AM
To: "dev@airavata.apache.org" 
Subject: Re: Regarding Pre and Post job commands in Airavata

Hi Dinuka,

Sorry for the late reply. It is great to explore options to integrate Dag 
capabilities into the job submitter. We already have some form of HTCondor 
support in Airavata. Can you summarize the difference between what we already 
have for HTCondor and this Dag feature? I am specifically looking for practical 
usages instead of technical differences.

Thanks
Dimuthu

On Sun, Jan 1, 2023 at 10:35 AM Dinuka De Silva 
mailto:l.dinukadesi...@gmail.com>> wrote:
Hi,

The current implementation of mapping these (Pre, Post, etc. job commands) to 
the job scheduler script has assumed the job scheduler script to be a type of 
shell script. So, the order of the execution is based on the order of the 
commands listed in the script which is as below.

- Module Commands
- Pre Job Commands
- Job Submitter Command
- Post Job Commands

The scheduler scripts of SLURM, FORK, LSF, UGE, and PBS are shell scripts, while 
HTCondor and maybe some other job schedulers have different file types. The 
script grammar in HTCondor does not support appending shell scripts inside. Now 
we need to support Pre, Post, and other commands for HTCondor, realizing the 
current design doesn't support it.

In HTCondor there's an option [1] to configure pre and post-scripts to be 
executed at the worker instance. But then the script has to be Dag and the 
pre-script, post-script and job-script are to be separate files. So, I tried a 
sample and planning to put this to airavata.

[1] 
https://htcondor.readthedocs.io/en/latest/users-manual/dagman-workflows.html

Thanks & Regards,
Dinuka




Re: Priti Singh GSoC Proposal

2023-04-04 Thread Pamidighantam, Sudhakar V

Priti:

You refer to a Form in the text. Please specify that it is a profile 
information form that the user is required to fill in. For some gateways this 
may be optional. The rest looks reasonable to me.

Thanks,
Sudhakar.



From: Priti Singh 
Date: Tuesday, April 4, 2023 at 2:33 AM
To: dev@airavata.apache.org 
Subject: Priti Singh GSoC Proposal
Hi Team
I have drafted my project proposal for GSoC'23 which involves
modifying the Gateway and User Profile of the Airavata Resource
Allocation Manager. I would appreciate your feedback on whether any
changes are required before I submit it.

Thanks and Regards
Priti


Re: [External] [DISCUSS] New name for MFT

2023-02-01 Thread Pamidighantam, Sudhakar V
Nauka - another one, for carrying (goods and people over water) networks

Thanks,
Sudhakar.

From: Isuru Ranawaka 
Reply-To: "dev@airavata.apache.org" 
Date: Tuesday, January 31, 2023 at 5:43 PM
To: "dev@airavata.apache.org" 
Subject: Re: [External] [DISCUSS] New name for MFT

Great ideas.. few more are


  *   Commando: a pigeon that served with the British armed forces, delivering 
messages securely back and forth 
(https://www.mirror.co.uk/news/uk-news/second-world-war-hero-commando-5387701)
  *   Raven: 
https://gameofthrones.fandom.com/wiki/Raven

On Tue, Jan 31, 2023 at 3:33 PM Christie, Marcus Aaron 
mailto:machr...@iu.edu>> wrote:
Lots of good ideas here. When I think of MFT transferring data, it makes me 
think of rivers. Here's a list of mythological rivers: 
https://en.wikipedia.org/wiki/Category:Mythological_rivers

One of those rivers is Iravati 
(https://en.wikipedia.org/wiki/Iravati),
 who is also the mother of Airavata.


> On Jan 31, 2023, at 1:14 PM, Suresh Marru 
> mailto:sma...@apache.org>> wrote:
>
> Good, we are getting options. Let's gather a few more and then do a trademark 
> search, filter, and vote on the final 3.
>
> Suresh
>
>> On Jan 31, 2023, at 12:59 PM, Thejaka Amila J Kanewala 
>> mailto:thejaka.am...@gmail.com>> wrote:
>>
>> Finding a name is harder than developing the product :D.
>>
>> I think we already have nice suggestions.
>>
>> Here are a few more:
>> 1. Javelin : 
>> https://en.wikipedia.org/wiki/Javelin_throw
>> 2. Catapult -- 
>> https://en.wikipedia.org/wiki/Catapult
>> 3. Dionysius -- they engineered the first version of catapult -- 
>> https://www.hellenicaworld.com/Greece/Technology/en/Catapults.html
>> 4. Trebuchet -- 
>> https://en.wikipedia.org/wiki/Trebuchet
>>
>> As you can see I am a little obsessed with catapults :-).
>>
>> Best Regards,
>> Thejaka Amila Kanewala, PhD
>> https://github.com/thejkane/agm
>> http://valagamba.net/
>>
>>
>> On Tue, Jan 31, 2023 at 6:36 AM Lahiru Jayathilake 
>> mailto:lahirujayathil...@gmail.com>> wrote:
>> Three suggestions from me as well,
>>
>> 1. SkyBridge
>>
>> 2. Hammurabi - a Babylonian king. The "Code of Hammurabi" is considered to 
>> be one of the first written legal codes in history, built around the idea 
>> of a fair and orderly transfer of information. The name conveys stability, 
>> reliability, and order in the transfer of information
>>
>> 3. Akkad - was an ancient empire that was recognized for its advanced 
>> administration and communication systems. The name exhibits stability and 
>> efficiency in the transfer of large-scale data
>>
>> Thanks,
>> Lahiru
>>
>> On Tue, Jan 31, 2023 at 7:48 PM Pierce, Marlon 
>> mailto:marpi...@iu.edu>> wrote:
>> How about a ship theme? Here’s a list of types of ships: 
>> https://en.wikipedia.org/wiki/List_of_ship_types.
>>
>>
>>
>> A couple of ancient types of ships: Kerkouros, Corbita.
>>
>>
>>
>> Marlon
>>
>>
>>
>>
>>
>> From: Suresh Marru mailto:sma...@apache.org>>
>> Date: Monday, January 30, 2023 at 5:59 PM
>> To: Airavata Dev mailto:dev@airavata.apache.org>>
>> Subject: [External] [DISCUSS] New name for MFT
>>

Re: questions regarding dashboards to get quick statistics.

2023-06-04 Thread Pamidighantam, Sudhakar V
Saurav:
While you are at it, please also submit a pull request for the documentation.

Thanks,
Sudhakar.

From: Lahiru Jayathilake 
Date: Sunday, June 4, 2023 at 11:53 AM
To: saurav kumar jha 
Cc: dev@airavata.apache.org , sma...@apache.org 
, Abeysinghe, Eroma 
Subject: Re: questions regarding dashboards to get quick statistics.
Hi Saurav,

The issue is caused by missing account credentials for email-based job 
monitoring. To resolve it, follow the instructions in [1] to create an email 
account, then update the 'email.based.monitor.address' and 
'email.based.monitor.password' properties within the 
modules/distribution/src/main/docker/docker-compose.yml file. That should 
clear the authentication error you are seeing.

[1] - 
https://github.com/apache/airavata/tree/develop/modules/ide-integration#starting-job-monitoring-components
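
For reference, once filled in the two settings look roughly like this (the 
address and password below are placeholders for the monitoring account created 
in [1]; the exact way they are injected through the compose file may differ 
from this key=value form):

email.based.monitor.address=monitoring-account@gmail.com
email.based.monitor.password=app-specific-password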

Cheers!
Lahiru

On Fri, Jun 2, 2023 at 4:44 PM saurav kumar jha 
mailto:imsauravgaurav...@gmail.com>> wrote:
Hi Lahiru,
Thanks a lot for the response.
While trying to run Airavata locally on Ubuntu using this command
 ```docker-compose -f 
modules/ide-integration/src/main/containers/docker-compose.yml -f 
modules/distribution/src/main/docker/docker-compose.yml up```
an email store authentication error comes up. I have pasted the log below. 
I have not configured anything for authentication, nor have I started any 
database separately. All I am doing is running the above command after 
creating a docker image using the steps mentioned in the Airavata README. 
What should I do to fix this?

emailmonitor_1  | 2023-06-02 07:56:34,488 [Thread-0] ERROR 
org.apache.airavata.monitor.email.EmailBasedMonitor {} - [EJM]: Couldn't 
connect to the store
emailmonitor_1  | javax.mail.AuthenticationFailedException: 
[AUTHENTICATIONFAILED] Invalid credentials (Failure)
emailmonitor_1  |   at 
com.sun.mail.imap.IMAPStore.protocolConnect(IMAPStore.java:732) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at javax.mail.Service.connect(Service.java:366) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at javax.mail.Service.connect(Service.java:246) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at 
org.apache.airavata.monitor.email.EmailBasedMonitor.run(EmailBasedMonitor.java:185)
 ~[email-monitor-0.21-SNAPSHOT.jar:0.21-SNAPSHOT]
emailmonitor_1  |   at java.lang.Thread.run(Thread.java:829) ~[?:?]
emailmonitor_1  | 2023-06-02 07:56:34,489 [Thread-0] ERROR 
org.apache.airavata.monitor.email.EmailBasedMonitor {} - [EJM]: Caught a 
throwable while closing email store
emailmonitor_1  | java.lang.NullPointerException: null
emailmonitor_1  |   at 
org.apache.airavata.monitor.email.EmailBasedMonitor.run(EmailBasedMonitor.java:231)
 ~[email-monitor-0.21-SNAPSHOT.jar:0.21-SNAPSHOT]
emailmonitor_1  |   at java.lang.Thread.run(Thread.java:829) ~[?:?]
emailmonitor_1  | 2023-06-02 07:56:35,595 [Thread-0] ERROR 
org.apache.airavata.monitor.email.EmailBasedMonitor {} - [EJM]: Couldn't 
connect to the store
emailmonitor_1  | javax.mail.AuthenticationFailedException: 
[AUTHENTICATIONFAILED] Invalid credentials (Failure)
emailmonitor_1  |   at 
com.sun.mail.imap.IMAPStore.protocolConnect(IMAPStore.java:732) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at javax.mail.Service.connect(Service.java:366) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at javax.mail.Service.connect(Service.java:246) 
~[javax.mail-1.6.2.jar:1.6.2]
emailmonitor_1  |   at 
org.apache.airavata.monitor.email.EmailBasedMonitor.run(EmailBasedMonitor.java:185)
 ~[email-monitor-0.21-SNAPSHOT.jar:0.21-SNAPSHOT]
emailmonitor_1  |   at java.lang.Thread.run(Thread.java:829) ~[?:?]
emailmonitor_1  | 2023-06-02 07:56:35,596 [Thread-0] ERROR 
org.apache.airavata.monitor.email.EmailBasedMonitor {} - [EJM]: Caught a 
throwable while closing email store
emailmonitor_1  | java.lang.NullPointerException: null
emailmonitor_1  |   at 
org.apache.airavata.monitor.email.EmailBasedMonitor.run(EmailBasedMonitor.java:231)
 ~[email-monitor-0.21-SNAPSHOT.jar:0.21-SNAPSHOT]
emailmonitor_1  |   at java.lang.Thread.run(Thread.java:829) ~[?:?]
apiserver_1 | 2023-06-02 07:56:35,753 [main] INFO  
org.apache.airavata.common.utils.ApplicationSettings {} - Settings loaded from 
file:/opt/apache-airavata-api-server/bin/airavata-server.properties
apiserver_1 | Exception in thread "main" 
org.apache.airavata.common.exception.ApplicationSettingsException: 
api.server.monitoring.enabled
apiserver_1 |   at 
org.apache.airavata.common.utils.ApplicationSettings.getSettingImpl(ApplicationSettings.java:196)
apiserver_1 |   at 
org.apache.airavata.common.utils.ApplicationSettings.getBooleanSetting(ApplicationSettings.java:350)
apiserver_1 |   at 

Re: Architecture for Cybershuttle Orchestration App

2023-06-27 Thread Pamidighantam, Sudhakar V
Praneeth,

We have an example in the SEAGrid Electron client that you may want to look 
at. Please see the SharePoint link below, and let me know if you would like a 
quick demo of this.

https://indiana-my.sharepoint.com/:u:/r/personal/pamidigs_iu_edu/Documents/Projects/SEAGrid/SEAGrid_Client.dmg?csf=1=1=vgMcTg


Thanks,
Sudhakar.

From: Praneeth Kumar Chityala 
Date: Monday, June 26, 2023 at 7:40 PM
To: dev@airavata.apache.org , machr...@iu.edu 

Subject: Re: Architecture for Cybershuttle Orchestration App
Dear Marcus and All,

Thank you for the valuable feedback. I did incorporate the services as 
suggested.

Below are the design choices I finally picked, along with a couple of services 
I have already implemented:

  *   For the desktop app I went ahead with the ElectronJS + Vue 3 setup. Since 
ElectronJS is more widely adopted and ElectronJS apps are easier to migrate to 
web apps, it was selected over Tauri.
  *   For the client-side communication I used grpc-web, a newer implementation 
of gRPC that can also be used on the browser side.
  *   Electronjs app code can be accessed with 
https://github.com/cyber-shuttle/cybershuttle-agent/tree/electron/cybershuttle-app

 *   @Marcus As you suggested, I used provide/inject with the userService 
pointing to UserServiceGrpc, which can be updated to other services as needed 
(a minimal sketch of this wiring is included after this list)

*   UserServiceGrpc service: 
https://github.com/cyber-shuttle/cybershuttle-agent/blob/electron/cybershuttle-app/src/api/grpc/auth.js
*   provide grpc service in main: 
https://github.com/cyber-shuttle/cybershuttle-agent/blob/electron/cybershuttle-app/src/main.js#L13
*   injection grpc service in a view: 
https://github.com/cyber-shuttle/cybershuttle-agent/blob/electron/cybershuttle-app/src/components/Signup.vue#L97C1-L97C71

 *   For now I went ahead and built the whole app in JavaScript, since the 
official grpc-web has good support for it.
 *   @Marcus please let me know if this implementation looks good to start 
with.

  *   The server side is built with Spring Boot and Java (server: 
https://github.com/cyber-shuttle/cybershuttle-server/tree/main/app-server ; 
for now it just reads the client's request and responds with static data).
  *   As grpc-web sends requests over HTTP/1 while the gRPC server speaks 
HTTP/2, I used an Envoy proxy server to mediate the communication between the 
client and the server (envoy proxy - 
https://github.com/cyber-shuttle/cybershuttle-server/blob/main/envoy.yaml)
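
As a reference for the wiring above, here is a minimal sketch of the 
provide/inject plus grpc-web setup. The client class, request class, proxy 
URL, and RPC name below are hypothetical stand-ins for whatever the generated 
stubs in the linked repositories actually expose, so treat this as an outline 
rather than the real code:

import { createApp } from 'vue'
import App from './App.vue'
// Hypothetical generated grpc-web stubs; real names come from the project's protos.
import { UserServiceClient } from './api/grpc/generated/auth_grpc_web_pb'
import { SignupRequest } from './api/grpc/generated/auth_pb'

// Single grpc-web client pointed at the Envoy proxy (HTTP/1), which forwards
// the calls to the gRPC server over HTTP/2.
const client = new UserServiceClient('http://localhost:8080')

// Wrap the raw client in a plain service object so the same components can be
// reused in a browser build with a different transport behind the interface.
const userService = {
  signup(username, email) {
    return new Promise((resolve, reject) => {
      const request = new SignupRequest()
      request.setUsername(username)
      request.setEmail(email)
      // grpc-web unary calls take (request, metadata, callback)
      client.signup(request, {}, (err, response) => {
        if (err) reject(err)
        else resolve(response.toObject())
      })
    })
  },
}

const app = createApp(App)
app.provide('userService', userService) // provided once at the app root
app.mount('#app')

// In a component such as Signup.vue:
//   import { inject } from 'vue'
//   const userService = inject('userService')
//   await userService.signup('alice', 'alice@example.org')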

I will be working on building the Appview where users will be able to see 
available applications/projects to launch from the desktop app.

Best,
Praneeth


On Fri, Jun 23, 2023 at 9:27 AM Christie, Marcus Aaron 
mailto:machr...@iu.edu>> wrote:
Hi Praneeth,

This looks good. I've been thinking about how we can reuse UI components that 
are developed for this local app in a web browser-based context. I think a good 
approach is to create a service layer and then provide an implementation for 
those services when running in the desktop app. These implementations will 
communicate with a local gRPC client. But we can also create implementations of 
the same service interfaces that are implemented based on a REST proxy to the 
same gRPC services (or