Re: Improvements to Experiment input data model in order to support Gaussian application

2014-12-11 Thread Shameera Rathnayaka
Hi Amila,

According to my understanding, what this handler does is read the user-given
configuration at run time. I am not sure whether this will affect qsub, aprun,
or other parameters; it would be better if someone could explain that to me too.

We already have a way to provide these configuration parameters with
the experiment itself by defining ComputeResourceScheduling. But there are
some use cases, like Gaussian, where users provide these configurations in
the input file itself. IMO we have two options here: either we ask Gaussian
users to adopt the Airavata way, although those configurations are still
required in the input file for the Gaussian application (I guess, correct me
if I am wrong here), or we use Airavata extension points to support this
scenario. The handler addresses the second option.

Thanks,
Shameera.

On Thu, Dec 11, 2014 at 12:00 AM, Amila Jayasekara thejaka.am...@gmail.com
wrote:

 Also, regarding the handler that Shameera is working on ...
 I guess that handler is going to change mainly qsub parameters or
 aprun parameters (correct me if I am wrong). I think it would be more
 useful to write a handler which can change any parameter in qsub, aprun, or
 mpiexec.

 Implementation-wise, I would imagine an abstract handler with a
 concrete implementation for each job scheduling command.
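 Implementation-wise, such a hierarchy might look roughly like the following
 Java sketch (all class, method, and flag choices here are hypothetical, not
 actual Airavata APIs):

```java
import java.util.Map;

// A rough sketch of the hierarchy described above; each scheduler-specific
// subclass knows how to turn user-supplied overrides (e.g. parsed from an
// input file) into its own submission flags.
abstract class SchedulingParamHandler {
    abstract String toFlags(Map<String, String> overrides);
}

// Concrete handler for PBS-style qsub submissions.
class QsubHandler extends SchedulingParamHandler {
    @Override
    String toFlags(Map<String, String> overrides) {
        StringBuilder flags = new StringBuilder();
        if (overrides.containsKey("memory")) {
            flags.append(" -l mem=").append(overrides.get("memory"));
        }
        if (overrides.containsKey("walltime")) {
            flags.append(" -l walltime=").append(overrides.get("walltime"));
        }
        return flags.toString().trim();
    }
}

// Concrete handler for Cray aprun launches.
class AprunHandler extends SchedulingParamHandler {
    @Override
    String toFlags(Map<String, String> overrides) {
        return overrides.containsKey("cpu") ? "-n " + overrides.get("cpu") : "";
    }
}
```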

 Thanks
 -Amila

 On Wed, Dec 10, 2014 at 9:17 AM, Marlon Pierce marpi...@iu.edu wrote:

 +1 for more generalization.

 We are collecting more raw material for chemistry application use cases
 at https://cwiki.apache.org/confluence/display/AIRAVATA/Use+Cases. We'll
 review them (and bio apps that we also collected previously) in a wiki
 document to see if our API mappings are correct.

 Preliminarily, we see that the command-line arguments don't contain the full
 list of input and output files.  Additional required inputs may be passed
 via control files, environment variables, etc.  Examples include data
 libraries for basis functions, names of checkpoint files, names of output
 files, and so forth.  So we need a way to say that an application may take 4
 inputs but only 1 is needed to construct a valid command line, for example.

 On the other hand, I don't think we need the InputMetadataType that
 Chathuri introduces below. This overlaps with what is already in the
 compute resource description fields.


 Marlon


 On 12/8/14, 10:17 PM, Amila Jayasekara wrote:

 Hi Chathuri,

 I do not know anything about Gaussian, so it's kind of hard for me to
 understand what exactly the structures you introduced mean and why you
 need them.

 A more important question is how to come up with more abstract and
 generic Thrift IDLs so that you don't need to change them every time we add
 a new application. Going through many example applications is certainly a
 good way to understand the broad requirements and helps to abstract out many
 features.

 Thanks
 -Thejaka

 On Mon, Dec 8, 2014 at 10:22 AM, Chathuri Wimalasena 
 kamalas...@gmail.com
 wrote:

  Hi Devs,

 We are trying to add the Gaussian application using airavata-appcatalog.
 While doing that, we came across some limitations of the current design.

 In Gaussian there are several input files; some should be used
 when the job run command is generated, but some should not.  Those which
 are not involved in the job run command still need to be staged to the
 working directory. Such flags are not supported in the current design.

 Another interesting feature of Gaussian is that, in the input file, we can
 specify values for options like memory and CPU. If the input file includes
 those parameters, we need to give priority to those values instead of the
 values specified in the request.
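 For reference, Gaussian takes such settings as Link 0 commands at the top of
 the input file; a minimal illustrative fragment (values invented):

```
%Mem=4GB          ! memory the application will use
%NProcShared=8    ! shared-memory CPU count
#P HF/6-31G(d) Opt
```

 The title section, charge/multiplicity, and geometry follow below the route
 line in a real input file.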

 To support these features, we need to slightly modify our Thrift IDLs,
 especially the InputDataObjectType struct.

 Current struct is below.

 struct InputDataObjectType {
  1: required string name,
  2: optional string value,
  3: optional DataType type,
  4: optional string applicationArgument,
  5: optional bool standardInput = 0,
  6: optional string userFriendlyDescription,
  7: optional string metaData
 }

 In order to support the 1st requirement, we introduce 2 enums.

 enum InputValidityType{
 REQUIRED,
 OPTIONAL
 }

 enum CommandLineType{
 INCLUSIVE,
 EXCLUSIVE
 }
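 To illustrate the intended semantics with a hedged Java sketch (the helper
 classes are stand-ins, since the generated Thrift code is not shown here):
 every input is staged to the working directory, but only INCLUSIVE inputs
 appear on the generated command line.

```java
import java.util.List;
import java.util.stream.Collectors;

enum CommandLineType { INCLUSIVE, EXCLUSIVE }

// Simplified stand-in for the generated InputDataObjectType.
record StagedInput(String name, String value, CommandLineType cmdType) {}

class JobCommandBuilder {
    // Every input is staged elsewhere; only INCLUSIVE inputs
    // contribute to the generated command line.
    static String build(String executable, List<StagedInput> inputs) {
        return executable + inputs.stream()
                .filter(i -> i.cmdType() == CommandLineType.INCLUSIVE)
                .map(i -> " " + i.value())
                .collect(Collectors.joining());
    }
}
```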

 Please excuse the names; you are welcome to suggest better ones.

 To support the 2nd requirement, we change the metaData field to a map keyed
 by another enum, where we define all the metadata types an input can have.

 enum InputMetadataType {
  MEMORY,
  CPU
 }

 So the new InputDataObjectType would be as below.

 struct InputDataObjectType {
  1: required string name,
  2: optional string value,
  3: optional DataType type,
  4: optional string applicationArgument,
  5: optional bool standardInput = 0,
  6: optional string userFriendlyDescription,
  7: optional map<InputMetadataType, string> metaData,
  8: optional InputValidityType inputValid;
 }
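 With that map in place, job-submission code could let input-file values take
 priority over the experiment request. A minimal Java sketch of the intended
 priority rule (class and method names invented for illustration, not actual
 Airavata code):

```java
import java.util.Map;

enum InputMetadataType { MEMORY, CPU }

class SchedulingResolver {
    // The value parsed from the application's input file (metaData)
    // takes priority over the value given in the experiment request.
    static String resolve(Map<InputMetadataType, String> metaData,
                          InputMetadataType key, String requestValue) {
        return metaData.getOrDefault(key, requestValue);
    }
}
```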

Re: Improvements to Experiment input data model in order to support Gaussian application

2014-12-11 Thread Pamidighantam, Sudhakar V
We cannot expect users or applications to change their behavior for Airavata. It is 
up to us to enable applications and users as they are now.
As you have seen, several applications have system input parameters inside a 
master input file; they are used by the application and
also need to be used in scheduling. As I was suggesting, the memory requested for 
scheduling should be higher than what is expected by the application.
Similarly, the time requested for scheduling should be higher than what is given in the 
input, to accommodate cleanup and other post-processing as well.
Some schedulers allow soft and hard limits (admins may or may not enable them), 
and we can think of these pairs of system parameters as
soft and hard memory and time limits.
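One way to picture the soft/hard pairing (the 25% margin and all names here are
purely illustrative): treat the application's own request as the soft limit and
pad it to get the hard limit handed to the scheduler.

```java
class LimitPolicy {
    // Hypothetical padding factor: hard limit = soft limit + 25%,
    // leaving headroom for cleanup and post-processing.
    static final double MARGIN = 1.25;

    // soft = what the application's input file asks for;
    // hard = what we actually request from the scheduler.
    static long hardLimit(long soft) {
        return (long) Math.ceil(soft * MARGIN);
    }
}
```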

Thanks,
Sudhakar.

Re: DataCat Project Progress

2014-12-11 Thread Pamidighantam, Sudhakar V
Supun:
I support these goals. I am available to engage with you, and if there is anything I 
can do to expedite the project, please let me know. Even if you think I cannot, 
please do ask anyway.
I wrote the original parsers in Perl myself and directed others 
when the CUP/JFlex system was put in place. I have generated and modified the 
CUP/JFlex code before, so I am familiar with how it works. I will look at the 
paper and may suggest additions.

I cannot test the system now by adding more data to see whether it can parse the 
new data. How can we get to that point? This is a critical first step for me 
before I can ask friendly users to test this further. The current state is a 
prototype only and is not interesting enough for any of our users. Unless we 
can add parsing of more data, and more salient data, and create products, it is 
difficult to engage end users in any meaningful way.

Perhaps if this is deployed somewhere in Indiana, it may be easier to move 
forward. If you need more data, please let me know where I should place it for 
you to access.

Thanks,
Sudhakar.


On Dec 11, 2014, at 4:11 AM, Supun Nakandala 
supun.nakand...@gmail.commailto:supun.nakand...@gmail.com wrote:

Hi All,

We had the mid-evaluation of the project last Tuesday, and the following 
concerns were raised.

  1.  The lack of visibility of the overall solution in the project 
demonstration.
  2.  The ability to come up with a solution where a scientist who does not have 
a background in computer science can create new parsers (metadata extraction 
logic).

The project was demonstrated using the web interface that we developed. For the 
final evaluation we expect to demonstrate the system using the Laravel PHP 
Reference Gateway running on a production server, and to show how a newly 
generated data product will be identified, indexed, and made available 
for searching; we hope this will address the first issue.

We also had a meeting with Dr. Dilum, our internal supervisor, where we 
identified things that can be done between now and 15th January, the expected 
project completion date:

  1.  Do a proper performance test and publish a paper before the final marks for 
the project are finalized (marks will be finalized by the end of March).
  2.  Get more parsers working, so that Sudhakar can ask more users to use 
the system. This will help to get more feedback on the system and real-world 
usage.
  3.  Implement support for provenance-aware workflow execution in Airavata 
using our system.

We have written a draft paper, which I have attached here. We showed it 
to Dr. Srinath and Dr. Dilum, and they suggested that we do proper performance 
testing (the one already done is not up to the expected standard). Given 
the available time, we need to prioritize our work and select a set of tasks 
that is doable and has the most impact. What do you all think?

Draft Paper: 
https://docs.google.com/document/d/1PLfST6hLygQpsr4RlgiDoffmDEwMOWbmb1WZ0uKTtd8/edit#heading=h.6fjqfavj2nov

Literature Review: 
https://drive.google.com/file/d/0B0cLF-CLa59oaXRBazF1aURvQTg/view?usp=sharing

Supun



Re: DataCat Project Progress

2014-12-11 Thread Supun Nakandala
Hi Sudhakar,

Thank you very much for your support. Suresh gave us a machine
(gridchem.uits.iu.edu) at IU to deploy the server. But unfortunately we are
having some SSH issues when logging in to that server. I will contact Suresh,
get the issue fixed, and deploy an instance of the server there so that
you can configure and test the parsers yourself.

-- 
Thank you
Supun Nakandala
Dept. Computer Science and Engineering
University of Moratuwa


Re: DataCat Project Progress

2014-12-11 Thread Pamidighantam, Sudhakar V
Suresh is traveling. What issues are you facing? I have root access on this 
system; let me see if I can help there.

Thanks,
Sudhakar.




Re: DataCat Project Progress

2014-12-11 Thread Supun Nakandala
Earlier we were able to log in to the system using a private key, but now it
asks for a password. I think our keys have been removed. My public key is:
ssh-rsa B3NzaC1yc2EDAQABAAABAQDH9Lzx8u7Bhi8GIQEBk5a9k6UROa26
OM2QawLSIHqdwwW15C8J493/jmdOsA9MuE4IXR3oVhlhkwJJhvJHap
llasaMGsED7pCltrRgumY8Tp/YKPYnUZCwt7CxzOlDh2dgq7wBn4bgwhC6/FDfYxpOeauhbaY+
rqfo1V8I62pgC8Nmb6iNHKqQBts+QRYrs0FvPwlKqD8fZeYnm8+NVvKi/
R9oDb1uwBGnxymAz1ks0yYtmGn6M5xggQFu+OdfrWOrXpVQh+RzjcGiafrJmHpeAH+
vhX7uxe0bRDqv59iZRCKVDrMN2UDpNH8fBGTLivL2LLNl0IZ08m72hQ5Xum7v
supun.nakand...@gmail.com



Re: DataCat Project Progress

2014-12-11 Thread Marlon Pierce

Please take those details off list.

Marlon







Concerns with Gateway Interface

2014-12-11 Thread Nipurn Doshi
Hi Devs,

I have a couple of concerns that I wanted to share -

   - Queue objects in a Compute Resource do not have a unique primary key.
   Currently, queue names act as the primary key, and a name cannot be changed
   after it is defined once. What can be done about this?
   - An API is needed to get App Deployments by passing an App
   Module Id and a Compute Resource Id. Currently the interface shows all App
   Deployments, and each deployment has to be checked to see which module and
   resource it is connected to.
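The second item is essentially a filtered lookup. A hypothetical Java sketch of
the kind of API being asked for (all names invented for illustration, not an
actual Airavata method):

```java
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-in for an app deployment record.
record Deployment(String appModuleId, String computeResourceId, String deploymentId) {}

class AppCatalogQueries {
    // Hypothetical lookup: return only the deployments matching both ids,
    // instead of listing everything and inspecting each entry by hand.
    static List<Deployment> byModuleAndResource(List<Deployment> all,
                                                String moduleId, String resourceId) {
        return all.stream()
                .filter(d -> d.appModuleId().equals(moduleId)
                          && d.computeResourceId().equals(resourceId))
                .collect(Collectors.toList());
    }
}
```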


-- 
-Regards,
Nipurn Doshi


Re: Concerns with Gateway Interface

2014-12-11 Thread Shameera Rathnayaka
On Fri, Dec 12, 2014 at 12:45 AM, Nipurn Doshi nido...@umail.iu.edu wrote:

 Hi Devs,

 I have a couple of concerns that I wanted to share -

- Queue objects in a Compute Resources do not have a unique primary
key. Currently, queue names act as a primary key and the name cannot be
changed after defining it once. What can be done for this?


Can't we use the computeResource primary key + queue name as a composite
primary key? Queue names can be duplicated across computeResources, but one
computeResource can't have two queues with the same name.



- An API is required that can help to get App Deployments by passing
App Module Id and Compute Resource Id. Currently the interface shows all
App Deployments and each deployment has to be checked out which module and
resource it is connected to.


 --
 -Regards,
 Nipurn Doshi




-- 
Best Regards,
Shameera Rathnayaka.

email: shameera AT apache.org , shameerainfo AT gmail.com
Blog : http://shameerarathnayaka.blogspot.com/


Re: Concerns with Gateway Interface

2014-12-11 Thread Nipurn Doshi
I think that would work. Preventing users from creating queues with the same
name is already handled in the queue-creation validation. This composite
primary key will have to be added to the queue object in the database and repo.





-- 
-Sincerely,
Nipurn Doshi


Re: Concerns with Gateway Interface

2014-12-11 Thread Chathuri Wimalasena
That's how it is at the moment in the app-catalog DB: the BatchQueue table
has a composite primary key of COMPUTE_RESOURCE_ID + QUEUE_NAME.
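A composite key of that shape is typically modeled with a key class whose
equality covers both columns (a sketch only, of the kind a JPA-style @IdClass
mapping expects; the real BatchQueue entity may differ):

```java
import java.io.Serializable;
import java.util.Objects;

// Sketch of a composite-key class pairing COMPUTE_RESOURCE_ID with
// QUEUE_NAME: two queues are the same row only if BOTH fields match, so
// the same queue name can appear under different compute resources.
class BatchQueuePK implements Serializable {
    final String computeResourceId;
    final String queueName;

    BatchQueuePK(String computeResourceId, String queueName) {
        this.computeResourceId = computeResourceId;
        this.queueName = queueName;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof BatchQueuePK)) return false;
        BatchQueuePK other = (BatchQueuePK) o;
        return Objects.equals(computeResourceId, other.computeResourceId)
            && Objects.equals(queueName, other.queueName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(computeResourceId, queueName);
    }
}
```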
