Re: Airavata 0.16 Release Planning

2016-06-03 Thread Mangirish Wagle
Hi Suresh,

Not sure if this qualifies for the new release, but could you please take a
look at the following pull request and see whether it can be made part of the
release?

https://github.com/apache/airavata/pull/34

Thanks & Regards,
Mangirish






On Fri, Jun 3, 2016 at 8:14 AM, Pierce, Marlon  wrote:

> + 1 from me also
>
>
>
> *From: *"Pamidighantam, Sudhakar V" 
> *Reply-To: *"dev@airavata.apache.org" 
> *Date: *Friday, June 3, 2016 at 9:10 AM
> *To: *"dev@airavata.apache.org" 
> *Subject: *Re: Airavata 0.16 Release Planning
>
>
>
> None. +1 for the release.
>
>
>
> Thanks,
>
> Sudhakar.
>
> On Jun 3, 2016, at 6:55 AM, Suresh Marru  wrote:
>
>
>
> I gated the release earlier, but this task was long since done. Any objections
> to moving forward with the 0.16 release?
>
>
>
> Suresh
>
>
>
> On Mar 30, 2016, at 2:53 PM, Suresh Marru  wrote:
>
>
>
> To contradict my proposal, I would like to work on
> https://issues.apache.org/jira/browse/AIRAVATA-1945
> 
>  before
> requesting feature freeze. Should not take long.
>
>
>
> Suresh
>
>
>
> On Mar 28, 2016, at 11:29 AM, Pierce, Marlon  wrote:
>
>
>
> Do we have any outstanding tasks that need to be wrapped up and committed
> to dev?
>
>
>
> *From: *Shameera Rathnayaka 
> *Reply-To: *"dev@airavata.apache.org" 
> *Date: *Monday, March 28, 2016 at 11:20 AM
> *To: *Airavata Dev 
> *Subject: *Re: Airavata 0.16 Release Planning
>
>
>
> +1
>
>
>
> On Mon, Mar 28, 2016 at 10:50 AM Suresh Marru  wrote:
>
> Hi All,
>
> Before we go too far, how about we call a feature freeze and start working
> on the 0.16 release? Unless anyone is in the middle of a development activity,
> how about we target the end of the week to start working on it?
>
> Suresh
>
> --
>
> Shameera Rathnayaka
>
>
>
>
>
>
>


Re: Airavata 0.16 Release Planning

2016-06-03 Thread Mangirish Wagle
Thanks Suresh!




On Fri, Jun 3, 2016 at 9:16 AM, Suresh Marru <sma...@apache.org> wrote:

> Thanks Mangirish for this reminder. Yes we should certainly make this part
> of the release and will add it to “experimental” features.
>
> Suresh
>
> On Jun 3, 2016, at 10:15 AM, Mangirish Wagle <vaglomangir...@gmail.com>
> wrote:
>
> Hi Suresh,
>
> Not sure if this qualifies for the new release, but could you please take
> a look at the following pull request and see whether it can be made part of
> the release?
>
> https://github.com/apache/airavata/pull/34
>
> Thanks & Regards,
> Mangirish
>
>
>
>
>
>
> On Fri, Jun 3, 2016 at 8:14 AM, Pierce, Marlon <marpi...@iu.edu> wrote:
>
>> + 1 from me also
>>
>>
>>
>> *From: *"Pamidighantam, Sudhakar V" <spami...@illinois.edu>
>> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
>> *Date: *Friday, June 3, 2016 at 9:10 AM
>> *To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
>> *Subject: *Re: Airavata 0.16 Release Planning
>>
>>
>>
>> None. +1 for the release.
>>
>>
>>
>> Thanks,
>>
>> Sudhakar.
>>
>> On Jun 3, 2016, at 6:55 AM, Suresh Marru <sma...@apache.org> wrote:
>>
>>
>>
>> I gated the release earlier, but this task was long since done. Any objections
>> to moving forward with the 0.16 release?
>>
>>
>>
>> Suresh
>>
>>
>>
>> On Mar 30, 2016, at 2:53 PM, Suresh Marru <sma...@apache.org> wrote:
>>
>>
>>
>> To contradict my proposal, I would like to work on
>> https://issues.apache.org/jira/browse/AIRAVATA-1945
>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__issues.apache.org_jira_browse_AIRAVATA-2D1945=CwMFAg=8hUWFZcy2Z-Za5rBPlktOQ=7_-LbDwTKOoIiO4P4OLfUTX6lSdjys9jh2AJ7sBl9ag=R2ZVpIpwPyZsEiWW5Vb2qO6Fpy2bxRx2eeUVc32o0ws=HXTGQcfXi-0_JFcA-kExFEiEWO-XBIRLvYpHzs7ZFnQ=>
>>  before
>> requesting feature freeze. Should not take long.
>>
>>
>>
>> Suresh
>>
>>
>>
>> On Mar 28, 2016, at 11:29 AM, Pierce, Marlon <marpi...@iu.edu> wrote:
>>
>>
>>
>> Do we have any outstanding tasks that need to be wrapped up and committed
>> to dev?
>>
>>
>>
>> *From: *Shameera Rathnayaka <shameerai...@gmail.com>
>> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
>> *Date: *Monday, March 28, 2016 at 11:20 AM
>> *To: *Airavata Dev <dev@airavata.apache.org>
>> *Subject: *Re: Airavata 0.16 Release Planning
>>
>>
>>
>> +1
>>
>>
>>
>> On Mon, Mar 28, 2016 at 10:50 AM Suresh Marru <sma...@apache.org> wrote:
>>
>> Hi All,
>>
>> Before we go too far, how about we call a feature freeze and start working
>> on the 0.16 release? Unless anyone is in the middle of a development activity,
>> how about we target the end of the week to start working on it?
>>
>> Suresh
>>
>> --
>>
>> Shameera Rathnayaka
>>
>>
>>
>>
>>
>>
>>
>
>
>


Re: Floating IPs association issue fixed on Jetstream

2016-05-25 Thread Mangirish Wagle
Hello Team,

I was thinking about possible reasons for the floating IP pool flooding we
observed. One likely cause is that the floating IP association logic in
cloud-provisioning always creates a new floating IP in the pool and
associates it with the VM. Even though VM deletion handles removing and
deleting the associated floating IP, there may have been cases where a
machine got a floating IP associated and was then deleted through some other
flow, leaving the floating IP unused in the pool.

I have now modified the code to reuse any existing floating IPs in the
pool instead of requesting a new one every time. The logic now requests a
new floating IP only if the list of available floating IPs is empty, or all
the floating IPs in the list are already associated with VMs.
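
For reference, here is a minimal sketch of that reuse logic, assuming the
Openstack4j compute API used elsewhere in the module (the actual change in
the pull request may be structured differently):

    import java.util.List;

    import org.openstack4j.api.OSClient;
    import org.openstack4j.model.compute.FloatingIP;
    import org.openstack4j.model.compute.Server;

    public class FloatingIPUtil {

        /**
         * Reuses an existing unassociated floating IP from the pool if one is
         * available; otherwise allocates a new one. The pool name is illustrative.
         */
        public static FloatingIP getOrAllocateFloatingIP(OSClient os, String pool) {
            List<? extends FloatingIP> ips = os.compute().floatingIps().list();
            for (FloatingIP ip : ips) {
                // A floating IP with no instance attached is free for reuse.
                if (ip.getInstanceId() == null) {
                    return ip;
                }
            }
            // The list is empty or every IP is associated to a VM, so only
            // now do we request a new floating IP from the pool.
            return os.compute().floatingIps().allocateIP(pool);
        }

        public static void associate(OSClient os, Server server, FloatingIP ip) {
            os.compute().floatingIps().addFloatingIP(server, ip.getFloatingIpAddress());
        }
    }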

I have verified that the Maven build, including the tests, runs
successfully, and I have also tested the code against various possible
scenarios.

The link to the pull request for this change is:-

https://github.com/apache/airavata/pull/34

Please review and consider merging this change into the codebase, so that
we have better floating IP management.

Please let me know if you have any comments/ suggestions.

Regards,
Mangirish


On Thu, May 12, 2016 at 7:09 PM, Suresh Marru <sma...@apache.org> wrote:

> Very nice summary, thanks for shepherding through the issue and following
> up on the list Mangirish.
>
> Suresh
>
> On May 12, 2016, at 4:56 PM, Mangirish Wagle <vaglomangir...@gmail.com>
> wrote:
>
> Hello,
>
> Dropping this mail for the team's awareness about a network issue faced on
> Jetstream Openstack.
>
> Pankaj and I were facing a problem with association of Floating IPs to the
> VMs provisioned on Jetstream using scigap credentials, and thus the VMs
> could not be accessed publicly.
>
> We also noticed further that the Network Topology in the Horizon UI
> refused to load.
>
> After following up on this issue with Mike Lowe on Jetstream Slack
> channel, it was realized that it was possibly because of a firewall rule
> induced by some security update which blocked some traffic on the compute
> nodes.
>
> The issue was then resolved by Mike and the topology loaded fine.
>
> Further, I noticed that the router configuration did not have an interface
> for airavata network to be connected to public network. I added the
> interface back and now the floating IP association seems to work fine.
>
> Thanks and Regards,
> Mangirish Wagle
>
>
>
>
>
>
>


Issues and Suggestions on Testdrive portal

2016-02-12 Thread Mangirish Wagle
Hello Airavata Developers,

I am a student in the Science Gateway Architecture course and would like to
share some issues and suggestions I noticed with the testdrive portal.

Following are some of the issues/ bugs and improvements:-

*Issues/ bugs:-*

Issue with user registration:- When I first registered for an account on
the testdrive portal, the registration page reported success, but when I
tried logging in it reported an invalid username or password. Eventually a
support team member had to reset the password from the backend to grant me
access.

Typo in Airavata description on testdrive homepage:- On the Airavata
testdrive homepage, there is a misplaced space in the first line of the
Apache Airavata description: "Apache Airavata is a* softwar e*framework"
(screenshot attached: testdrive_airavata.png).

*Suggestions:-*

Input file validations for experiments required:- In the case of the WRF
application, there are no input file validations, which means one can try
uploading very large files through multiple requests and possibly jam the
bandwidth and network. Also, I tried launching an experiment with the WRF
application with the input files interchanged (Experiment ID:
TestRandom_91e49639-5b92-4740-b2a4-dcacc2dcc2a5). The job failed as
expected, but it produced a segmentation fault that exposed the locations
of some core libraries on the Stampede supercomputer in the stderr file
(attached: WRF.stderr), which is a potential security risk.

Indicate availability of the supercomputing resources:- The portal could
show users whether the supercomputing resources are available, i.e., ready
to accept jobs or down for maintenance.

Thank you.

Best Regards,

Mangirish Wagle


Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-04-11 Thread Mangirish Wagle
Hello,

I have created a new pull request for the cloud-provisioning project after
making all the changes suggested during today's code review meeting with
Suresh and Shameera. Following is the link:-
https://github.com/apache/airavata/pull/31

Also, for the team's awareness, we have managed to configure a new network
topology in the Jetstream Openstack cloud. The name of the network is
"airavata" and it is connected to the "public" network using a router. This
now enables us to provision instances and associate publicly accessible
floating IPs so that they can be reached (over SSH) from the Internet.
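
For anyone who wants to recreate this topology programmatically rather than
through Horizon, here is a rough sketch using the Openstack4j networking
API; the network names match the ones above, but the subnet CIDR and other
details are illustrative assumptions:

    import org.openstack4j.api.Builders;
    import org.openstack4j.api.OSClient;
    import org.openstack4j.model.network.AttachInterfaceType;
    import org.openstack4j.model.network.IPVersionType;
    import org.openstack4j.model.network.Network;
    import org.openstack4j.model.network.Router;
    import org.openstack4j.model.network.Subnet;

    public class AiravataNetworkSetup {

        public static void setup(OSClient os, String publicNetworkId) {
            // Private network that the provisioned instances attach to.
            Network airavata = os.networking().network()
                    .create(Builders.network().name("airavata").adminStateUp(true).build());

            // Illustrative CIDR; the actual subnet range on Jetstream may differ.
            Subnet subnet = os.networking().subnet()
                    .create(Builders.subnet()
                            .name("airavata-subnet")
                            .networkId(airavata.getId())
                            .ipVersion(IPVersionType.V4)
                            .cidr("10.0.0.0/24")
                            .build());

            // Router with its gateway on the public network and an interface
            // on the airavata subnet, so floating IPs are reachable externally.
            Router router = os.networking().router()
                    .create(Builders.router()
                            .name("airavata-router")
                            .externalGateway(publicNetworkId)
                            .build());
            os.networking().router()
                    .attachInterface(router.getId(), AttachInterfaceType.SUBNET, subnet.getId());
        }
    }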

Thanks.

Best Regards,
Mangirish

On Wed, Apr 6, 2016 at 12:08 AM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Hello,
>
> I have managed to put together a Cloud Interface project as an initial POC
> with utility functions to create and delete servers. I have created a common
> cloud interface which has been implemented for Openstack clouds using
> Openstack4j.
>
> A Maven build has been set up for the project, and a sample unit test has
> been added to test and demonstrate a server create (with an associated
> keypair) and delete operation on Jetstream Openstack using scigap
> credentials. A README file added to the project contains the steps to
> test-run the project.
>
> The current code does not handle the network setup required to make the
> created virtual machines accessible over the public network. I shall work
> on getting this done as soon as I find some time outside my academic
> activities and schedule.
>
> I have created the following pull request for the current code from my
> forked repo to the Airavata repo:-
>
> https://github.com/apache/airavata/pull/30
>
> You may please review and let me know your comments.
>
> Thanks.
>
> Best Regards,
> Mangirish
>
>
> On Thu, Mar 24, 2016 at 9:42 PM, Suresh Marru <sma...@apache.org> wrote:
>
>> Hi Mangirish,
>>
>> Yes now I noticed the scaling within the heat section. Yes it makes sense
>> to leave it behind the orchestration layer not to re-invent that logic.
>>
>> Airavata Orchestrator will be the natural plan to call the provisioning
>> service and bootstrap the mesos cluster.  The ansible I referred to are not
>> yet contributed into the repo. I am cc’ing Pankaj and Renan who can
>> probably make that contribution. You can read about their effort in
>> http://onlinelibrary.wiley.com/doi/10.1002/cpe.3708/full
>>
>> Renan,
>>
>> Mangirish is proposing a project to programmatically interact with Cloud
>> Interfaces (like Open Stack on Jetstream) and provision resources. I would
>> assume then the component you have developed will take over and bootstrap
>> the mesos cluster which GFac can then submit jobs to (through Aurora).
>>
>> Suresh
>>
>>
>> On Mar 24, 2016, at 9:14 PM, Mangirish Wagle <vaglomangir...@gmail.com>
>> wrote:
>>
>> Hello,
>>
>> I was trying to understand the end result flow of the Airavata with Cloud
>> Orchestrator and had the following question:-
>>
>> Once the cluster has been setup, as we discussed, an ansible or some
> configuration management tool would bootstrap and configure Mesos. Which
>> component in Airavata would host and call the ansible script and what event
>> would trigger it?
>>
>> Thanks.
>>
>> Regards,
>> Mangirish
>>
>> On Thu, Mar 24, 2016 at 9:07 PM, Mangirish Wagle <
>> vaglomangir...@gmail.com> wrote:
>>
>>> Thanks for your feedback Suresh!
>>>
>>> I have mentioned about the Autoscaling in the Heat Orchestration
>>> solution, which does the dynamic scaling of resources in an existing cloud.
>>> Please let me know if you think that needs to be restructured.
>>>
>>> Also, I have updated the Google doc and Wiki with the revised proposal,
>>> after making changes as per Marlon's review comments.
>>>
>>> I request you to please review again and check if there is anything that
> still needs to be revised.
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Mangirish
>>>
>>> On Thu, Mar 24, 2016 at 7:18 PM, Suresh Marru <sma...@apache.org> wrote:
>>>
>>>> Hi Mangirish,
>>>>
>>>> Your proposal has all the required good detail. One optional addition
>>>> you can clarify on if you can expand or contract resources to a previously
>>>> provisioned cloud.
>>>>
>>>> Suresh
>>>>
>>>> On Mar 23, 2016, at 9:10 PM, Mangirish Wagle <

Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-04-05 Thread Mangirish Wagle
Hello,

I have managed to put together a Cloud Interface project as an initial POC
with utility functions to create and delete servers. I have created a common
cloud interface which has been implemented for Openstack clouds using
Openstack4j.

A Maven build has been set up for the project, and a sample unit test has
been added to test and demonstrate a server create (with an associated
keypair) and delete operation on Jetstream Openstack using scigap
credentials. A README file added to the project contains the steps to
test-run the project.
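
As a rough illustration of the shape of this POC (the interface and method
names here are sketches, not necessarily the module's actual API), the
common interface and its Openstack4j-backed implementation look something
like:

    import org.openstack4j.api.Builders;
    import org.openstack4j.api.OSClient;
    import org.openstack4j.model.compute.Server;
    import org.openstack4j.model.compute.ServerCreate;

    /** Provider-neutral operations; an Openstack implementation follows. */
    interface CloudInterface {
        Server createServer(String name, String imageId, String flavorId, String keyPairName);
        void deleteServer(String serverId);
    }

    class OpenstackCloud implements CloudInterface {
        private final OSClient os; // authenticated client, e.g. from OSFactory

        OpenstackCloud(OSClient os) {
            this.os = os;
        }

        @Override
        public Server createServer(String name, String imageId, String flavorId,
                                   String keyPairName) {
            // Boot a server with the given image, flavor, and associated keypair.
            ServerCreate sc = Builders.server()
                    .name(name)
                    .image(imageId)
                    .flavor(flavorId)
                    .keypairName(keyPairName)
                    .build();
            return os.compute().servers().boot(sc);
        }

        @Override
        public void deleteServer(String serverId) {
            os.compute().servers().delete(serverId);
        }
    }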

The current code does not handle the network setup required to make the
created virtual machines accessible over the public network. I shall work on
getting this done as soon as I find some time outside my academic activities
and schedule.

I have created the following pull request for the current code from my
forked repo to the Airavata repo:-

https://github.com/apache/airavata/pull/30

You may please review and let me know your comments.

Thanks.

Best Regards,
Mangirish


On Thu, Mar 24, 2016 at 9:42 PM, Suresh Marru <sma...@apache.org> wrote:

> Hi Mangirish,
>
> Yes now I noticed the scaling within the heat section. Yes it makes sense
> to leave it behind the orchestration layer not to re-invent that logic.
>
> Airavata Orchestrator will be the natural plan to call the provisioning
> service and bootstrap the mesos cluster.  The ansible I referred to are not
> yet contributed into the repo. I am cc’ing Pankaj and Renan who can
> probably make that contribution. You can read about their effort in
> http://onlinelibrary.wiley.com/doi/10.1002/cpe.3708/full
>
> Renan,
>
> Mangirish is proposing a project to programmatically interact with Cloud
> Interfaces (like Open Stack on Jetstream) and provision resources. I would
> assume then the component you have developed will take over and bootstrap
> the mesos cluster which GFac can then submit jobs to (through Aurora).
>
> Suresh
>
>
> On Mar 24, 2016, at 9:14 PM, Mangirish Wagle <vaglomangir...@gmail.com>
> wrote:
>
> Hello,
>
> I was trying to understand the end result flow of the Airavata with Cloud
> Orchestrator and had the following question:-
>
> Once the cluster has been setup, as we discussed, an ansible or some
> configuration management tool would bootstrap and configure Mesos. Which
> component in Airavata would host and call the ansible script and what event
> would trigger it?
>
> Thanks.
>
> Regards,
> Mangirish
>
> On Thu, Mar 24, 2016 at 9:07 PM, Mangirish Wagle <vaglomangir...@gmail.com
> > wrote:
>
>> Thanks for your feedback Suresh!
>>
>> I have mentioned about the Autoscaling in the Heat Orchestration
>> solution, which does the dynamic scaling of resources in an existing cloud.
>> Please let me know if you think that needs to be restructured.
>>
>> Also, I have updated the Google doc and Wiki with the revised proposal,
>> after making changes as per Marlon's review comments.
>>
>> I request you to please review again and check if there is anything that
>> still needs to be revised.
>>
>> Thank you!
>>
>> Regards,
>> Mangirish
>>
>> On Thu, Mar 24, 2016 at 7:18 PM, Suresh Marru <sma...@apache.org> wrote:
>>
>>> Hi Mangirish,
>>>
>>> Your proposal has all the required good detail. One optional addition
>>> you can clarify on if you can expand or contract resources to a previously
>>> provisioned cloud.
>>>
>>> Suresh
>>>
>>> On Mar 23, 2016, at 9:10 PM, Mangirish Wagle <vaglomangir...@gmail.com>
>>> wrote:
>>>
>>> Thanks Shameera for the info and sharing the JIRA Epic details.
>>>
>>> I have drafted my GSOC Proposal for the project and I request you to
>>> please review the same:-
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/AIRAVATA/GSOC+Proposal-+Cloud+Based+Clusters+for+Apache+Airavata
>>>
>>> I shall submit this on the GSOC portal by tomorrow, once I get my
>>> enrollment verification proof.
>>>
>>> Regards,
>>> Mangirish
>>>
>>>
>>>
>>> On Wed, Mar 23, 2016 at 12:29 PM, Shameera Rathnayaka <
>>> shameerai...@gmail.com> wrote:
>>>
>>>> Hi Mangirish,
>>>>
>> Yes, your above understanding is right. GFac is like a task executor which
>> executes whatever task is given by the Orchestrator.
>>>>
>>>> Here is the epic https://issues.apache.org/jira/browse/AIRAVATA-1924,
>>>> Open stack integration is part of this epic, you can create a new top level
>>>>

[GSOC Proposal] Cloud based clusters for Apache Airavata

2016-03-19 Thread Mangirish Wagle
Hello Dev Team,

I had the opportunity to interact with Suresh and Shameera, wherein we
discussed an open requirement in Airavata to be addressed. The requirement
is to expand the capabilities of Apache Airavata to submit jobs to
cloud-based clusters in addition to HPC/HTC clusters.

The idea is to dynamically provision a cloud cluster in an environment like
Jetstream, based on the configuration figured out by Airavata, and have it
operated by distributed-system management software like Mesos. The initial
high-level goals would be:-

   1. Airavata categorizes certain jobs to be run on cloud-based clusters
   and figures out the required hardware config for the cluster.
   2. The proposed service would provision the cluster with the required
   resources.
   3. An ansible script would configure a Mesos cluster with the
   provisioned resources.
   4. Airavata submits the job to the Mesos cluster.
   5. Mesos then figures out the efficient resource allocation within the
   cluster, runs the job, and fetches the result.
   6. The cluster is then deprovisioned automatically when not in use.

The project would mainly focus on points 2 and 6 above.
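
To make those two goals concrete, the provisioning service could expose
something along these lines (a sketch only; all names and types are
placeholders, not a settled design):

    /**
     * Hypothetical service boundary for the project: provision a cluster
     * sized to Airavata's requirements (goal 2) and tear it down when no
     * longer in use (goal 6). Configuring Mesos on the provisioned nodes
     * would be left to the ansible script (goal 3).
     */
    public interface ClusterProvisioningService {

        /** Spin up the requested nodes and return a handle to the cluster. */
        ClusterHandle provision(int nodeCount, int cpusPerNode, int memoryMbPerNode);

        /** Release all resources of a previously provisioned cluster. */
        void deprovision(ClusterHandle cluster);

        /** Placeholder handle; a real design would carry node addresses, etc. */
        interface ClusterHandle {
            String getClusterId();
        }
    }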

To start with, I am currently trying to get a working prototype of setting
up compute nodes in an Openstack environment using JClouds (targeted for
Jetstream). I am also planning to explore the option of using the Openstack
Heat engine to orchestrate the cluster. However, going ahead, Airavata would
support other clouds like Amazon EC2 or the Comet cluster, so we need a
generic solution for achieving the goal.

Another approach, which might be more efficient in terms of performance and
time, is using container-based clouds (Docker, Kubernetes), which would have
substantially less bootstrap time than cloud VMs. This is a future prospect,
as not all the clusters may support containerization.

This has been considered as a potential GSOC project, and I will be working
on drafting a proposal based on this idea.

Any inputs/ comments/ suggestions would be very helpful.

Best Regards,
Mangirish Wagle


Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-03-23 Thread Mangirish Wagle
Thanks Marlon for the info. So what I gather is that the Orchestrator would
decide whether a job needs to be submitted to a cloud-based cluster and
route it to GFAC, which would have a separate interface to the cloud cluster
service.

Also, I wanted to know if there is a Story/Epic created in JIRA for this
project which I can use to create and track tasks. If not, can I create one?

Thanks.

Regards,
Mangirish

On Wed, Mar 23, 2016 at 12:01 PM, Pierce, Marlon <marpi...@iu.edu> wrote:

> The Application Factory component is called “gfac” in the code base.  This
> is the part that handles the interfacing to the remote resource (most often
> by ssh but other providers exist). The Orchestrator routes jobs to GFAC
> instances.
>
> From: Mangirish Wagle <vaglomangir...@gmail.com>
> Reply-To: "dev@airavata.apache.org" <dev@airavata.apache.org>
> Date: Wednesday, March 23, 2016 at 11:56 AM
> To: "dev@airavata.apache.org" <dev@airavata.apache.org>
> Subject: Re: [GSOC Proposal] Cloud based clusters for Apache Airavata
>
> Hello Team,
>
> I was drafting the GSOC proposal and I just had a quick question about the
> integration of the project with Apache Airavata.
>
> Which is the component in Airavata that would call the service to
> provision the cloud cluster?
>
> I am looking at the Airavata architecture diagram and my understanding is
> that this would be treated as a new Application and would have a separate
> application interface in 'Application Factory' component. Also the workflow
> orchestrator would be having the intelligence to figure out which jobs to
> be submitted to cloud based clusters.
>
> Please let me know whether my understanding is correct.
>
> Thank you.
>
> Best Regards,
> Mangirish Wagle
>
> On Tue, Mar 22, 2016 at 2:28 PM, Pierce, Marlon <marpi...@iu.edu> wrote:
>
>> Hi Mangirish, please add your proposal to the GSOC 2016 site.
>>
>> From: Mangirish Wagle <vaglomangir...@gmail.com>
>> Reply-To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>> Date: Thursday, March 17, 2016 at 3:35 PM
>> To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>> Subject: [GSOC Proposal] Cloud based clusters for Apache Airavata
>>
>> Hello Dev Team,
>>
>> I had the opportunity to interact with Suresh and Shameera wherein we
>> discussed an open requirement in Airavata to be addressed. The requirement
>> is to expand the capabilities of Apache Airavata to submit jobs to cloud
>> based clusters in addition to HPC/ HTC clusters.
>>
>> The idea is to dynamically provision a cloud cluster in an environment
>> like Jetstream, based on the configuration figured out by Airavata, which
>> would be operated by a distributed system management software like Mesos.
>> The initial high-level goals would be:-
>>
>>1. Airavata categorizes certain jobs to be run on cloud-based
>>clusters and figures out the required hardware config for the cluster.
>>2. The proposed service would provision the cluster with the required
>>resources.
>>3. An ansible script would configure a Mesos cluster with the
>>resources provisioned.
>>4. Airavata submits the job to the Mesos cluster.
>>5. Mesos then figures out the efficient resource allocation within
>>the cluster and runs the job and fetches the result.
>>6. The cluster is then deprovisioned automatically when not in use.
>>
>> The project would mainly focus on points 2 and 6 above.
>>
>> To start with, I am currently trying to get a working prototype of
>> setting up compute nodes on an openstack environment using JClouds
>> (Targeted for Jetstream). Also, I am planning to explore the option of
>> using Openstack Heat engine to orchestrate the cluster. However, going
>> ahead Airavata would be supporting other clouds like Amazon EC2 or Comet
>> cluster, so we need to have a generic solution for achieving the goal.
>>
>> Another approach which might be efficient in terms of performance and
>> time is using a container based clouds using Docker, Kubernetes which would
>> have substantially less bootstrap time compared to cloud VMs. This would be
>> a future prospect as we may not have all the clusters supporting
>> containerization.
>>
>> This has been considered as a potential GSOC project and I would be
>> working on drafting a proposal on this idea.
>>
>> Any inputs/ comments/ suggestions would be very helpful.
>>
>> Best Regards,
>> Mangirish Wagle
>>
>
>


Re: Re: Note to potential GSOC students

2016-03-23 Thread Mangirish Wagle
Hi Marlon,

I have submitted the first draft of my proposal as per your guidelines to
ASF with Apache Airavata in the title, in the GSOC portal.
Please let me know if you can see my draft.

Thank you.

Regards,
Mangirish Wagle

On Wed, Mar 23, 2016 at 11:49 AM, Jatin Balodhi <mywork.ja...@gmail.com>
wrote:

> There's an option for an "Apache Software Foundation proposal tag"; what
> should I select there?
>
> Thanks
> Jatin
>
>  Forwarded Message 
> Subject: Re: Note to potential GSOC students
> Date: Wed, 23 Mar 2016 15:36:58 +
> From: Pierce, Marlon <marpi...@iu.edu> <marpi...@iu.edu>
> To: Jatin Balodhi <mywork.ja...@gmail.com> <mywork.ja...@gmail.com>
>
> Hi Jatin, you have not yet started your application to GSOC program, as
> far as I can tell.  I do not see your application in the project listings.
> Do this first. Make sure Apache Airavata is in your title.
>
> Marlon
>
>
> From: Jatin Balodhi <mywork.ja...@gmail.com>
> Reply-To: " <dev@airavata.apache.org>dev@airavata.apache.org" <
> dev@airavata.apache.org>
> Date: Wednesday, March 23, 2016 at 11:17 AM
> To: " <dev@airavata.apache.org>dev@airavata.apache.org" <
> dev@airavata.apache.org>
> Subject: Re: Note to potential GSOC students
>
> Hi Marlon,
>
> I made some changes to my GSOC proposal as you said, can you look at it
> once more?
>
> Thanks
> Jatin
>
> On Wednesday 23 March 2016 07:52 PM, Pierce, Marlon wrote:
>
> Please make sure you have started the proposal submission process
> correctly in the GSOC site,  <https://summerofcode.withgoogle.com/>
> https://summerofcode.withgoogle.com/.  I see proposal drafts for only
> about half of those students who have expressed interest.
>
> Thanks,
>
> Marlon
>
>
>
>
>


Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-03-24 Thread Mangirish Wagle
Thanks for your feedback Suresh!

I have mentioned Autoscaling in the Heat Orchestration solution, which
handles dynamic scaling of resources in an existing cloud. Please let me
know if you think that needs to be restructured.

Also, I have updated the Google doc and Wiki with the revised proposal,
after making changes as per Marlon's review comments.

I request you to please review again and check if there is anything that
still needs to be revised.

Thank you!

Regards,
Mangirish

On Thu, Mar 24, 2016 at 7:18 PM, Suresh Marru <sma...@apache.org> wrote:

> Hi Mangirish,
>
> Your proposal has all the required good detail. One optional addition you
> can clarify on if you can expand or contract resources to a previously
> provisioned cloud.
>
> Suresh
>
> On Mar 23, 2016, at 9:10 PM, Mangirish Wagle <vaglomangir...@gmail.com>
> wrote:
>
> Thanks Shameera for the info and sharing the JIRA Epic details.
>
> I have drafted my GSOC Proposal for the project and I request you to
> please review the same:-
>
>
> https://cwiki.apache.org/confluence/display/AIRAVATA/GSOC+Proposal-+Cloud+Based+Clusters+for+Apache+Airavata
>
> I shall submit this on the GSOC portal by tomorrow, once I get my
> enrollment verification proof.
>
> Regards,
> Mangirish
>
>
>
> On Wed, Mar 23, 2016 at 12:29 PM, Shameera Rathnayaka <
> shameerai...@gmail.com> wrote:
>
>> Hi Mangirish,
>>
>> Yes, your above understanding is right. GFac is like a task executor which
>> executes whatever task is given by the Orchestrator.
>>
>> Here is the epic https://issues.apache.org/jira/browse/AIRAVATA-1924,
>> Open stack integration is part of this epic, you can create a new top level
>> jira ticket and create subtask under that ticket.
>>
>> Regards,
>> Shameera.
>>
>> On Wed, Mar 23, 2016 at 12:20 PM Mangirish Wagle <
>> vaglomangir...@gmail.com> wrote:
>>
>>> Thanks Marlon for the info. So what I get is that the Orchestrator would
>>> decide if the job needs to be submitted to cloud based cluster and route it
>>> to GFAC which would have a separate interfacing with the cloud cluster
>>> service.
>>>
>>> Also I wanted to know if there is any Story/ Epic created in JIRA for
>>> this project which I can use to create and track tasks? If not can I create
>>> one?
>>>
>>> Thanks.
>>>
>>> Regards,
>>> Mangirish
>>>
>>> On Wed, Mar 23, 2016 at 12:01 PM, Pierce, Marlon <marpi...@iu.edu>
>>> wrote:
>>>
>>>> The Application Factory component is called “gfac” in the code base.
>>>> This is the part that handles the interfacing to the remote resource (most
>>>> often by ssh but other providers exist). The Orchestrator routes jobs to
>>>> GFAC instances.
>>>>
>>>> From: Mangirish Wagle <vaglomangir...@gmail.com>
>>>> Reply-To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>>>> Date: Wednesday, March 23, 2016 at 11:56 AM
>>>> To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>>>> Subject: Re: [GSOC Proposal] Cloud based clusters for Apache Airavata
>>>>
>>>> Hello Team,
>>>>
>>>> I was drafting the GSOC proposal and I just had a quick question about
>>>> the integration of the project with Apache Airavata.
>>>>
>>>> Which is the component in Airavata that would call the service to
>>>> provision the cloud cluster?
>>>>
>>>> I am looking at the Airavata architecture diagram and my understanding
>>>> is that this would be treated as a new Application and would have a
>>>> separate application interface in 'Application Factory' component. Also the
>>>> workflow orchestrator would be having the intelligence to figure out which
>>>> jobs to be submitted to cloud based clusters.
>>>>
>>>> Please let me know whether my understanding is correct.
>>>>
>>>> Thank you.
>>>>
>>>> Best Regards,
>>>> Mangirish Wagle
>>>>
>>>> On Tue, Mar 22, 2016 at 2:28 PM, Pierce, Marlon <marpi...@iu.edu>
>>>> wrote:
>>>>
>>>>> Hi Mangirish, please add your proposal to the GSOC 2016 site.
>>>>>
>>>>> From: Mangirish Wagle <vaglomangir...@gmail.com>
>>>>> Reply-To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>>>>> Date: Thursday, March 17, 2016 at 3:35 PM
>

Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-03-24 Thread Mangirish Wagle
Hello,

I was trying to understand the end-to-end flow of Airavata with the Cloud
Orchestrator and had the following question:-

Once the cluster has been set up, as we discussed, ansible or some other
configuration management tool would bootstrap and configure Mesos. Which
component in Airavata would host and call the ansible script, and what event
would trigger it?

Thanks.

Regards,
Mangirish

On Thu, Mar 24, 2016 at 9:07 PM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Thanks for your feedback Suresh!
>
> I have mentioned about the Autoscaling in the Heat Orchestration solution,
> which does the dynamic scaling of resources in an existing cloud. Please
> let me know if you think that needs to be restructured.
>
> Also, I have updated the Google doc and Wiki with the revised proposal,
> after making changes as per Marlon's review comments.
>
> I request you to please review again and check if there is anything that
> still needs to be revised.
>
> Thank you!
>
> Regards,
> Mangirish
>
> On Thu, Mar 24, 2016 at 7:18 PM, Suresh Marru <sma...@apache.org> wrote:
>
>> Hi Mangirish,
>>
>> Your proposal has all the required good detail. One optional addition you
>> can clarify on if you can expand or contract resources to a previously
>> provisioned cloud.
>>
>> Suresh
>>
>> On Mar 23, 2016, at 9:10 PM, Mangirish Wagle <vaglomangir...@gmail.com>
>> wrote:
>>
>> Thanks Shameera for the info and sharing the JIRA Epic details.
>>
>> I have drafted my GSOC Proposal for the project and I request you to
>> please review the same:-
>>
>>
>> https://cwiki.apache.org/confluence/display/AIRAVATA/GSOC+Proposal-+Cloud+Based+Clusters+for+Apache+Airavata
>>
>> I shall submit this on the GSOC portal by tomorrow, once I get my
>> enrollment verification proof.
>>
>> Regards,
>> Mangirish
>>
>>
>>
>> On Wed, Mar 23, 2016 at 12:29 PM, Shameera Rathnayaka <
>> shameerai...@gmail.com> wrote:
>>
>>> Hi Mangirish,
>>>
>>> Yes, your above understanding is right. GFac is like a task executor which
>>> executes whatever task is given by the Orchestrator.
>>>
>>> Here is the epic https://issues.apache.org/jira/browse/AIRAVATA-1924,
>>> Open stack integration is part of this epic, you can create a new top level
>>> jira ticket and create subtask under that ticket.
>>>
>>> Regards,
>>> Shameera.
>>>
>>> On Wed, Mar 23, 2016 at 12:20 PM Mangirish Wagle <
>>> vaglomangir...@gmail.com> wrote:
>>>
>>>> Thanks Marlon for the info. So what I get is that the Orchestrator
>>>> would decide if the job needs to be submitted to cloud based cluster and
>>>> route it to GFAC which would have a separate interfacing with the cloud
>>>> cluster service.
>>>>
>>>> Also I wanted to know if there is any Story/ Epic created in JIRA for
>>>> this project which I can use to create and track tasks? If not can I create
>>>> one?
>>>>
>>>> Thanks.
>>>>
>>>> Regards,
>>>> Mangirish
>>>>
>>>> On Wed, Mar 23, 2016 at 12:01 PM, Pierce, Marlon <marpi...@iu.edu>
>>>> wrote:
>>>>
>>>>> The Application Factory component is called “gfac” in the code base.
>>>>> This is the part that handles the interfacing to the remote resource (most
>>>>> often by ssh but other providers exist). The Orchestrator routes jobs to
>>>>> GFAC instances.
>>>>>
>>>>> From: Mangirish Wagle <vaglomangir...@gmail.com>
>>>>> Reply-To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>>>>> Date: Wednesday, March 23, 2016 at 11:56 AM
>>>>> To: "dev@airavata.apache.org" <dev@airavata.apache.org>
>>>>> Subject: Re: [GSOC Proposal] Cloud based clusters for Apache Airavata
>>>>>
>>>>> Hello Team,
>>>>>
>>>>> I was drafting the GSOC proposal and I just had a quick question about
>>>>> the integration of the project with Apache Airavata.
>>>>>
>>>>> Which is the component in Airavata that would call the service to
>>>>> provision the cloud cluster?
>>>>>
>>>>> I am looking at the Airavata architecture diagram and my understanding
>>>>> is that this would be treated as a new Application and would have a
>>>>> separate application

Floating IPs association issue fixed on Jetstream

2016-05-12 Thread Mangirish Wagle
Hello,

Dropping this mail for the team's awareness about a network issue faced on
Jetstream Openstack.

Pankaj and I were facing a problem with association of Floating IPs to the
VMs provisioned on Jetstream using scigap credentials, and thus the VMs
could not be accessed publicly.

We further noticed that the Network Topology in the Horizon UI refused to
load.

After following up on this issue with Mike Lowe on the Jetstream Slack
channel, we realized it was likely due to a firewall rule, introduced by
some security update, that blocked some traffic on the compute nodes.

The issue was then resolved by Mike and the topology loaded fine.

Further, I noticed that the router configuration did not have an interface
connecting the airavata network to the public network. I added the
interface back, and the floating IP association now works fine.

Thanks and Regards,
Mangirish Wagle


Re: [GSOC Proposal] Cloud based clusters for Apache Airavata

2016-04-15 Thread Mangirish Wagle
Hello Team,

I have created a new pull request with the changes that I added today:-
https://github.com/apache/airavata/pull/32

Following are the main changes added with this request:-
1) Added a method to the Cloud Interface to associate a floating IP (a
rough sketch follows below). The floating IP will also get deallocated when
the server instance is deleted.
2) Changed the methods to use the network name instead of the network ID,
read from the properties, for better readability.
3) Added log statements in the Interface implementation for OpenStack
(Jetstream).
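
Here is a minimal sketch of change (1), again assuming the Openstack4j
compute API; the actual method names and structure in the pull request may
differ:

    import org.openstack4j.api.OSClient;
    import org.openstack4j.model.compute.FloatingIP;
    import org.openstack4j.model.compute.Server;

    public class ServerLifecycleSketch {

        /** Allocates a floating IP from the pool and associates it with the server. */
        public static FloatingIP associateFloatingIP(OSClient os, Server server, String pool) {
            FloatingIP ip = os.compute().floatingIps().allocateIP(pool);
            os.compute().floatingIps().addFloatingIP(server, ip.getFloatingIpAddress());
            return ip;
        }

        /** Deletes the server and deallocates its floating IP, per change (1). */
        public static void deleteServer(OSClient os, String serverId, FloatingIP ip) {
            os.compute().servers().delete(serverId);
            if (ip != null) {
                os.compute().floatingIps().deallocateIP(ip.getId());
            }
        }
    }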

Thanks.

Regards,
Mangirish

On Tue, Apr 12, 2016 at 12:56 AM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Hello,
>
> I have created a new pull request for cloud-provisioning project after
> making all the changes suggested during the code review conducted today
> during the meeting with Suresh and Shameera. Following is the link:-
> https://github.com/apache/airavata/pull/31
>
> Also, for the team's awareness, we have managed to configure a new network
> topology in the Jetstream Openstack cloud. The name of the network is
> "airavata" and it is connected to the "public" network using a router. This
> now enables us to provision instances and associate publicly accessible
> floating IPs so that they are accessible (over ssh) from Internet.
>
> Thanks.
>
> Best Regards,
> Mangirish
>
> On Wed, Apr 6, 2016 at 12:08 AM, Mangirish Wagle <vaglomangir...@gmail.com
> > wrote:
>
>> Hello,
>>
>> I have managed to put together a Cloud Interface project as initial POC
>> with utility functions to create, delete servers. I have created a common
>> cloud interface which has been implemented for Openstack Clouds using
>> Openstack4j.
>>
>> A maven build has been setup for the project and a sample unit test has
>> been added to the project to test and demonstrate a server create with
>> associated keypair and delete operation on Jetstream Openstack using scigap
>> credentials. A README file added to the project contains the steps to
>> test-run the project.
>>
>> The current code does not handle the network setup that is required to
>> make the virtual machines created, accessible over the public network. I
>> shall work on getting this done as soon as I find some time out of my
>> academic activities and schedule.
>>
>> I have created following pull request for the current code from my forked
>> repo to Airavata repo:-
>>
>> https://github.com/apache/airavata/pull/30
>>
>> You may please review and let me know your comments.
>>
>> Thanks.
>>
>> Best Regards,
>> Mangirish
>>
>>
>> On Thu, Mar 24, 2016 at 9:42 PM, Suresh Marru <sma...@apache.org> wrote:
>>
>>> Hi Mangirish,
>>>
>>> Yes now I noticed the scaling within the heat section. Yes it makes
>>> sense to leave it behind the orchestration layer not to re-invent that
>>> logic.
>>>
>>> Airavata Orchestrator will be the natural plan to call the provisioning
>>> service and bootstrap the mesos cluster.  The ansible I referred to are not
>>> yet contributed into the repo. I am cc’ing Pankaj and Renan who can
>>> probably make that contribution. You can read about their effort in
>>> http://onlinelibrary.wiley.com/doi/10.1002/cpe.3708/full
>>>
>>> Renan,
>>>
>>> Mangirish is proposing a project to programmatically interact with Cloud
>>> Interfaces (like Open Stack on Jetstream) and provision resources. I would
>>> assume then the component you have developed will take over and bootstrap
>>> the mesos cluster which GFac can then submit jobs to (through Aurora).
>>>
>>> Suresh
>>>
>>>
>>> On Mar 24, 2016, at 9:14 PM, Mangirish Wagle <vaglomangir...@gmail.com>
>>> wrote:
>>>
>>> Hello,
>>>
>>> I was trying to understand the end result flow of the Airavata with
>>> Cloud Orchestrator and had the following question:-
>>>
>>> Once the cluster has been setup, as we discussed, an ansible or some
>>> configuration management tool would bootstrap and configure Mesos. Which
>>> component in Airavata would host and call the ansible script and what event
>>> would trigger it?
>>>
>>> Thanks.
>>>
>>> Regards,
>>> Mangirish
>>>
>>> On Thu, Mar 24, 2016 at 9:07 PM, Mangirish Wagle <
>>> vaglomangir...@gmail.com> wrote:
>>>
>>>> Thanks for your feedback Suresh!
>>>>
>>>> I have mentioned about the Autoscaling in the Heat Orchestration
>>

Re: Jetstream VM creation through Airavata

2016-04-20 Thread Mangirish Wagle
Hi Pankaj,

You can find sample test code for the module in this unit test file:-

https://github.com/apache/airavata/blob/develop/modules/cloud/cloud-provisioning/src/test/java/org/apache/airavata/cloud/test/CloudIntfTest.java

Also, if you want to test-run the code, please follow this quick
README:-

https://github.com/apache/airavata/blob/develop/modules/cloud/cloud-provisioning/README

Please let me know if you need more info or a code walkthrough. I am
sending you the Jetstream openrc credentials separately.

Thanks.

Regards,
Mangirish

On Wed, Apr 20, 2016 at 11:42 AM, Suresh Marru  wrote:

> Hi Pankaj,
>
> Please switch to ‘develop’ branch and look for the cloud provisioning
> module -
> https://github.com/apache/airavata/tree/develop/modules/cloud/cloud-provisioning
>
> Suresh
>
> On Apr 20, 2016, at 11:37 AM, Pankaj Saha  wrote:
>
> Hello Mangirish,
> I have the latest Airavata master branch installed on my local system. Can
> you please give us some clue how to start creating the VMs through your
> application? Please specify where we can find your corresponding Java code.
> Is it through the PGA website?
> Please share the required password with me in a separate email.
>
> Thanks
> Pankaj
>
>
>


Running MPI jobs on Mesos based clusters

2016-09-21 Thread Mangirish Wagle
Hello All,

For everybody's awareness, I would like to post about the study I am
undertaking this fall, i.e., evaluating various frameworks that would
facilitate MPI jobs on Mesos-based clusters for Apache Airavata.

Some of the options that I am looking at are:-

   1. MPI support framework bundled with Mesos
   2. Apache Aurora
   3. Marathon
   4. Chronos

Some of the evaluation criteria on which I am planning to base my
investigation are:-

   - Ease of setup
   - Documentation
   - Reliability features like HA
   - Scaling and Fault recovery
   - Performance
   - Community Support

Gourav and Shameera are working on ansible-based automation to spin up a
Mesos-based cluster, and I am planning to use it to set up a cluster for
experimentation.

Any suggestions or information about prior work on this would be highly
appreciated.

Thank you.

Best Regards,
Mangirish Wagle


Re: Running MPI jobs on Mesos based clusters

2016-09-27 Thread Mangirish Wagle
Hello Devs,

Thanks Gourav and Shameera for all the work w.r.t. setting up the
Mesos-Marathon cluster on Jetstream.

I am currently evaluating MPICH (http://www.mpich.org/about/overview/) for
launching MPI jobs on top of Mesos. MPICH version 1.2 supports Mesos-based
MPI scheduling. I have also been trying to submit jobs to the cluster
through Marathon. However, in either case I am currently facing issues
which I am working to resolve.

I am compiling my notes into the following Google doc. Please review and
let me know your comments and suggestions.

https://docs.google.com/document/d/1p_Y4Zd4I4lgt264IHspXJli3la25y6bcPcmrTD6nR8g/edit?usp=sharing

Thanks and Regards,
Mangirish Wagle



On Wed, Sep 21, 2016 at 3:20 PM, Shenoy, Gourav Ganesh <goshe...@indiana.edu
> wrote:

> Hi Mangirish,
>
>
>
> I have set up a Mesos-Marathon cluster for you on Jetstream. I will share
> with you with the cluster details in a separate email. Kindly note that
> there are 3 masters & 2 slaves in this cluster.
>
>
>
> I am also working on automating this process for Jetstream (similar to
> Shameera’s ansible script for EC2) and when that is ready, we can create
> clusters or add/remove slave machines from the cluster.
>
>
>
> Thanks and Regards,
>
> Gourav Shenoy
>
>
>
> *From: *Mangirish Wagle <vaglomangir...@gmail.com>
> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
> *Date: *Wednesday, September 21, 2016 at 2:36 PM
> *To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
> *Subject: *Running MPI jobs on Mesos based clusters
>
>
>
> Hello All,
>
>
>
> I would like to post for everybody's awareness about the study that I am
> undertaking this fall, i.e. to evaluate various different frameworks that
> would facilitate MPI jobs on Mesos based clusters for Apache Airavata.
>
>
>
> Some of the options that I am looking at are:-
>
>1. MPI support framework bundled with Mesos
>2. Apache Aurora
>3. Marathon
>4. Chronos
>
> Some of the evaluation criteria that I am planning to base my
> investigation are:-
>
>- Ease of setup
>- Documentation
>- Reliability features like HA
>- Scaling and Fault recovery
>- Performance
>- Community Support
>
> Gourav and Shameera are working on ansible based automation to spin up a
> mesos based cluster and I am planning to use it to setup a cluster for
> experimentation.
>
>
>
> Any suggestions or information about prior work on this would be highly
> appreciated.
>
>
>
> Thank you.
>
>
>
> Best Regards,
>
> Mangirish Wagle
>
>


Re: Running MPI jobs on Mesos based clusters

2016-10-17 Thread Mangirish Wagle
Hello Devs,

Here is an update on some new learnings and thoughts based on my
interactions with Mesos and Aurora devs.

MPI implementations in the Mesos repositories (like MPI Hydra) rely on
obsolete MPI platforms and are no longer supported by the developer
community. Hence it is not recommended that we use them for our purpose.

One of the known ways of running MPI jobs over Mesos is "gang scheduling",
which basically distributes the MPI run over multiple Mesos jobs in place
of multiple nodes. The challenge here is that the jobs need to be scheduled
as one task, and any job that errors should collectively error out the main
program, including all the distributed jobs.

One of the Mesos developers (Niklas Nielsen) pointed me to his work on
gang scheduling: https://github.com/nqn. This code may not be fully tested,
but it is certainly a good starting point for exploring gang scheduling.

One of the Aurora developers (Stephen Erb) suggests using gang scheduling
on top of Aurora. The Aurora scheduler assumes that every job is
independent; hence, there would be a need to develop some external
scaffolding to coordinate and schedule these jobs, which might not be
trivial. One advantage of using Aurora as a backend for gang scheduling is
that we would inherit the robustness of Aurora, which would otherwise be a
key challenge if targeting bare Mesos.

As an alternative to all the options above, I think we should probably be
able to run a one-node MPI job through Aurora. A resource offer with CPUs
and memory from Mesos is abstracted as a single runtime but is mapped to
multiple nodes underneath, which eventually would exploit distributed
resource capabilities.

I intend to try out the one-node MPI job submission approach first and
simultaneously explore the gang scheduling approach.

Please let me know your thoughts/ suggestions.

Best Regards,
Mangirish



On Thu, Oct 13, 2016 at 12:39 PM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Hi Marlon,
> Thanks for confirming and sharing the legal link.
>
> -Mangirish
>
> On Thu, Oct 13, 2016 at 12:13 PM, Pierce, Marlon <marpi...@iu.edu> wrote:
>
>> BSD is ok: https://www.apache.org/legal/resolved.
>>
>>
>>
>> *From: *Mangirish Wagle <vaglomangir...@gmail.com>
>> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
>> *Date: *Thursday, October 13, 2016 at 12:03 PM
>> *To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
>> *Subject: *Re: Running MPI jobs on Mesos based clusters
>>
>>
>>
>> Hello Devs,
>>
>> I needed some advice on the license of the MPI libraries. The MPICH
>> library that I have been trying claims to have a "BSD Like" license (
>> http://git.mpich.org/mpich.git/blob/HEAD:/COPYRIGHT).
>>
>> I am aware that OpenMPI which uses BSD license is currently used in our
>> application. I had chosen to start investigating MPICH because it claims to
>> be a highly portable and high quality implementation of latest MPI
>> standard, suitable to cloud based clusters.
>>
>> If anyone could please advise on the acceptance of the MPICH library's
>> "BSD Like" license for ASF, that would help.
>>
>> Thank you.
>>
>> Best Regards,
>>
>> Mangirish Wagle
>>
>>
>>
>> On Thu, Oct 6, 2016 at 1:48 AM, Mangirish Wagle <vaglomangir...@gmail.com>
>> wrote:
>>
>> Hello Devs,
>>
>>
>>
>> The network issue mentioned above now stands resolved. The problem was
>> that iptables had some conflicting rules which blocked the traffic. It
>> was resolved by a simple iptables flush.
>>
>>
>>
>> Here is the test MPI program running on multiple machines:-
>>
>>
>>
>> [centos@mesos-slave-1 ~]$ mpiexec -f machinefile -n 2 ./mpitest
>>
>> Hello world!  I am process number: 0 on host mesos-slave-1
>>
>> Hello world!  I am process number: 1 on host mesos-slave-2
>>
>>
>>
>> The next step is to try invoking this through a framework like Marathon.
>> However, the job submission still does not run through Marathon. It seems
>> to get stuck in the 'waiting' state forever (for example
>> http://149.165.170.245:8080/ui/#/apps/%2Fmaw-try). Further, I notice
>> that Marathon is listed under 'inactive frameworks' in the Mesos dashboard
>> (http://149.165.171.33:5050/#/frameworks).
>>
>>
>>
>> I am trying to get this working, though any help/ clues with this would
>> be really helpful.
>>
>>
>>
>> Thanks and Regards,
>>
>> Mangirish Wagle
>>
>>
>>
>>
>> On Fri, Sep 30, 2016 at 9:21 PM, Mangirish Wagle <
>> vaglomangir...@gmail.com> w

Re: mesos and moving jobs between clusters

2016-10-25 Thread Mangirish Wagle
Hi Mark,

Thanks for your question. So if I understand you correctly, you need a kind
of load balancing between identical clusters through a single Mesos master?

With the current setup, from what I understand, we have a separate Mesos
master for every cluster on separate clouds. However, whether we can have a
single Mesos master targeting multiple identical clusters is a good topic to
investigate. We have some ongoing work to use a virtual cluster setup with
compute resources across clouds to install Mesos, but I am not sure if that
is what you are looking for.

Regards,
Mangirish




On Tue, Oct 25, 2016 at 11:05 AM, Miller, Mark  wrote:

> Hi all,
>
>
>
> I posed a question to Suresh (see below), and he asked me to put this
> question on the dev list.
>
> So here it is. I will be grateful for any comments about the issues you
> all are facing, and what has come up in trying this, as it seems likely
> that this is a much simpler problem in concept than it is in practice, but
> its solution has many benefits.
>
>
>
> Here is my question:
>
> A group of us have been discussing how we might simplify submitting jobs
> to different compute resources in our current implementation of CIPRES, and
> how cloud computing might facilitate this. But none of us are cloud
> experts. As I understand it, the mesos cluster that I have been seeing in
> the Airavata email threads is intended to make it possible to deploy jobs
> to multiple virtual clusters. I am (we are) wondering if Mesos manages
> submissions to identical virtual clusters on multiple machines, and if that
> works efficiently.
>
>
>
> In our implementation, we have to change the rules to run efficiently on
> different machines, according to gpu availability, and cores per node. I am
> wondering how Mesos/ virtual clusters affect those considerations.
>
> Can mesos create basically identical virtual clusters independent of
> machine?
>
>
> Thanks for any advice.
>
>
>
> Mark
>
>
>
>
>
>
>
>
>


Re: Running MPI jobs on Mesos based clusters

2016-10-21 Thread Mangirish Wagle
Hello Devs,

I was able to run a basic single-node MPI program on the Mesos cluster on
EC2 using the OpenMPI library through Aurora.

Link to the Mesos Sandbox of the MPI Job
<http://52.91.23.81:5050/#/agents/8d8ad711-1a0f-410e-840d-6190173c69ca-S0/browse?path=%2Fvar%2Flib%2Fmesos%2Fslaves%2F8d8ad711-1a0f-410e-840d-6190173c69ca-S0%2Fframeworks%2Fa257854f-3f3c-462c-8edb-c7b9dc3c79f5-%2Fexecutors%2Fthermos-centos-devel-mpi_test-0-a73d0de0-e232-4b27-8ebd-c42e57a38253%2Fruns%2F393f692c-1876-4c54-90bc-bf58a67cf1ae%2Fsandbox>

Link to the outputs of the job
<http://52.91.23.81:5050/#/agents/8d8ad711-1a0f-410e-840d-6190173c69ca-S0/browse?path=%2Fvar%2Flib%2Fmesos%2Fslaves%2F8d8ad711-1a0f-410e-840d-6190173c69ca-S0%2Fframeworks%2Fa257854f-3f3c-462c-8edb-c7b9dc3c79f5-%2Fexecutors%2Fthermos-centos-devel-mpi_test-0-a73d0de0-e232-4b27-8ebd-c42e57a38253%2Fruns%2F393f692c-1876-4c54-90bc-bf58a67cf1ae%2Fsandbox%2F.logs%2Ftest_mpi%2F0>

Following are the details of the steps I took to run the MPI program.


   1. For running MPI, the Mesos slaves need to be equipped with an MPI
   library. I installed OpenMPI 2.0.1 on the slaves using a quick
   installation script that I created (*openmpi_install.sh*).
   2. Prerequisite setup on the Mesos slave:-
  - I used a sample test MPI C program and compiled it using the mpicc
  compiler provided by OpenMPI. I have attached the code file
  '*mpi_test.c*' that I used with this email (*Reference:
  https://hpcc.usc.edu/support/documentation/examples-of-mpi-programs*).
  - The 'mpirun' tool provided by OpenMPI requires a machine host file
  that specifies the list of hosts to run the jobs on. I used a host file
  with just one 'localhost' entry, targeting single-node MPI execution
  local to the target slave on which Aurora would run the job.
   3. The next step is to launch an Aurora job calling mpirun on the
   compiled binary of the C program.
   4. I created an Aurora config file '*mpi_test.aurora*' which had steps
   to copy the binary and machine host file inside the execution container
   and call mpirun.
   5. The job was then submitted using the aurora command line client:-
  - # aurora job create example/centos/devel/mpi_test mpi_test.aurora


*Further improvements:-*

   - We could have a shared file system between the masters and slaves using
   NFS/SSHFS, which could be used to share the MPI executables, avoiding the
   manual copy in the steps above.
   - The slave configuration described above could be automated through
   ansible.


*Further work I would want to focus on w.r.t gang scheduling:-*
Multiple nodes could be mimicked by launching multiple Aurora processes
using separate containers. But the key issues that need to be addressed
are:-

   1. We need a reliable way of inter-container communication for the
   parallel processes.
   2. We need to figure out a reliable technique for the external
   scaffolding required to synchronize the parallel processes (a rough
   sketch of this idea follows below).
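
To make the scaffolding idea in point 2 slightly more concrete, here is a
very rough sketch (entirely hypothetical, not an existing Airavata or
Aurora component) that launches one Aurora job per mimicked node by
shelling out to the aurora CLI and collectively aborts the gang if any
member fails to submit:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    /**
     * Hypothetical gang launcher: submits N Aurora jobs for one MPI run and
     * kills all of them if any submission fails. Inter-container
     * communication between the running processes (issue 1 above) is not
     * addressed here.
     */
    public class GangLauncher {

        public static void launch(String jobKeyPrefix, String configFile, int gangSize)
                throws IOException, InterruptedException {
            List<String> submitted = new ArrayList<>();
            for (int i = 0; i < gangSize; i++) {
                // e.g. example/centos/devel/mpi_gang-0 for the first member
                String jobKey = jobKeyPrefix + "-" + i;
                int exit = run("aurora", "job", "create", jobKey, configFile);
                if (exit != 0) {
                    // One member failed to start: error out the whole gang.
                    for (String key : submitted) {
                        run("aurora", "job", "killall", key);
                    }
                    throw new IOException("Gang member " + jobKey + " failed; gang aborted");
                }
                submitted.add(jobKey);
            }
        }

        private static int run(String... cmd) throws IOException, InterruptedException {
            return new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }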


Any thoughts/ suggestions would be highly appreciated.

Best Regards,
Mangirish


On Thu, Oct 20, 2016 at 11:53 PM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Thanks Gourav for sharing the information. I may need your help ramping
> up quickly on using Aurora on the Mesos cluster and exploring its
> capabilities more.
>
> Hi Suresh,
>
> Thanks for bringing that up. I did notice that repository earlier. It is
> maintained by the same developer whom I am in touch over emails from Mesos
> team. He did not specifically say anything about mesos-slurm repo in his
> earlier emails, rather recommended looking at the GaSc repo. I observed
> that the code is almost the same as the main slurm repo code (
> https://github.com/SchedMD/slurm). The readme instructions are not
> specific to mesos. Nonetheless, I have dropped Niklas an email asking him
> if there has been some mesos specific customization in this repo. It would
> be interesting to know if/ how he has played around with it over mesos.
> I shall keep updating about the info that I get from him on the dev list.
>
> Regards,
> Mangirish
>
> On Thu, Oct 20, 2016 at 11:19 PM, Suresh Marru <sma...@apache.org> wrote:
>
>> Hi Gourav, Mangirish,
>>
>> Did you checkout SLURM on Mesos - https://github.com/nqn/slurm-mesos
>>
>> Note that this is GPL licensed code and incompatible with ASL V2. It does
>> not preclude from using it, but need to watch out when integrating
>> incompatible licensed codes.
>>
>> Suresh
>>
>> On Oct 20, 2016, at 10:26 PM, Shenoy, Gourav Ganesh <goshe...@indiana.edu>
>> wrote:
>>
>> Hi Mangirish, devs:
>>
>> The Aurora documentation for “Tasks” & “Processes” provides very good
>> information which I felt 

Re: Running MPI jobs on Mesos based clusters

2016-10-20 Thread Mangirish Wagle
Thanks Gourav for sharing the information. I may need your help ramping up
quickly on using Aurora on the Mesos cluster and exploring its capabilities
more.

Hi Suresh,

Thanks for bringing that up. I did notice that repository earlier. It is
maintained by the same developer from the Mesos team with whom I am in touch
over email. He did not specifically say anything about the mesos-slurm repo
in his earlier emails, but rather recommended looking at the GaSc repo. I
observed that the code is almost the same as the main slurm repo code (
https://github.com/SchedMD/slurm). The README instructions are not specific
to Mesos. Nonetheless, I have dropped Niklas an email asking him if there
has been some Mesos-specific customization in this repo. It would be
interesting to know if/how he has played around with it over Mesos.
I shall keep the dev list updated with the info I get from him.

Regards,
Mangirish

On Thu, Oct 20, 2016 at 11:19 PM, Suresh Marru <sma...@apache.org> wrote:

> Hi Gourav, Mangirish,
>
> Did you checkout SLURM on Mesos - https://github.com/nqn/slurm-mesos
>
> Note that this is GPL licensed code and incompatible with ASL V2. It does
> not preclude us from using it, but we need to watch out when integrating
> incompatibly licensed code.
>
> Suresh
>
> On Oct 20, 2016, at 10:26 PM, Shenoy, Gourav Ganesh <goshe...@indiana.edu>
> wrote:
>
> Hi Mangirish, devs:
>
> The Aurora documentation for “Tasks” & “Processes” provides very good
> information which I felt would be helpful in implementing gang scheduling,
> as you mentioned.
>
> http://aurora.apache.org/documentation/latest/reference/configuration/
>
>
> From what I understood, there are these constraints:
> 1.   If targeting single-node (multi-core) MPI, then a “JOB” will be
> broken down into multiple “PROCESSES”, each of which will run on one of
> these cores.
> 2.   Even if *any one* of these processes fails, then the JOB should
> be marked as failed.
>
> As mentioned in my earlier email, Aurora provides Job abstraction – “a job
> consists of multiple tasks, which in turn consist of multiple processes”.
> This abstraction comes in extremely handy if we want to run MPI jobs on a
> single node.
>
> While submitting a job to Aurora, we can control the following parameters
> for a TASK:
>
> a.   “max_failures” for a TASK – the number of failed processes needed
> to mark a task as failed. Hence if we set max_failures = 1, then even if
> a single process in a task fails, Aurora will mark that task as failed.
> *Note*: Since a JOB can have multiple tasks, and a JOB likewise has a
> “max_task_failures” parameter, we can set this to 1 as well.
>
> b.   “max_concurrency” for a TASK – number of processes to run in
> parallel. If a node has 16 cores, then we can limit the amount of
> parallelism to <=16.
>
> I did not get much time to experiment with these parameters for job
> submission, but found this document to be handy and worth sharing. Hope
> this helps!
>
> Thanks and Regards,
> Gourav Shenoy
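
For illustration, a minimal sketch of an Aurora Task using the
max_failures and max_concurrency parameters described above, assuming the
Python-based .aurora DSL; the worker command, process names, and resource
sizes are assumptions, not tested values:

  # 16 independent single-core worker processes for a 16-core node
  workers = [Process(name = 'worker_%d' % i,
                     cmdline = './mpitest')   # assumed worker command
             for i in range(16)]

  mpi_task = Task(
      name = 'mpi_workers',
      processes = workers,
      max_concurrency = 16,  # run at most 16 processes in parallel
      max_failures = 1,      # one failed process fails the whole task
      resources = Resources(cpu = 16, ram = 8*GB, disk = 1*GB))

  jobs = [Job(cluster = 'example', role = 'centos', environment = 'devel',
              name = 'mpi_workers', task = mpi_task,
              max_task_failures = 1)]  # one failed task fails the job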
>
> *From: *Mangirish Wagle <vaglomangir...@gmail.com>
> *Reply-To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
> *Date: *Tuesday, October 18, 2016 at 11:48 AM
> *To: *"dev@airavata.apache.org" <dev@airavata.apache.org>
> *Subject: *Re: Running MPI jobs on Mesos based clusters
>
> Sure Suresh, will update my findings on the mailing list. Thanks!
>
> On Tue, Oct 18, 2016 at 7:59 AM, Suresh Marru <sma...@apache.org> wrote:
>
> Hi Mangirish,
>
> This is interesting. Looking forward to see what you will find our further
> on gang scheduling support. Since the compute nodes are getting bigger,
> even if you can explore single node MPI (on Jetstream using 22 cores) that
> will help.
>
> Suresh
>
> P.S. Good to see the momentum on mailing list discussions on such topics.
>
>
> On Oct 18, 2016, at 1:54 AM, Mangirish Wagle <vaglomangir...@gmail.com>
> wrote:
>
>
> Hello Devs,
>
> Here is an update on some new learnings and thoughts based on my
> interactions with Mesos and Aurora devs.
>
> MPI implementations in Mesos repositories (like MPI Hydra) rely on
> obsolete MPI platforms and are no longer supported by the developer
> community. Hence it is not recommended that we use them for our purpose.
>
> One of the known ways of running MPI jobs over Mesos is "gang
> scheduling", which is basically distributing the MPI run over multiple
> jobs on Mesos in place of multiple nodes. The challenge here is that the
> jobs need to be scheduled as one task, and any errored job should
> collectively error out the main program, including all the distributed
> jobs.
>
> One of the Mesos dev

Re: Running MPI jobs on Mesos based clusters

2016-10-13 Thread Mangirish Wagle
Hello Devs,

I needed some advice on the license of the MPI libraries. The MPICH library
that I have been trying claims to have a "BSD Like" license (
http://git.mpich.org/mpich.git/blob/HEAD:/COPYRIGHT).

I am aware that OpenMPI, which uses a BSD license, is currently used in our
application. I chose to start investigating MPICH because it claims to be a
highly portable, high-quality implementation of the latest MPI standard,
suitable for cloud based clusters.

If anyone could please advise on the acceptability of the MPICH library's
"BSD Like" license for ASF, that would help.

Thank you.

Best Regards,
Mangirish Wagle

On Thu, Oct 6, 2016 at 1:48 AM, Mangirish Wagle <vaglomangir...@gmail.com>
wrote:

> Hello Devs,
>
> The network issue mentioned above now stands resolved. The problem was
> that iptables had some conflicting rules which blocked the traffic. It
> was resolved by a simple iptables flush (sketched below).
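
For reference, the flush amounts to something like the following on each
node (a sketch; it assumes the default chain policies are ACCEPT, since
flushing rules on a host whose default policy is DROP can lock you out):

  # Flush all rules in the filter table, then verify they are gone
  sudo iptables -F
  sudo iptables -L -n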
>
> Here is the test MPI program running on multiple machines:-
>
> [centos@mesos-slave-1 ~]$ mpiexec -f machinefile -n 2 ./mpitest
> Hello world!  I am process number: 0 on host mesos-slave-1
> Hello world!  I am process number: 1 on host mesos-slave-2
>
> The next step is to try invoking this through a framework like Marathon.
> However, the job submission still does not run through Marathon. It seems
> to get stuck in the 'waiting' state forever (for example
> http://149.165.170.245:8080/ui/#/apps/%2Fmaw-try). Further, I notice that
> Marathon is listed under 'inactive frameworks' in the Mesos dashboard (
> http://149.165.171.33:5050/#/frameworks).
>
> I am trying to get this working, though any help/ clues with this would be
> really helpful.
>
> Thanks and Regards,
> Mangirish Wagle
>
>
>
>
> On Fri, Sep 30, 2016 at 9:21 PM, Mangirish Wagle <vaglomangir...@gmail.com
> > wrote:
>
>> Hello Devs,
>>
>> I am currently running a sample MPI C program using 'mpiexec' provided by
>> MPICH. I followed their installation guide
>> <http://www.mpich.org/static/downloads/3.2/mpich-3.2-installguide.pdf> to
>> install the libraries on the master and slave nodes of the mesos cluster.
>>
>> The approach that I am trying out here is to equip the underlying nodes
>> with MPI tooling and then use a Mesos framework like Marathon/ Aurora to
>> submit jobs that run MPI programs by invoking these tools.
>>
>> You can potentially run an MPI program using mpiexec in the following
>> manner:-
>>
>> # *mpiexec -f machinefile -n 2 ./mpitest*
>>
>>    - *machinefile *-> File containing an inventory of machines to run
>>    the program on and the number of processes on each machine.
>>    - *mpitest *-> MPI program compiled in C with the mpicc compiler. The
>>    program prints the process number and the hostname of the machine
>>    running the process.
>>    - *-n *option indicates the number of processes to spawn
>> Example of machinefile contents:-
>>
>> # Entries in the format hostname:number_of_processes
>> mesos-slave-1:1
>> mesos-slave-2:1
>>
>> The reason for choosing the slaves is that Mesos runs jobs on the
>> slaves, managed by 'agents' running on each slave.
>>
>> Output of the program with '-n 1':-
>>
>> # mpiexec -f machinefile -n 1 ./mpitest
>> Hello world!  I am process number: 0 on host mesos-slave-1
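
For context, the mpitest program described above is essentially the
canonical MPI hello-world. The exact source is not in this thread, so the
following is a minimal reconstruction that matches the output shown:

  /* mpitest.c -- minimal sketch of the test program described above.
   * Build: mpicc -o mpitest mpitest.c
   * Run:   mpiexec -f machinefile -n 2 ./mpitest
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int rank, len;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                  /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's number */
      MPI_Get_processor_name(host, &len);      /* hostname running it */
      printf("Hello world!  I am process number: %d on host %s\n",
             rank, host);
      MPI_Finalize();
      return 0;
  }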
>>
>> But when I try for '-n 2', I am hitting the following error:-
>>
>> # mpiexec -f machinefile -n 2 ./mpitest
>> [proxy:0:1@mesos-slave-2] HYDU_sock_connect
>> (/home/centos/mpich-3.2/src/pm/hydra/utils/sock/sock.c:172): unable to
>> connect from "mesos-slave-2" to "mesos-slave-1" (No route to host)
>> [proxy:0:1@mesos-slave-2] main 
>> (/home/centos/mpich-3.2/src/pm/hydra/pm/pmiserv/pmip.c:189):
>> *unable to connect to server mesos-slave-1 at port 44788* (check for
>> firewalls!)
>>
>> The program execution seems to be blocked by network traffic
>> restrictions. I checked the security groups in the SciGaP OpenStack for
>> the mesos-slave-1 and mesos-slave-2 nodes, and they are set to the
>> 'wideopen' policy. Furthermore, I tried adding explicit rules to the
>> policies to allow all TCP and UDP (currently I am not sure what protocol
>> is used underneath), but it continues throwing this error.
>>
>> Any clues, suggestions, comments about the error or approach as a whole
>> would be helpful.
>>
>> Thanks and Regards,
>> Mangirish Wagle
>>
>>
>> On Tue, Sep 27, 2016 at 11:23 AM, Mangirish Wagle <
>> vaglomangir...@gmail.com> wrote:
>>
>>> Hello Devs,
>>>
>>> Tha

Re: Running MPI jobs on Mesos based clusters

2016-10-13 Thread Mangirish Wagle
Hi Marlon,
Thanks for confirming and sharing the legal link.

-Mangirish

On Thu, Oct 13, 2016 at 12:13 PM, Pierce, Marlon <marpi...@iu.edu> wrote:

> BSD is ok: https://www.apache.org/legal/resolved.

Re: Need inputs on running MPI jobs on Mesos

2016-10-14 Thread Mangirish Wagle
Hi Joseph,

Thanks for your response.
What I really want to know is whether there are any particular reasons why
the community has not been supporting any work related to MPI on Mesos.
There has been good demand for cloud based MPI support. Given the known
resource management capabilities of Mesos, we at Apache Airavata are
targeting a Mesos based scheduler for MPI jobs, to extend these
capabilities to cloud based clusters and not necessarily HPC/ HTC.

I would be interested to know about, or contribute to, any work towards
supporting MPI on Mesos planned in the near future.

Thanks and Regards,
Mangirish

CC: Airavata dev mailing list.

On Fri, Oct 14, 2016 at 12:21 PM, Joseph Wu <jos...@mesosphere.io> wrote:

> Other than test frameworks or frameworks Mesos considers part of its CLI,
> there shouldn't be any other Frameworks that are part of the Mesos
> codebase.  (Imagine shipping Spark or Marathon or a bunch of other
> humongous frameworks along with Mesos.)  Same thing goes for MPI, which may
> or may not even work anymore.  I don't know anyone that has run the MPI
> framework in the past several years.
>
> On Fri, Oct 14, 2016 at 8:51 AM, Mangirish Wagle <vaglomangir...@gmail.com
> >
> wrote:
>
> > Thanks for the response.
> > May I know if there are any reasons for not continuing to develop and
> > support MPI framework? Are there any known issues with running MPI jobs
> on
> > Mesos?
> >
> > Best Regards,
> > Mangirish
> >
> > On Fri, Oct 14, 2016 at 2:20 AM, haosdent <haosd...@gmail.com> wrote:
> >
> > > Refer to https://issues.apache.org/jira/browse/MESOS-6084, I think the
> > MPI
> > > framework would be deprecated.
> > >
> > > On Fri, Oct 14, 2016 at 1:57 PM, Mangirish Wagle <
> > vaglomangir...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Hello Mesos Devs,
> > > >
> > > > I am contributing to Apache Airavata <http://airavata.apache.org/>
> and
> > > > currently working on extending the support for the science gateways
> to
> > > run
> > > > MPI jobs on cloud based Mesos clusters.
> > > >
> > > > I am looking at mpiexec-mesos
> > > > <https://github.com/apache/mesos/tree/master/mpi> and Mesos Hydra
> > > > <https://github.com/mesosphere/mesos-hydra> but I am also interested
> > in
> > > > knowing about any latest work that is being done in this area. In
> > > general,
> > > > I want to seek your advice and thoughts on what is the right tool
> that
> > I
> > > > should use, and the appropriate direction to proceed to achieve the
> > > > objective of running MPI jobs on Mesos.
> > > >
> > > > Thank you.
> > > >
> > > > Regards,
> > > > Mangirish Wagle
> > > > Graduate Student, Indiana University Bloomington.
> > > >
> > >
> > >
> > >
> > > --
> > > Best Regards,
> > > Haosdent Huang
> > >
> >
>


Re: Welcome Ajinkya Dhamnaskar as Airavata Committer

2017-04-09 Thread Mangirish Wagle
Many Congratulations Ajinkya!

On Apr 9, 2017 10:57 PM, "Suresh Marru"  wrote:

> Hi All,
>
> The Project Management Committee (PMC) for Apache Airavata has asked
> Ajinkya Dhamnaskar to become a committer based on his contributions to the
> project. We are pleased to announce that he has accepted.
>
> Being a committer enables easier contribution to the project since there
> is no need to go via the patch submission process. This should enable
> better productivity.
>
> Please join me in welcoming Ajinkya to Airavata.
>
> Suresh
> (On Behalf of Apache Airavata PMC)


Re: Welcome Marcus Christie as Airavata PMC member

2017-11-17 Thread Mangirish Wagle
Congratulations Marcus!

On Fri, Nov 17, 2017 at 2:27 PM, Shenoy, Gourav Ganesh  wrote:

> Congratulations Marcus!
>
>
>
> *PS: Apoorv will be delighted ;-)*
>
>
>
> Thanks and Regards,
>
> Gourav Shenoy
>
>
>
> *From: *DImuthu Upeksha 
> *Reply-To: *"dev@airavata.apache.org" 
> *Date: *Friday, November 17, 2017 at 2:22 PM
> *To: *Airavata Dev 
> *Cc: *"us...@airavata.apache.org" 
> *Subject: *Re: Welcome Marcus Christie as Airavata PMC member
>
>
>
> Congratulations Marcus!
>
>
>
> On Sat, Nov 18, 2017 at 12:51 AM, Supun Nakandala <
> supun.nakand...@gmail.com> wrote:
>
> Congratulations Marcus!
>
>
>
> On Fri, Nov 17, 2017 at 11:19 AM, Miller, Mark  wrote:
>
> Congratulations, and welcome!
>
> -----Original Message-----
> From: Suresh Marru [mailto:sma...@apache.org]
> Sent: Friday, November 17, 2017 11:11 AM
> To: Airavata Dev 
> Cc: Airavata Users 
> Subject: Welcome Marcus Christie as Airavata PMC member
>
> Hi All,
>
> The Project Management Committee (PMC) for Apache Airavata has asked
> Marcus Christie to become a PMC member based on his contributions to the
> project. We are pleased to announce that he has accepted.
>
> As you know, Marcus has been stewarding Airavata already as a committer
> and being a PMC member will enable him to assist with the management and to
> guide the direction of the project as well.
>
> Please join me in welcoming Marcus to Airavata PMC
>
> Cheers,
> Suresh
> (On Behalf of Apache Airavata PMC)
>
>
>
>
>