Hi Suresh,

On Thu, Sep 29, 2011 at 11:54 AM, Suresh Marru <[email protected]> wrote:

> Hi All,
>
> We should get a consensus on the release features, document the roadmap
> on the website, and march towards a release. I will start the draft; please
> look through and comment:
>
> I will define the feature list of Release 0.1-Incubating by means of a
> tutorial we should document on the website.
>
> Airavata Modules for the release:
> GFac-Axis2: An Axis2 web service which can consume user-defined command
> line descriptions and generate Axis2 application web services.
> XBaya - A desktop (and Java Web Start via JNLP) application which lets users
> construct, execute and monitor workflow executions.
>
I am not yet sure whether we will be able to create a JNLP for XBaya. When we
host the artifacts in a Maven repository the jars end up in different
locations, and for some libraries we do not control the repository, so we have
to figure out each and every jar location manually. We currently have a script
to run XBaya, but it has some issues as well. If a JNLP is a must, I will try
to find an easy way to generate one from the artifacts in the Maven
repository; otherwise we can live with the shell script.
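One way around chasing individual repository URLs would be to generate the
JNLP at build time from whatever jars end up in the distribution's lib/
directory. A rough sketch of that idea (the codebase URL, lib/ layout and main
class name below are placeholders, not the real values):

import java.io.File;
import java.io.PrintWriter;

/**
 * Rough sketch only: write an xbaya.jnlp listing every jar found in the
 * distribution's lib/ directory, so we do not have to track each Maven
 * repository location by hand. Codebase URL and main class are placeholders.
 */
public class JnlpGenerator {

    public static void main(String[] args) throws Exception {
        File libDir = new File(args.length > 0 ? args[0] : "lib");
        String codebase = "http://example.org/xbaya";          // placeholder host
        String mainClass = "org.apache.airavata.xbaya.XBaya";  // assumed main class

        PrintWriter out = new PrintWriter(new File("xbaya.jnlp"), "UTF-8");
        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        out.println("<jnlp spec=\"1.0+\" codebase=\"" + codebase + "\" href=\"xbaya.jnlp\">");
        out.println("  <information>");
        out.println("    <title>XBaya</title>");
        out.println("    <vendor>Apache Airavata (incubating)</vendor>");
        out.println("  </information>");
        out.println("  <security><all-permissions/></security>");
        out.println("  <resources>");
        out.println("    <j2se version=\"1.6+\"/>");
        File[] jars = libDir.listFiles();
        for (int i = 0; jars != null && i < jars.length; i++) {
            if (jars[i].getName().endsWith(".jar")) {
                // one <jar> entry per jar shipped under lib/
                out.println("    <jar href=\"lib/" + jars[i].getName() + "\"/>");
            }
        }
        out.println("  </resources>");
        out.println("  <application-desc main-class=\"" + mainClass + "\"/>");
        out.println("</jnlp>");
        out.close();
    }
}

This only covers the jars we ship ourselves; signing the jars for Web Start
would still be a separate step.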

> XBaya is also used in this release as a user management, application
> management and data browsing tool. In the future these UIs will be web
> gadgets to be deployed into containers like Apache Rave.

> Workflow Interpreter: An Axis2 wrapper around the XBaya dynamic executor.
> This is a simple and interactive workflow execution engine. Future releases
> will support Apache ODE in addition to the interpreter service.
> WS-Messenger: A WS-Eventing/WS-Notification based messaging system.
> Registry-API: A thick-client registry API for Airavata to put and get
> documents. The current JCR implementation is backed by Apache Jackrabbit.
>
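Just for reference on the Registry-API item above, at the JCR level the
put/get pattern looks roughly like this (plain JCR against an embedded
Jackrabbit TransientRepository for illustration; the actual Registry-API
method names, credentials and node layout will differ):

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

/**
 * Plain-JCR sketch of the put/get pattern the Registry-API wraps.
 * Uses an embedded Jackrabbit TransientRepository; a real deployment
 * would point at the shared Jackrabbit instance instead.
 */
public class RegistrySketch {

    public static void main(String[] args) throws Exception {
        Repository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));  // placeholder credentials
        try {
            // "put": store a document under a well-known path
            Node root = session.getRootNode();
            Node doc = root.addNode("airavata").addNode("sampleDescriptor");
            doc.setProperty("content", "<applicationDescription>...</applicationDescription>");
            session.save();

            // "get": read the document back by path
            Node fetched = root.getNode("airavata/sampleDescriptor");
            System.out.println(fetched.getProperty("content").getString());
        } finally {
            session.logout();
        }
    }
}
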
> Build & Deploy:
> We should have one single Maven build which builds and deploys all
> services to an Axis2 Tomcat container. We should have shell scripts to launch
> XBaya.
>
What do you mean by an Axis2 Tomcat container? Currently the build creates a
pack which you can run with SimpleAxis2Server, and yes, for XBaya we have a
script, but it is not yet in working condition (it fails without giving an
error; I will have a look into that).
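To be concrete about what the pack gives us: it is the standalone Axis2 server
started against the built repository directory, which amounts to roughly the
following (paths and port are placeholders), rather than a WAR dropped into a
Tomcat webapps directory:

import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.axis2.transport.http.SimpleHTTPServer;

/**
 * Roughly what starting the pack with SimpleAxis2Server amounts to: an
 * embedded Axis2 SimpleHTTPServer pointed at the built repository (the
 * directory holding conf/axis2.xml and services/). Paths and the port
 * below are placeholders for illustration.
 */
public class StandaloneServerSketch {

    public static void main(String[] args) throws Exception {
        String repo = "airavata-distribution/repository";  // placeholder path
        String axis2xml = repo + "/conf/axis2.xml";         // placeholder path

        ConfigurationContext ctx = ConfigurationContextFactory
                .createConfigurationContextFromFileSystem(repo, axis2xml);

        SimpleHTTPServer server = new SimpleHTTPServer(ctx, 8080);
        server.start();  // services in repository/services/ are now exposed
        System.out.println("Axis2 standalone server listening on port 8080");
    }
}

If we later want a Tomcat deployment instead, as far as I understand that
would mean dropping the same service archives into an axis2.war deployed on
Tomcat rather than running this standalone server.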

>
> All tutorials have the build and deploy steps as a prerequisite.
>
> 5 minute Airavata Tutorial:
> 1) Create/log in to a Jackrabbit account from XBaya
> 2) Construct a sample workflow with the included sample math Axis2 services.
> 3) Store and retrieve the workflow from the registry
> 4) Execute the workflow with monitoring through events
> 5) View the workflow execution summary, inputs and outputs from the registry
> browser.
>
+1

For the 5 minute tutorial, how about providing a single script which starts
the Axis2 instance and Jackrabbit together? Currently we have two separate
scripts for that.
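As one illustration of the idea (a small Java launcher instead of a shell
wrapper; a shell script that simply calls the two existing scripts would do
the same job), both could be brought up from a single entry point:

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

/**
 * Sketch of the Jackrabbit half of a combined tutorial launcher: start an
 * embedded TransientRepository, then start the Axis2 standalone server as
 * in the earlier SimpleHTTPServer sketch, so the 5 minute tutorial has a
 * single start command. Credentials are placeholders.
 */
public class TutorialLauncher {

    public static void main(String[] args) throws Exception {
        Repository jackrabbit = new TransientRepository();
        // A TransientRepository only starts on the first login, so open a
        // bootstrap session to make sure the registry is actually up (and
        // keep it open, since the repository stops when the last session ends).
        Session bootstrap = jackrabbit.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        System.out.println("Jackrabbit up, workspace: "
                + bootstrap.getWorkspace().getName());

        // ...then start the Axis2 server exactly as in the earlier sketch,
        // and both services come up with one command.
    }
}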

>
> 15 minute Airavata Tutorial:
> 1) Create/log in to a Jackrabbit account from XBaya
> 2) Identify sample command line applications and provide descriptions to
> register the applications in the registry.
> 3) Construct a workflow with the registered and generated application
> services.
> 4) Execute the workflow, invoking the newly created Axis2 application
> services.
> 5) View the workflow execution summary, inputs and outputs from the registry
> browser.
>
> Please note that I am listing the simple steps to start with. Once
> agreeable to everyone, we should all document detailed developer
> information, like how the execution from XBaya is going to go to the workflow
> interpreter and then GFac and so on.
>
> Once we agree upon the features, we should also iterate on the timelines
> for release and rough estimates for future releases.
>

I hope to start working on the documentation in Google Docs (I will send a
mail with the links once I start) so that everyone can see it and make
modifications.

Thanks
Lahiru

>
> Cheers,
> Suresh
>
>
>
> On May 13, 2011, at 8:37 AM, Suresh Marru wrote:
>
> > Hi All,
> >
> > All of us clearly know what the Airavata software is about in varying
> detail, but at the same time I realize not every one of us on the list
> has a full understanding of the architecture as a whole and its
> sub-components. Along with inheriting the code donation, I suggest we focus
> on bringing everyone up to speed by means of high-level and low-level
> architecture diagrams. I will start a detailed email thread about this task.
> In short, currently the software assumes an understanding of e-Science in
> general and some details of Grid Computing. Our first focus should be to
> bring the software to a level any Java developer can understand and
> contribute to. Next, the focus can be on making it easy for novice users.
> >
> > I thought a good place to start might be to list out the high level goals
> and then focus on the first goal with detailed JIRA tasks. I am assuming you
> will steer us with an orthogonal roadmap to graduation. I hope I am not
> implying we need to meet the following goals to graduate, because some of
> them are very open ended. Also, please note that Airavata may have some of
> these features already, I am mainly categorizing so we will have a focused
> effort in testing, re-writing or new implementations.
> >
> > Airavata high level feature list:
> >
> > Phase 1: Construct, Execute and monitor workflows from pre-deployed web
> services. The workflow enactment engine will be the inherent Airavata
> Workflow Interpreter. Register command line applications as web services,
> construct and execute workflows with these application services. The
> applications may run locally, on Grid-enabled resources, or by ssh'ing to a
> remote resource. The client to test this phase's workflows can be the
> Airavata Workflow Client (XBaya) running as a desktop application.
> >
> > Phase 2: Execute all of phase 1 workflows on Apache ODE engine by
> generating and deploying BPEL. Develop and deploy gadget interfaces to
> Apache Rave container to support application registration, workflow
> submission and monitoring components. Support applications running on
> virtual machine images to be deployed to Amazon EC2, EUCALYPTUS and similar
> infrastructure-as-a-service cloud deployments.
> >
> > Phase 3: Expand the compute resources to Elastic MapReduce and Hadoop-based
> executions. Focus on data and metadata catalog integration with systems like
> Apache OODT.
> >
> > I will stop here, to allow us to discuss the same. Once we narrow down on
> the high level phase 1 goals, I will start a detailed discussion on where
> the code is now and the steps to get to goal1.
> >
> > Comments, Barbs?
> >
> > Suresh
>
>


-- 
System Analyst Programmer
PTI Lab
Indiana University
