Simon and Daniel,

Wouldn't it be easier to use a process engine such as jBPM, or a BPEL
engine such as Apache ODE?  They are made for this type of work -
parallel execution, wait, fork, join, etc.

Another question:
- Are there plans in the Tuscany project to support jBPM process
execution, that is, to link jBPM actions to SCA components?  Unlike
pure BPEL engines, jBPM can compose its processes not only out of
WSDL-based web services but out of Java classes as well.

Regards,
Radu Marian
CRM Services Architecture Team
Bank of America, Charlotte NC
(980) 387-6233
[EMAIL PROTECTED]

-----Original Message-----
From: Simon Laws [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 06, 2008 10:59 AM
To: tuscany-user@ws.apache.org
Subject: Re: How to use references using multiplicity="1..n" with
binding.ws or binding.rmi

Hi Daniel

A few days ago I raised the issue of fixing the API that is causing you
these problems (
http://www.mail-archive.com/[EMAIL PROTECTED]/msg31110.html).
I've had one positive response and no great raft of opposition, so I'll
double-check that people are OK with it and hope to put a fix in.
Regarding your specific scenario, I've made a few comments below. I'm
thinking aloud really, so I may be slightly off target.

snip...


> Yes, this is exactly what I want to achieve. The scope of the 
> Controller is COMPOSITE.


You could split the controller into two halves:

Controller - decides what to do and is COMPOSITE scoped
Dispatcher - owns the collection of references to crawlers and is STATELESS

For each search that the Controller wants to perform it calls through to
a Dispatcher (over the local SCA binding). This fires up a new instance
of the Dispatcher, with a new set of proxies, and the Dispatcher is free
to hold a conversation with the appropriate Crawler through an injected
proxy. This removes the need to retrieve the proxy from the component
context.
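As a rough plain-Java sketch of that split (all class and method names here are made up; a simple constructor stands in for the SCA runtime, which would really inject the references and create a fresh STATELESS Dispatcher on each call):

```java
// Illustrative sketch only: plain Java standing in for SCA wiring.
// In Tuscany the runtime would inject the crawler references and create
// a new STATELESS Dispatcher per call; a constructor plays that role here.
import java.util.List;

interface Crawler {
    List<String> crawl(String datasource);
}

// The STATELESS half: one instance per search, owning one crawler proxy.
class Dispatcher {
    private final Crawler crawler;          // would be injected by the runtime

    Dispatcher(Crawler crawler) {
        this.crawler = crawler;
    }

    List<String> dispatch(String datasource) {
        // Free to hold a conversation with its own crawler instance.
        return crawler.crawl(datasource);
    }
}

// The COMPOSITE half: long-lived, decides which crawler handles what.
class Controller {
    private final List<Crawler> crawlers;   // the 1..n references

    Controller(List<Crawler> crawlers) {
        this.crawlers = crawlers;
    }

    List<String> search(int crawlerIndex, String datasource) {
        // New Dispatcher per search, mimicking SCA STATELESS scope.
        return new Dispatcher(crawlers.get(crawlerIndex)).dispatch(datasource);
    }
}
```

In the real composite the Dispatcher's references would be wired by the runtime, so the Controller never has to touch the component context.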


> The idea is that a Controller has 1..n references to concrete
> Crawlers, e.g. a FilesystemCrawler and a WebCrawler.
> The Controller initiates crawls of Datasources (e.g. C:\folder or
> http://someserver.org) and receives the crawled objects from the
> Crawler and processes them. It should be possible to crawl multiple
> Datasources with the same Crawler instance in parallel. In the
> Controller there is a separate thread for each crawled datasource
> (programmed manually). On the Crawler side one needs something similar
> (either separate threads or multiple instances) and for simplicity I
> wanted to make use of SCA Conversations to take care of this. Am I
> mis-using the Conversation feature?


Conversations would be able to do this for you, but you would use
conversational semantics if you really wanted to hold a conversation
with the crawler, i.e. identify the same context over a series of
separate invocations. An example of this kind of conversation would be
starting the crawler and then coming back later to ask for the results
so far. You could employ callbacks if you wanted the crawler to report
progress back to the controller rather than being polled. The same
issue arises, though: you need to be able to get hold of a service
reference and set an ID (either conversation or callback) on it.
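Roughly the shape the callback approach would take, sketched in plain Java rather than the actual SCA @Callback machinery (all names here are invented): the controller hands the crawler a callback so progress is pushed back instead of polled.

```java
// Illustrative sketch only, not the real SCA callback API.
interface CrawlCallback {
    void onProgress(String item);       // crawler pushes each crawled item
    void onComplete(String datasource); // crawler signals the crawl is done
}

class CallbackCrawler {
    void crawl(String datasource, CrawlCallback callback) {
        // Pretend this datasource yields two items.
        callback.onProgress(datasource + "/a");
        callback.onProgress(datasource + "/b");
        callback.onComplete(datasource);
    }
}
```

With SCA callbacks the runtime would route onProgress/onComplete back to the calling Controller component, which is where the callback ID on the service reference comes in.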

You could go with a stateless crawler, without conversations or
callbacks. This would mean that the controller has to block on the
thread for that crawler until the crawl has completed. I can imagine
that this is not desirable if you have crawlers running in a
distributed fashion for performance reasons.
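The blocking variant needs nothing Tuscany-specific; a plain java.util.concurrent sketch (names invented) of one blocking task per datasource:

```java
// Illustrative sketch only: the blocking, stateless alternative.
// The controller starts one task per datasource and then blocks on
// each Future until that crawl has completed.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class BlockingController {
    List<String> crawlAll(List<String> datasources) {
        ExecutorService pool = Executors.newFixedThreadPool(datasources.size());
        List<Future<String>> futures = new ArrayList<Future<String>>();
        for (final String ds : datasources) {
            futures.add(pool.submit(new Callable<String>() {
                public String call() {
                    return "crawled:" + ds;  // stand-in for a stateless crawl
                }
            }));
        }
        List<String> results = new ArrayList<String>();
        for (Future<String> f : futures) {
            try {
                results.add(f.get());        // blocks until that crawl finishes
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return results;
    }
}
```

Each f.get() is where the controller thread stalls, which is the cost mentioned above when the crawlers are remote.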


>
>
> > (the real answer of course is for us to fix the JIRA;-)
> Well, of course this would be ideal, but I need a solution (or at
> least a workaround) for now. Implementing my own conversation handling
> seems to be lots of overhead if the JIRA is fixed sometime.
>

>
>
> Bye,
> Daniel
>
