Re: Camel Aggregation : All Messages Lost from the Queue.

2010-10-04 Thread Claus Ibsen
On Mon, Oct 4, 2010 at 2:33 AM, bxkrish balainf...@gmail.com wrote:

 Hi

 I am using Camel 2.0. I am trying to do a simple aggregation of messages
 dropped in one Queue to another Queue.

 For Example: Q1 = 1,2,3,4...10 messages, I want it to be 1 message in Q2
 which is concatenation of (1,2,3, 4...10).

 I do not have any specific Correlation Expression, hence just creating a
 Constant Header.

 Below is my configuration from Camel Context.

 <camelContext trace="false" id="EIP_CamelConext"
 xmlns="http://camel.apache.org/schema/spring">
        <route>
          <from uri="jms:Q1" />
          <setHeader headerName="Test">
             <constant>Yes</constant>
          </setHeader>
          <aggregator strategyRef="aggregatorStrategy">
           <correlationExpression>
             <simple>header.Test</simple>
           </correlationExpression>
          <to uri="jms:Q2"/>
          </aggregator>
        </route>
 </camelContext>
 <bean id="aggregatorStrategy"
 class="com.aggregation.strategy.MessageAggregation"/>

 //JAVA Code for custom Aggregation Strategy.
 public class MessageAggregation implements AggregationStrategy{
        public Exchange aggregate(Exchange oldExchange, Exchange
 newExchange) {
                System.out.println("Here");
                Message newIn = newExchange.getIn();
                String oldBody = oldExchange.getIn().getBody(String.class);
                String newBody = newIn.getBody(String.class);
                newIn.setBody(oldBody + newBody);
                return newExchange;
        }
 }

 At runtime the messages are getting cleared from Q1 and nothing is available
 on Q2. I can see the MessageAggregation is getting called.


Is there a reason why you are using the old Camel 2.0 release? If possible,
upgrade to the latest stable release, which is 2.4.

Anyway, make sure that you don't throw an exception from your custom
MessageAggregation code.
That could potentially cause the aggregated message not to complete and be sent to Q2.
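
For reference, the usual defensive form of such a strategy, guarding against the
null oldExchange that the first message of a group arrives with, looks roughly
like this (a sketch following the common Camel pattern, not the original poster's code):

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MessageAggregation implements AggregationStrategy {
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // The first exchange of a correlation group arrives with oldExchange == null,
        // so return the new exchange as-is instead of dereferencing null (which would
        // throw and silently break the aggregation).
        if (oldExchange == null) {
            return newExchange;
        }
        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(oldBody + newBody);
        return oldExchange;
    }
}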



 Please help.

 Thanks
 Bala
 --
 View this message in context: 
 http://camel.465427.n5.nabble.com/Camel-Aggregation-All-Messages-Lost-from-the-Queue-tp3138936p3138936.html
 Sent from the Camel - Users mailing list archive at Nabble.com.




-- 
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus


Re: publishedEndpointUrl for cxf:cxfEndpoint

2010-10-04 Thread Scott Christopher
Thanks Willem.

On 02/10/2010, at 10:47 PM, Willem Jiang wrote:

 Hi Scott,
 
 I just checked the schema of cxfEndpoint; it doesn't support
 publishedEndpointUrl.
 I filed a JIRA[1] for it.
 
 [1]https://issues.apache.org/activemq/browse/CAMEL-3190
 
 On 10/2/10 8:33 PM, Scott Christopher wrote:
 On 02/10/2010, at 9:05 PM, Willem Jiang wrote:
 
 On 10/2/10 1:33 PM, Scott Christopher wrote:
 On 02/10/2010, at 1:18 AM, Ashwin Karpe wrote:
 
 Not sure why setting the address attribute in the spring.xml file does not
 change the address for you since that is the value that overrides the WSDL
 address... I know that it works!!!
 
 My apologies, the cxfEndpoint address does allow you to set any hostname, 
 which updates the generated WSDL accordingly (I was confusing it with the 
 Jetty component, which doesn't allow you to bind to a hostname that 
 doesn't resolve to an IP address on a local interface).
 The publishedEndpointUrl is just used to reset the Address information
 from the WSDL; it has nothing to do with the address that the service
 will be bound to.
 
 
 This is exactly what I'm trying to achieve, and as per my original question, 
 is it possible to use it with a cxfEndpoint bean? Adding the 
 publishedEndpointUrl attribute to the cxfEndpoint bean results in "Attribute 
 'publishedEndpointUrl' is not allowed to appear in element 
 'cxf:cxfEndpoint'".
 
 It would be great if I could do something like:
 
 <cxf:cxfEndpoint id="endpoint"
  address="http://0.0.0.0:8000/service"
  publishedEndpointUrl="https://www.example.com/service"
  serviceClass="com.example.CxfTest" />
 
 Regards,
  Scott Christopher
 
 
 -- 
 Willem
 --
 Open Source Integration: http://www.fusesource.com
 Blog:http://willemjiang.blogspot.com (English)
  http://jnn.javaeye.com (Chinese)
 Twitter: http://twitter.com/willemjiang

Scott Christopher
Analyst Programmer, Software Engineering

[ Direct ]  +61 8 82282529  schristop...@internode.com.au
[ Internode ]   +61 13 6633 150 Grenfell St, Adelaide, SA 5000



Re: message not getting delivered

2010-10-04 Thread Mark Webb
Thanks.  I have things working now.

It seems weird to me though that if in a Processor I take a message
in, transform it into a newly created Message object that I should
call Exchange.setIn(Message) instead of Exchange.setOut(Message).  I
think of a Processor as taking in a message and then sending it
out, but it looks like that is not the case.  Just need to adjust
the way I think about things.



On Sat, Oct 2, 2010 at 3:06 AM, Claus Ibsen claus.ib...@gmail.com wrote:
 See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html

 On Fri, Oct 1, 2010 at 10:37 PM, Mark Webb elihusma...@gmail.com wrote:
 I am sending messages through a Camel route in ActiveMQ.  My message
 reaches the end of the processing chain, and at the last processor I
 call exchange.setOut( newly created DefaultMessage ).  When I look at
 the admin page for ActiveMQ, the topic shows that there is a message
 to be dequeued.  It even says that there is a consumer connected to
 that topic, which is a GUI tool I wrote.  The GUI tool makes a call to
 Consumer.setMessageListener.  So why are the messages not making their
 way to my GUI tool?  I am stumped as to why the messages sit in the
 topic and never leave if there is a listener for that topic.

 Of course the first thought is, is the Connection started?  Yeah I
 verified that.  In fact I can send messages to the topic via the
 web-based admin tool for ActiveMQ and the GUI receives them.

 Thanks for any help you have,
 Mark




 --
 Claus Ibsen
 Apache Camel Committer

 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus



Re: message not getting delivered

2010-10-04 Thread Claus Ibsen
On Mon, Oct 4, 2010 at 3:57 PM, Mark Webb elihusma...@gmail.com wrote:
 Thanks.  I have things working now.

 It seems weird to me though that if in a Processor I take a message
 in, transform it into a newly created Message object that I should
 call Exchange.setIn(Message) instead of Exchange.setOut(Message).  I
 think of a Processor as taking in a message and then sending it
 out, but it looks like that is not the case.  Just need to adjust
 the way I think about things.


You are not the only one. See this FAQ
http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html
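
The short version of that FAQ, as a rough sketch of a typical Processor (the
transform method is just a placeholder):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class MyProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        String body = exchange.getIn().getBody(String.class);
        // Changing the IN message in place keeps the existing headers and
        // attachments; calling exchange.getOut().setBody(...) instead creates a
        // fresh, empty OUT message, so headers would have to be copied manually.
        exchange.getIn().setBody(transform(body));
    }

    private String transform(String body) {
        // Placeholder transformation for the sketch.
        return body.toUpperCase();
    }
}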



 On Sat, Oct 2, 2010 at 3:06 AM, Claus Ibsen claus.ib...@gmail.com wrote:
 See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html

 On Fri, Oct 1, 2010 at 10:37 PM, Mark Webb elihusma...@gmail.com wrote:
 I am sending messages through a Camel route in ActiveMQ.  My message
 reaches the end of the processing chain, and at the last processor I
 call exchange.setOut( newly created DefaultMessage ).  When I look at
 the admin page for ActiveMQ, the topic shows that there is a message
 to be dequeued.  It even says that there is a consumer connected to
 that topic, which is a GUI tool I wrote.  The GUI tool makes a call to
 Consumer.setMessageListener.  So why are the messages not making their
 way to my GUI tool?  I am stumped as to why the messages sit in the
 topic and never leave if there is a listener for that topic.

 Of course the first thought is, is the Connection started?  Yeah I
 verified that.  In fact I can send messages to the topic via the
 web-based admin tool for ActiveMQ and the GUI receives them.

 Thanks for any help you have,
 Mark




 --
 Claus Ibsen
 Apache Camel Committer

 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus





-- 
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus


Re: XMPP pubsub

2010-10-04 Thread Claus Ibsen
On Sun, Oct 3, 2010 at 9:14 AM, preben p...@dr.dk wrote:

 Hi

 Seems that camel-xmpp doesn't implement pubsub Smack features (XEP-0060
 Publish-Subscribe). There is a jivesoftware library that supports pubsub,
 and both ejabberd and openfire servers support pubsub.

 Anybody have any ideas about a Camel component for this?
 E.g. create a new one or extend the existing xmpp component.


We love contributions, so anyone who wants to help improve the existing
camel-xmpp is welcome.
Unfortunately Smack hasn't released a new version in a while. I
think 3.1.0 is still the latest version.
Could you check if Smack 3.1.0 supports the feature you seek?

If anyone knows of better Java clients then I would like to know.

And feel free to create a ticket in JIRA for this.


 thanks
 Preben
 --
 View this message in context: 
 http://camel.465427.n5.nabble.com/XMPP-pubsub-tp3074364p3074364.html
 Sent from the Camel - Users mailing list archive at Nabble.com.




-- 
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus


Re: XMPP pubsub

2010-10-04 Thread preben

The latest 3.1.0 doesn't support pubsub, but on trunk
http://svn.igniterealtime.org/svn/repos/smack/trunk/source/org/jivesoftware/smackx/pubsub/
the feature is implemented. Seems that only a release is needed.

An alternative would be nice since the latest Smack release dates back to
2008. 

/preben

-- 
View this message in context: 
http://camel.465427.n5.nabble.com/XMPP-pubsub-tp3074364p3173539.html
Sent from the Camel - Users mailing list archive at Nabble.com.


FTP to HDFS - large gzipped files

2010-10-04 Thread coolgold

I'm considering using Camel and ActiveMQ for moving gzipped (~1 GB) files from an
FTP server to HDFS. I'm aware of the existence of the ftp and hdfs components, but
I'm not sure about the support for splitting gzipped files or a streaming
approach with an FTP entry point.  What is the most straightforward/scalable
approach?
-- 
View this message in context: 
http://camel.465427.n5.nabble.com/FTP-to-HDFS-large-gzipped-files-tp3192431p3192431.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: message not getting delivered

2010-10-04 Thread Hadrian Zbarcea
Yeah, there is still a lot of confusion. 
Unfortunately that page is bollocks and I'll have to get it cleaned up.
Mark, I think you did the right thing actually, in a processor one should *not* 
modify the in, but produce an out, if needed. It's time to get that clarified!

My $0.02,
Hadrian

On Oct 4, 2010, at 10:01 AM, Claus Ibsen wrote:

 On Mon, Oct 4, 2010 at 3:57 PM, Mark Webb elihusma...@gmail.com wrote:
 Thanks.  I have things working now.
 
 It seems weird to me though that if in a Processor I take a message
 in, transform it into a newly created Message object that I should
 call Exchange.setIn(Message) instead of Exchange.setOut(Message).  I
 think of a Processor as taking in a message and then sending it
 out, but it looks like that is not the case.  Just need to adjust
 the way I think about things.
 
 
 You are not the only one. See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html
 
 
 
 On Sat, Oct 2, 2010 at 3:06 AM, Claus Ibsen claus.ib...@gmail.com wrote:
 See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html
 
 On Fri, Oct 1, 2010 at 10:37 PM, Mark Webb elihusma...@gmail.com wrote:
 I am sending messages through a Camel route in ActiveMQ.  My message
 reaches the end of the processing chain, and at the last processor I
 call exchange.setOut( newly created DefaultMessage ).  When I look at
 the admin page for ActiveMQ, the topic shows that there is a message
 to be dequeued.  It even says that there is a consumer connected to
 that topic, which is a GUI tool I wrote.  The GUI tool makes a call to
 Consumer.setMessageListener.  So why are the messages not making their
 way to my GUI tool?  I am stumped as to why the messages sit in the
 topic and never leave if there is a listener for that topic.
 
 Of course the first thought is, is the Connection started?  Yeah I
 verified that.  In fact I can send messages to the topic via the
 web-based admin tool for ActiveMQ and the GUI receives them.
 
 Thanks for any help you have,
 Mark
 
 
 
 
 --
 Claus Ibsen
 Apache Camel Committer
 
 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus
 
 
 
 
 
 -- 
 Claus Ibsen
 Apache Camel Committer
 
 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus



Quartz Camel Spring Example?

2010-10-04 Thread Russell, Brian
I have deployed a working quartz cron trigger outside of the camel
configuration.

We are in the process of implementing our backend processes using camel.
So far, this is going well and we have not hit any major obstacles -- so we
are very pleased.

I have a new quartz trigger that I want to set up and can easily do
so outside of camel.

I am trying to set up the new quartz trigger inside of camel but am not
quite getting it implemented properly.

Basically, I want a trigger to fire every few minutes and load a List of
java objects from the database.

I want to take this list (assuming the Splitter pattern) and produce a
message onto a jms queue.

Are there any similar examples of this that I can reference using the
Spring Configuration?

I'm not quite understanding how to return the List of objects from the
quartz job into the camel route so that I can split it from there.

Thanks for any guidance.
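
A rough Java DSL sketch of the route being described (all names are made up;
a Spring XML configuration would mirror the same from / bean / split / to structure):

import java.util.Arrays;
import java.util.List;
import org.apache.camel.builder.RouteBuilder;

public class OrderPollRoute extends RouteBuilder {

    // Hypothetical loader; in the real route this would query the database.
    public static class OrderLoader {
        public List<String> loadPending() {
            return Arrays.asList("order-1", "order-2");
        }
    }

    @Override
    public void configure() throws Exception {
        from("quartz://backend/loadOrders?cron=0+0/5+*+*+*+?")  // fire every 5 minutes
            .bean(new OrderLoader(), "loadPending")             // bean returns a List
            .split(body())                                      // one message per element
            .to("jms:queue:orders");                            // send each to the queue
    }
}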










Confidentiality Notice: The information contained in this electronic 
transmission is confidential and may be legally privileged. It is intended only 
for the addressee(s) named above. If you are not an intended recipient, be 
aware that any disclosure, copying, distribution or use of the information 
contained in this transmission is prohibited and may be unlawful. If you have 
received this transmission in error, please notify us by telephone (513) 
229-5500 or by email (postmas...@medplus.com). After replying, please erase it 
from your computer system.





Re: Quartz Camel Spring Example?

2010-10-04 Thread Ashwin Karpe

Hi,

Check out the following links

https://svn.apache.org/viewvc/camel/trunk/components/camel-quartz/src/test/resources/org/apache/camel/component/quartz/SpringQuartzCronRouteTest.xml?view=markup

https://svn.apache.org/viewvc/camel/trunk/components/camel-quartz/src/test/java/org/apache/camel/component/quartz/SpringQuartzCronRouteTest.java?view=markup

Cheers,

Ashwin...


-
-
Ashwin Karpe
Apache Camel Committer & Sr Principal Consultant
FUSESource (a Progress Software Corporation subsidiary)
http://fusesource.com

Blog: http://opensourceknowledge.blogspot.com
-
-- 
View this message in context: 
http://camel.465427.n5.nabble.com/message-not-getting-delivered-tp3073281p3198239.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Split XPath JBoss

2010-10-04 Thread lexs...@gmail.com

Hi,

I have this split route that works well during unit testing, but not when
deployed in a JBoss 5.1 environment. I am using Camel 2.4.0.  My route is
written as a Spring XML file:

<split stopOnException="true">
  <xpath>/message/body/category_definitions/category_definition[fixed_part='IMP']</xpath>
  <bean ref="CategoryProcessor" method="processImprint"/>
</split>

My CategoryProcessor bean has this method:

public Imprint processImprint(
 @XPath(resultType=String.class, value="/category_definition/variable_part/text()") String code, 
 @XPath(resultType=String.class, value="/category_definition/text/text()") String description
) 
{
  log.info("Processing imprint with code '" + code + "' and description '" + description + "'");
  .
  .
}

The method is actually being called but the values of code and description
parameters are empty.
The only thing I noticed being different is that logging the exchange during
unit testing shows:

Exchange[ExchangePattern:InOnly,
BodyType:org.apache.xerces.dom.DeferredElementNSImpl,
Body:<category_definition>
        <fixed_part>IMP</fixed_part>
        <variable_part>BLAH</variable_part>
        <text>BLAH BLAH</text>
      </category_definition>]

But during integration testing (in the container) it shows:

Exchange[ExchangePattern:InOnly,
BodyType:org.apache.xerces.dom.DeferredElementImpl,
Body:[category_definition: null]]

Notice the BodyType and Body values are different. What is the difference
between the types DeferredElementNSImpl and DeferredElementImpl?  

Please help me.

FYI:  JBoss uses version 2.9.1 of Xerces, if that makes any difference.  I
am using this same version during unit test.

Thanks in advance.




-- 
View this message in context: 
http://camel.465427.n5.nabble.com/Split-XPath-JBoss-tp3198294p3198294.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: message not getting delivered

2010-10-04 Thread Mark Webb
I agree. But when producing an out I think you need to call
Exchange.setIn(Message).  Seems like when you call
Exchange.setOut(Message) you are setting up a request-reply scenario
which is not what I wanted.


On Mon, Oct 4, 2010 at 1:54 PM, Hadrian Zbarcea hzbar...@gmail.com wrote:
 Yeah, there is still a lot of confusion.
 Unfortunately that page is bollocks and I'll have to get it cleaned up.
 Mark, I think you did the right thing actually, in a processor one should 
 *not* modify the in, but produce an out, if needed. It's time to get that 
 clarified!

 My $0.02,
 Hadrian

 On Oct 4, 2010, at 10:01 AM, Claus Ibsen wrote:

 On Mon, Oct 4, 2010 at 3:57 PM, Mark Webb elihusma...@gmail.com wrote:
 Thanks.  I have things working now.

 It seems weird to me though that if in a Processor I take a message
 in, transform it into a newly created Message object that I should
 call Exchange.setIn(Message) instead of Exchange.setOut(Message).  I
 think of a Processor as taking in a message and then sending it
 out, but it looks like that is not the case.  Just need to adjust
 the way I think about things.


 You are not the only one. See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html



 On Sat, Oct 2, 2010 at 3:06 AM, Claus Ibsen claus.ib...@gmail.com wrote:
 See this FAQ
 http://camel.apache.org/using-getin-or-getout-methods-on-exchange.html

 On Fri, Oct 1, 2010 at 10:37 PM, Mark Webb elihusma...@gmail.com wrote:
 I am sending messages through a Camel route in ActiveMQ.  My message
 reaches the end of the processing chain, and at the last processor I
 call exchange.setOut( newly created DefaultMessage ).  When I look at
 the admin page for ActiveMQ, the topic shows that there is a message
 to be dequeued.  It even says that there is a consumer connected to
 that topic, which is a GUI tool I wrote.  The GUI tool makes a call to
 Consumer.setMessageListener.  So why are the messages not making their
 way to my GUI tool?  I am stumped as to why the messages sit in the
 topic and never leave if there is a listener for that topic.

 Of course the first thought is, is the Connection started?  Yeah I
 verified that.  In fact I can send messages to the topic via the
 web-based admin tool for ActiveMQ and the GUI receives them.

 Thanks for any help you have,
 Mark




 --
 Claus Ibsen
 Apache Camel Committer

 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus





 --
 Claus Ibsen
 Apache Camel Committer

 Author of Camel in Action: http://www.manning.com/ibsen/
 Open Source Integration: http://fusesource.com
 Blog: http://davsclaus.blogspot.com/
 Twitter: http://twitter.com/davsclaus




Re: Split XPath JBoss

2010-10-04 Thread lexsoto

More findings.  I've modified my bean method to this:

public void processImprint(Node node) throws TransformerException
{
    TransformerFactory transfac = TransformerFactory.newInstance();
    Transformer trans = transfac.newTransformer();
    trans.setOutputProperty(OutputKeys.INDENT, "yes");

    StringWriter sw = new StringWriter();

    StreamResult out = new StreamResult(sw);
    DOMSource source = new DOMSource(node);
    trans.transform(source, out);

    log.info("Node: " + node + " XML: " + sw.toString());
}


Looks like the correct XML is being received by the method:

Node: [category_definition: null] XML: <?xml version="1.0"
encoding="UTF-8"?>
<category_definition group_of_company="*">
  <fixed_part>IMP</fixed_part>
  <variable_part>BLAH</variable_part>
  <text>BLAH BLAH</text>
</category_definition>

So the problem may be in the use of the @XPath annotations on the method
parameters!?
I could just write code to extract the values from the node object, but
isn't @XPath supposed to do that for me?


-- 
View this message in context: 
http://camel.465427.n5.nabble.com/Split-XPath-JBoss-tp3198294p3198367.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Controlling how, and which messages, a JMS Consumer pulls

2010-10-04 Thread Seth Call
Hi all,

Let me describe my end-goal, and see what you all think:

I have this concept of a 'worker', which is on the consuming end of a
JMS queue. The idea is that I'd like to have many of these workers
deployed.

At the same time, there are different types of work to be done, and a
worker can do 1 or more of these types of work; however, I'd like to
limit it such that a worker can only do 1 thing at once.

So, a worker has capabilities that may or may not mesh with a task,
and workers have a maximum number of concurrent tasks they can do.

It seems to me the right thing to do is to have one queue per
unit-of-work type.  So if you are a worker and support 2 types of
work, then you'd ultimately be listening to two queues. But, once you
started working on one task pulled from one of the queues, it seems
you'd want to dynamically stop listening to the other queue--and
conversely, once the task is done, you'd start listening on that queue
again.  But right away I'm concerned how one could guarantee just one
message is read from either queue at a time.

Another scenario is if no workers are currently available that can
handle a particular type of task because they are all busy, but there
are other things to do in the queue.  So in that case, with the first
message that no one can currently handle, I'd like to put it back on
the queue (or you could say, just never take it off the queue), but
push it back so I can have the available workers look at the next
messages in line.

You can see, though, how eventually there could be an issue if there
were even two items in the queue, and they both are unable to be
processed at the moment because all the workers are busy. With the
reordering of the queue suggestion, you could quickly get into a
situation where the queue is read and thrashed at a high rate unless
something throttles this behavior.

Any suggestions as to what I could look at?  I'm not seeing anything
on the JMS Camel page, or anything in the *Processors that might be of
help, but I'm also concerned I'm going about this the wrong way.

I realize this is a possibly very loaded question and help may not be
forthcoming. But I thought I'd ask.

Regards,
Seth


Re: Controlling how, and which messages, a JMS Consumer pulls

2010-10-04 Thread Seth Call
And then I find the Dead Letter Channel.   It seems that perhaps, with
this mechanism, it might make sense to just reject the message once I
started a unit of work (or if I find the current worker that pulled
the task can't handle it), and let the routing re-route to the next
worker in line.

Not sure yet.  I'll definitely dig into the DLC  as much as I can go.

On Mon, Oct 4, 2010 at 3:46 PM, Seth Call sethc...@gmail.com wrote:
 Hi all,

 Let me describe my end-goal, and see what you all think:

 I have this concept of a 'worker', which is on the consuming end of a
 JMS queue. The idea is that I'd like to have many of these workers
 deployed.

 At the same time, there are different types of work to be done, and a
 worker can do 1 or more of these types of work; however, I'd like to
 limit it such that a worker can only do 1 thing at once.

 So, a worker has capabilities that may or may not mesh with a task,
 and workers have a maximum number of concurrent tasks they can do.

 It seems to me the right thing to do is to have one queue per
 unit-of-work type.  So if you are a worker and support 2 types of
 work, then you'd ultimately be listening to two queues. But, once you
 started working on one task pulled from one of the queues, it seems
 you'd want to dynamically stop listening to the other queue--and
 conversely, once the task is done, you'd start listening on that queue
 again.  But right away I'm concerned how one could guarantee just one
 message is read from either queue at a time.

 Another scenario is if no workers are currently available that can
 handle a particular type of task because they are all busy, but there
 are other things to do in the queue.  So in that case, with the first
 message that no one can currently handle, I'd like to put it back on
 the queue (or you could say, just never take it off the queue), but
 push it back so I can have the available workers look at the next
 messages in line.

 You can see, though, how eventually there could be an issue if there
 were even two items in the queue, and they both are unable to be
 processed at the moment because all the workers are busy. With the
 reordering of the queue suggestion, you could quickly get into a
 situation where the queue is read and thrashed at a high rate unless
 something throttles this behavior.

 Any suggestions as to what I could look at?  I'm not seeing anything
 on the JMS Camel page, or anything in the *Processors that might be of
 help, but I'm also concerned I'm going about this the wrong way.

 I realize this is a possibly very loaded question and help may not be
 forthcoming. But I thought I'd ask.

 Regards,
 Seth



Re: Controlling how, and which messages, a JMS Consumer pulls

2010-10-04 Thread Illtud Daniel

On 04/10/10 21:46, Seth Call wrote:


It seems to me the right thing to do is to have one queue per
unit-of-work type.  So if you are a worker and support 2 types of
work, then you'd ultimately be listening to two queues. But, once you
started working on one task pulled from one of the queues, it seems
you'd want to dynamically stop listening to the other queue--and
conversely, once the task is done, you'd start listening on that queue
again.  But right away I'm concerned how one could guarantee just one
message is read from either queue at a time.


I think this could be done by using wildcards. If you have jobs
of type A, B, C, D & E, and jobs A, B and C are processorder
jobs (for example) and D & E are dispatchorder jobs, and you
have 3 consumers, X, Y & Z. X can do processorders and dispatchorder
-type jobs, Y can do processorders, and Z can do dispatchorder,
then you can have queues:

queue.processorder.A
queue.processorder.B
queue.processorder.C
queue.dispatchorder.D
queue.dispatchorder.E

Then X could subscribe to queue.*, Y to queue.processorder.*
and Z to queue.dispatchorder.*

If your workers are single-threaded, and do their work in an onMessage
(Java) method, then they'll automatically do the stuff you want
(stop listening, not block other consumers) while they work on the
message.
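
As a rough Camel sketch of such a consumer (assuming an ActiveMQ broker, which
supports wildcard destinations; the worker bean name is made up):

import org.apache.camel.builder.RouteBuilder;

public class ProcessOrderWorkerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // One consumer thread across all process-order queues; while the worker
        // bean is busy, no further messages are taken from any of them.
        from("activemq:queue:queue.processorder.*?concurrentConsumers=1")
            .to("bean:processOrderWorker");
    }
}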


Another scenario is if no workers are currently available that can
handle a particular type of task because they are all busy, but there
are other things to do in the queue.


My solution would work for that, since there are separate queues.

The only issue is if your task hierarchy doesn't break down handily
like mine does (e.g. if Z could do A & E, but not the other tasks), but
you'd have pretty odd workers to have that scenario (or I don't have
much imagination).

--
Illtud Daniel illtud.dan...@llgc.org.uk
Prif Swyddog Technegol  Chief Technical Officer
Llyfrgell Genedlaethol Cymru  National Library of Wales


Re: Controlling how, and which messages, a JMS Consumer pulls

2010-10-04 Thread Seth Call
Hi Illtud,


Essentially I have no natural hierarchy.  The issue is that these
workers will generally be interoperating with 3rd-party services, meaning
it's very hard to know upfront what their runtime characteristics will
be.  We may find that one API of one 3rd-party fails a lot and/or
takes a long time, and we decide we need to fire up many Workers that
are dedicated to that service so that we can improve concurrency and
prevent that one service from backlogging the whole system.

However, the whole point is that, after running this thing in
production, and realizing it makes sense to have jobs of type 'A, B,
C' lumped in usually with one worker type, and jobs D, E should always
have a dedicated worker, then you may then have a hierarchy like:

job.combinable.A
job.combinable.B
job.combinable.C
job.standalone.D
job.standalone.E

And then if I wanted a worker to do A, B, C, my path is clear.  If I
wanted to do A and D, then I have a problem, sure, but the idea here
is that I shouldn't need to once I know my 'realistic hierarchy' if
that makes sense.

Good suggestion, thanks very much.

Seth



On Mon, Oct 4, 2010 at 4:25 PM, Illtud Daniel illtud.dan...@llgc.org.uk wrote:
 On 04/10/10 21:46, Seth Call wrote:

 It seems to me the right thing to do is to have one queue per
 unit-of-work type.  So if you are a worker and support 2 types of
 work, then you'd ultimately be listening to two queues. But, once you
 started working on one task pulled from one of the queues, it seems
 you'd want to dynamically stop listening to the other queue--and
 conversely, once the task is done, you'd start listening on that queue
 again.  But right away I'm concerned how one could guarantee just one
 message is read from either queue at a time.

 I think this could be done by using wildcards. If you have jobs
 of type A, B, C, D & E, and jobs A, B and C are processorder
 jobs (for example) and D & E are dispatchorder jobs, and you
 have 3 consumers, X, Y & Z. X can do processorders and dispatchorder
 -type jobs, Y can do processorders, and Z can do dispatchorder,
 then you can have queues:

 queue.processorder.A
 queue.processorder.B
 queue.processorder.C
 queue.dispatchorder.D
 queue.dispatchorder.E

 Then X could subscribe to queue.*, Y to queue.processorder.*
 and Z to queue.dispatchorder.*

 If your workers are single-threaded, and do their work in an onMessage
 (Java) method, then they'll automatically do the stuff you want
 (stop listening, not block other consumers) while they work on the
 message.

 Another scenario is if no workers are currently available that can
 handle a particular type of task because they are all busy, but there
 are other things to do in the queue.

 My solution would work for that, since there are separate queues.

 The only issue is if your task hierarchy doesn't break down handily
 like mine does (eg if Z could do A  E, but not the other tasks), but
 you'd have pretty odd workers to have that scenario (or I don't have
 much imagination).

 --
 Illtud Daniel                                 illtud.dan...@llgc.org.uk
 Prif Swyddog Technegol                          Chief Technical Officer
 Llyfrgell Genedlaethol Cymru                  National Library of Wales



Re: How to change directory while using sftp component

2010-10-04 Thread Lorrin Nelson
Hi Claus, thanks for the continued support. With the latest code (revision 
1003927) relative URIs are working again for me.

This is the first time I'm using camel-ftp. The server I'm talking to is a Fedora 
11 box running OpenSSH_5.2p1. Regular FTP is an unlikely option for us. 
Relative URLs are an acceptable work-around.

Below is a trace log of an absolute URL.

On the first attempt, note that ~/tmp does exist. Therefore when it tries to go 
to *relative* path tmp (which is the wrong thing to do), it succeeds. Then when 
it tries to go one level deeper into pitch_activity_logs it fails. The second 
attempt is really interesting. Note there that the current working directory 
has remained at /home/tomcat/tmp. So when it fails to go to *relative* path 
tmp, it fails right away. BTW, note that it says "Retrieving file: 
tmp/activity_logs/activity_log.1285909200" with no leading slash, unlike the 
polling stage where it says "doPollDirectory from absolutePath: 
/tmp/activity_logs".

Cheers
-Lorrin


SftpConsumer 2010-10-04 15:45:07,445 -- INFO -- Connected and logged in to: 
sftp://tom...@host:22
SftpConsumer 2010-10-04 15:45:07,494 -- TRACE -- doPollDirectory from 
absolutePath: /tmp/activity_logs, dirName: null
SftpOperations 2010-10-04 15:45:07,494 -- TRACE -- Changing directory: /
SftpOperations 2010-10-04 15:45:07,608 -- TRACE -- Changing directory: tmp
SftpOperations 2010-10-04 15:45:07,697 -- TRACE -- Changing directory: 
activity_logs
SftpConsumer 2010-10-04 15:45:07,788 -- TRACE -- Polling directory: 
/tmp/activity_logs
SftpConsumer 2010-10-04 15:45:08,011 -- TRACE -- Found 4 in directory: 
/tmp/activity_logs
SftpOperations 2010-10-04 15:45:08,012 -- TRACE -- Changing directory: /
SftpOperations 2010-10-04 15:45:08,099 -- TRACE -- Changing directory: home
SftpOperations 2010-10-04 15:45:08,189 -- TRACE -- Changing directory: tomcat
SftpConsumer 2010-10-04 15:45:08,274 -- DEBUG -- Took 0.829 seconds to poll: 
/tmp/activity_logs/
SftpConsumer 2010-10-04 15:45:08,275 -- DEBUG -- Total 1 files to consume
SftpConsumer 2010-10-04 15:45:08,276 -- TRACE -- Processing file: 
GenericFile[activity_log.1285909200]
SftpConsumer 2010-10-04 15:45:08,276 -- TRACE -- Retrieving file: 
tmp/activity_logs/activity_log.1285909200 from: 
Endpoint[sftp://tom...@host//tmp/activity_logs/?delay=15000&idempotent=true&idempotentRepository=%23apacheLogFilenameRepositoryAcceptNewest&include=.*.log.%5B-%2F%3A0-9%5D*&initialDelay=3000&noop=true&password=**]
SftpOperations 2010-10-04 15:45:08,277 -- TRACE -- Changing directory: tmp
SftpOperations 2010-10-04 15:45:08,367 -- TRACE -- Changing directory: 
activity_logs
SftpConsumer 2010-10-04 15:45:08,414 -- ERROR -- Caused by: 
[org.apache.camel.component.file.GenericFileOperationFailedException - Cannot 
change directory to: activity_logs]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot 
change directory to: activity_logs
at 
org.apache.camel.component.file.remote.SftpOperations.doChangeDirectory(SftpOperations.java:383)
at 
org.apache.camel.component.file.remote.SftpOperations.changeCurrentDirectory(SftpOperations.java:368)
at 
org.apache.camel.component.file.remote.SftpOperations.retrieveFileToStreamInBody(SftpOperations.java:451)
at 
org.apache.camel.component.file.remote.SftpOperations.retrieveFile(SftpOperations.java:430)
at 
org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:299)
at 
org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:155)
at 
org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:121)
at 
org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:97)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:637)
Caused by: 2: No such file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2289)
at com.jcraft.jsch.ChannelSftp._realpath(ChannelSftp.java:1822)
at com.jcraft.jsch.ChannelSftp.cd(ChannelSftp.java:268)
at 
org.apache.camel.component.file.remote.SftpOperations.doChangeDirectory(SftpOperations.java:381)

Re: Camel Aggregation : All Messages Lost from the Queue.

2010-10-04 Thread bxkrish

Thanks for the reply Claus. Camel 2.0 is part of our product version and we
will not be able to change it without releasing a different version of our
product. I am sure 2.4 is a more stable version but Aggregation should be
working with Camel 2.0.
I debugged and found that the oldExchange is always null, not only
for the first message. After handling the NullPointerException the messages
reach Q2 individually. However, I want them to be concatenated into 1
message. Can you please tell me what I am doing wrong?
-- 
View this message in context: 
http://camel.465427.n5.nabble.com/Camel-Aggregation-All-Messages-Lost-from-the-Queue-tp3138936p3198589.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: Controlling how, and which messages, a JMS Consumer pulls

2010-10-04 Thread Seth Call
With more help from Hadrian and andrewm in IRC, I got it sorted out:

The missing piece is that I needed a way to let the consumer be a
part of the routing, which is basically not what most of the canned
'Processors' let you do when JMS queues are involved, because pulling
a message off of the JMS queue is not a trivial thing to recover from,
once you've done it.

So anyway, a JMS Selector lets a worker indicate to the broker what
type of jobs it can handle on startup.  All I have to do is make sure
I pass in a well-defined header in the message that indicates the job
type, and my worker's selector can easily pinpoint what type of tasks
that given worker supports.

Combine that with a single queue (which is also straightforward with
the wildcard suggestions from Illtud), and a prefetchSize of 0 or 1,
and I have it all sorted out I believe!
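
A rough sketch of that wiring (broker URL, header name, and bean name are all
made up; the selector and the prefetch limit are the relevant parts):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class WorkerSetup {
    public static void main(String[] args) throws Exception {
        // A prefetch of 1 stops a busy worker from holding messages that an
        // idle worker could otherwise pick up.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        cf.getPrefetchPolicy().setQueuePrefetch(1);

        ActiveMQComponent activemq = new ActiveMQComponent();
        activemq.setConnectionFactory(cf);

        CamelContext context = new DefaultCamelContext();
        context.addComponent("activemq", activemq);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                // The selector only delivers jobs this worker can handle; the
                // "jobType" header name is made up for the sketch.
                from("activemq:queue:jobs?selector=jobType='A'")
                    .to("bean:jobWorker");
            }
        });
        context.start();
    }
}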

Thanks everyone for their help!

Seth

On Mon, Oct 4, 2010 at 4:38 PM, Seth Call sethc...@gmail.com wrote:
 Hi Illtud,


 Essentially I have no natural hierarchy.  The issue is that these
 workers will generally be interoperating with 3rd-party services, meaning
 it's very hard to know upfront what their runtime characteristics will
 be.  We may find that one API of one 3rd-party fails a lot and/or
 takes a long time, and we decide we need to fire up many Workers that
 are dedicated to that service so that we can improve concurrency and
 prevent that one service from backlogging the whole system.

 However, the whole point is that, after running this thing in
 production, and realizing it makes sense to have jobs of type 'A, B,
 C' lumped in usually with one worker type, and jobs D, E should always
 have a dedicated worker, then you may then have a hierarchy like:

 job.combinable.A
 job.combinable.B
 job.combinable.C
 job.standalone.D
 job.standalone.E

 And then if I wanted a worker to do A, B, C, my path is clear.  If I
 wanted to do A and D, then I have a problem, sure, but the idea here
 is that I shouldn't need to once I know my 'realistic hierarchy' if
 that makes sense.

 Good suggestion, thanks very much.

 Seth



 On Mon, Oct 4, 2010 at 4:25 PM, Illtud Daniel illtud.dan...@llgc.org.uk 
 wrote:
 On 04/10/10 21:46, Seth Call wrote:

 It seems to me the right thing to do is to have one queue per
 unit-of-work type.  So if you are a worker and support 2 types of
 work, then you'd ultimately be listening to two queues. But, once you
 started working on one task pulled from one of the queues, it seems
 you'd want to dynamically stop listening to the other queue--and
 conversely, once the task is done, you'd start listening on that queue
 again.  But right away I'm concerned how one could guarantee just one
 message is read from either queue at a time.

 I think this could be done by using wildcards. If you have jobs
 of type A, B, C, D & E, and jobs A, B and C are processorder
 jobs (for example) and D & E are dispatchorder jobs, and you
 have 3 consumers, X, Y & Z. X can do processorders and dispatchorder
 -type jobs, Y can do processorders, and Z can do dispatchorder,
 then you can have queues:

 queue.processorder.A
 queue.processorder.B
 queue.processorder.C
 queue.dispatchorder.D
 queue.dispatchorder.E

 Then X could subscribe to queue.*, Y to queue.processorder.*
 and Z to queue.dispatchorder.*

 If your workers are single-threaded, and do their work in an onMessage
 (Java) method, then they'll automatically do the stuff you want
 (stop listening, not block other consumers) while they work on the
 message.

 Another scenario is if no workers are currently available that can
 handle a particular type of task because they are all busy, but there
 are other things to do in the queue.

 My solution would work for that, since there are separate queues.

 The only issue is if your task hierarchy doesn't break down handily
 like mine does (e.g. if Z could do A & E, but not the other tasks), but
 you'd have pretty odd workers to have that scenario (or I don't have
 much imagination).

 --
 Illtud Daniel                                 illtud.dan...@llgc.org.uk
 Prif Swyddog Technegol                          Chief Technical Officer
 Llyfrgell Genedlaethol Cymru                  National Library of Wales




Re: Camel Aggregation : All Messages Lost from the Queue.

2010-10-04 Thread Claus Ibsen
On Tue, Oct 5, 2010 at 1:16 AM, bxkrish balainf...@gmail.com wrote:

 Thanks for the reply Claus. Camel 2.0 is part of our product version and we
 will not be able to change it without releasing a different version of our
 product. I am sure 2.4 is a more stable version but Aggregation should be
 working with Camel 2.0.
 I debugged and found that the oldExchange is always null, not only
 for the first message. After handling the NullPointerException the messages
 reach Q2 individually. However, I want them to be concatenated into 1
 message. Can you please tell me what I am doing wrong?

See the wiki page
http://camel.apache.org/aggregator

You may need to set a higher batchTimeout than the default of 1 sec.
This allows Camel more time to aggregate more messages.
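
In the Java DSL for the pre-2.3 batch aggregator, that corresponds roughly to the
following sketch (reusing the strategy class from the original post; the exact
option names should be double-checked against the 2.0 API):

import org.apache.camel.builder.RouteBuilder;

public class AggregationRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("jms:Q1")
            .setHeader("Test", constant("Yes"))
            .aggregate(header("Test"), new MessageAggregation())
                .batchTimeout(5000)   // allow 5 seconds for more messages to arrive
                .batchSize(10)        // or complete once 10 messages are collected
            .to("jms:Q2");
    }
}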


 --
 View this message in context: 
 http://camel.465427.n5.nabble.com/Camel-Aggregation-All-Messages-Lost-from-the-Queue-tp3138936p3198589.html
 Sent from the Camel - Users mailing list archive at Nabble.com.




-- 
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus


Re: Split XPath JBoss

2010-10-04 Thread Claus Ibsen
And you have seen this FAQ?
http://camel.apache.org/how-do-i-run-activemq-and-camel-in-jboss.html

You must use the camel-jboss component when running in JBoss.

On Mon, Oct 4, 2010 at 9:44 PM, lexs...@gmail.com lexs...@gmail.com wrote:

 Hi,

 I have this split route that works well during unit testing, but not when
 deployed in JBoss 5.1 environment. I am using Camel 2.4.0.  My route is
 written as Spring XML file:

 <split stopOnException="true">

 <xpath>/message/body/category_definitions/category_definition[fixed_part='IMP']</xpath>
  <bean ref="CategoryProcessor" method="processImprint"/>
 </split>

 My CategoryProcessor bean has this method:

 public Imprint processImprint(
  @XPath(resultType=String.class,
 value="/category_definition/variable_part/text()") String code,
  @XPath(resultType=String.class, value="/category_definition/text/text()")
 String description
 )
 {
  log.info("Processing imprint with code '" + code + "' and description '" +
 description + "'");
  .
  .
 }

 The method is actually being called but the values of code and description
 parameters are empty.
 The only thing I noticed being different is that logging the exchange during
 unit testing shows:

 Exchange[ExchangePattern:InOnly,
 BodyType:org.apache.xerces.dom.DeferredElementNSImpl,
 Body:<category_definition>
        <fixed_part>IMP</fixed_part>
        <variable_part>BLAH</variable_part>
        <text>BLAH BLAH</text>
      </category_definition>]

 But during integration testing (in the container) it shows:

 Exchange[ExchangePattern:InOnly,
 BodyType:org.apache.xerces.dom.DeferredElementImpl,
 Body:[category_definition: null]]

 Notice the BodyType and Body values are different. What is the difference
 between the types DeferredElementNSImpl and DeferredElementImpl?

 Please help me.

 FYI:  JBoss uses version 2.9.1 of Xerces, if that makes any difference.  I
 am using this same version during unit test.

 Thanks in advance.




 --
 View this message in context: 
 http://camel.465427.n5.nabble.com/Split-XPath-JBoss-tp3198294p3198294.html
 Sent from the Camel - Users mailing list archive at Nabble.com.




-- 
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus