Re: Camel body is coming as null to processor

2016-12-01 Thread Josef Ludvíček
Hi,

Claus had good tip with reading Stream twice.
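For reference, a minimal Spring DSL sketch of that fix - enabling stream caching so a streamed body can be read more than once (the log name here is illustrative; the bean id is taken from the route below):

```xml
<!-- Sketch only: streamCache="true" lets a streamed body such as an
     InputStreamCache be read repeatedly within the same route -->
<camelContext streamCache="true" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:in"/>
    <log logName="com.example.demo" message="${body}"/>
    <to uri="bean:dcDetailsInfoProcessorV2?method=process"/>
  </route>
</camelContext>
```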

I just wanted to ask - does your public class DetailsInfoProcessorV2 
implement org.apache.camel.Processor?
If so, it might be better (and more readable) to call the processor 
directly instead of going through bean invocation.
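If it does implement Processor, the direct call could look like this (a sketch; the registry id and endpoint name are assumed from your config, not confirmed):

```xml
<!-- Instead of bean invocation:
       <to uri="bean:dcDetailsInfoProcessorV2?method=process"/>
     reference the Processor directly by its registry id -->
<route>
  <from uri="direct:details"/>
  <process ref="dcDetailsInfoProcessorV2"/>
</route>
```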


Cheers, Josef


On Thursday 01 of December 2016 20:57:44 Claus Ibsen wrote:
> Hi
> 
> See this FAQ
> http://camel.apache.org/why-is-my-message-body-empty.html
> 
> On Thu, Dec 1, 2016 at 7:37 PM, reddy.janke  wrote:
> >
> > 
> > 
> > <log logName="com.skandha.eim.air" message="${body}"/>
> > <to uri="bean:dcDetailsInfoProcessorV2?method=process"/>
> > 
> > I am able to print the Camel body from the Camel context. But when I try to
> > unmarshal it through a ref, or pass it to a processor, the body converts to
> > null and I get the exception below.
> >
> > org.apache.camel.TypeConversionException: Error during type conversion from type: org.apache.camel.converter.stream.InputStreamCache to the required type: com.skandha.eim.air.jaxb.v1.AirLowFareSearchRQ with value org.apache.camel.converter.stream.InputStreamCache@bbf9b0a due null
> > at org.apache.camel.converter.jaxb.FallbackTypeConverter.convertTo(FallbackTypeConverter.java:103)
> > at org.apache.camel.impl.converter.BaseTypeConverterRegistry.doConvertTo(BaseTypeConverterRegistry.java:316)
> > at org.apache.camel.impl.converter.BaseTypeConverterRegistry.convertTo(BaseTypeConverterRegistry.java:114)
> > at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:72)
> > at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:47)
> > at com.skandha.eim.air.processor.DcDetailsInfoProcessorV2.process(DcDetailsInfoProcessorV2.java:82)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at org.apache.camel.component.bean.MethodInfo.invoke(MethodInfo.java:408)
> > at org.apache.camel.component.bean.MethodInfo$1.doProceed(MethodInfo.java:279)
> > at org.apache.camel.component.bean.MethodInfo$1.proceed(MethodInfo.java:252)
> > at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:167)
> > at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:105)
> > at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:67)
> > at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:103)
> > at org.apache.camel.impl.ProcessorEndpoint$1.process(ProcessorEndpoint.java:71)
> > at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
> > at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:120)
> > at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
> > at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:416)
> > at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
> > at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
> > at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:51)
> > at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:120)
> > at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
> > at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:416)
> > at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
> > at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
> > at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
> > at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
> > at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:105)
> > at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:87)
> > at org.apache.camel.component.http.CamelServlet.service(CamelServlet.java:144)
> > at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> > at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> > 

Re: HDFS2 Component and NORMAL_FILE type

2015-03-24 Thread Josef Ludvíček

Hi,

related to hdfs2 and normal files, you might find
that Camel sends one message per data chunk,
NOT one message per file (which I would expect).

They probably don't intend to change it.

It was reported
as a bug: https://issues.apache.org/jira/browse/CAMEL-8040 (won't fix),
and as a documentation enhancement: https://issues.apache.org/jira/browse/CAMEL-8150 
(done).
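To illustrate the chunking behaviour, here is a consumer sketch (the namenode host, path, and the chunkSize value are made up for illustration). Since the consumer emits one exchange per chunk, one possible workaround is setting chunkSize larger than the biggest expected file:

```xml
<!-- chunkSize is an hdfs2 endpoint option; the value below (100 MB)
     is illustrative, chosen so a typical file arrives as one message -->
<route>
  <from uri="hdfs2://namenode:8020/data/input?chunkSize=104857600"/>
  <to uri="log:hdfsFiles"/>
</route>
```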


Btw nice catch with that tmp file :)

Josef

On 03/24/2015 09:19 PM, Sergey Zhemzhitsky wrote:

Hello,

Really interesting question.
The answer is this jira issue: https://issues.apache.org/jira/browse/CAMEL-4555
and this diff: 
http://mail-archives.apache.org/mod_mbox/camel-commits/201110.mbox/%3c20111022140442.94f362388...@eris.apache.org%3E

It would be really great if
1. the component made this feature optional, so that multi-gigabyte data 
could be streamed directly from HDFS on a file-by-file basis;
2. the component merged the files on the fly, without any intermediate 
storage.

Just raised the JIRA: https://issues.apache.org/jira/browse/CAMEL-8542

Regards,
Sergey


Hi, all!
I'm looking at ways to use the hdfs2 component to read files stored in a Hadoop
directory. As a fairly new Hadoop user, I assume the simplest way is when
data is stored in the normal file format.
I was looking at the code in
'org.apache.camel.component.hdfs2.HdfsFileType#NORMAL_FILE', the class that is
responsible for creating the input stream, and noticed that it copies the
whole file to the local file system (into a temp file) before opening the input
stream (in the case of an 'hdfs://' URI).
I wonder what the reason behind this is? Isn't it possible that the file can be
very large, making this operation quite costly? Or am I missing
some basic restriction on using normal files in Hadoop?
Thanks in advance,
Alexey








error in DeadLetterChannel propagated to source

2014-08-25 Thread Josef Ludvíček
Hello camel users,

I'd like to ask for your opinion on the following DeadLetterChannel example. 
Is it a bug, a feature, or am I missing something?

I create a route with a DeadLetter error handler. 
On error, the message goes to the route dead-letter-route.
 - use case: format some meaningful description of the problem and send an 
e-mail to the admin, or whatever
 - there is a LoggingErrorHandler for the dead-letter-route route - if sending 
mail / templating / whatever fails, I need to know what happened, and maybe 
try redelivery.


In the following example:
 - a new e-mail arrives in source-route
 - the http post fails
 - the message is moved to dead-letter-route
 - the http post in dead-letter-route fails
 - I expect the error to be logged, and that should be the end of the story, BUT
 - the error is propagated back to source-route and to the imap endpoint, which 
results in the e-mail being marked as unread = an infinite loop. Shouldn't the 
dead letter channel in the original route prevent such situations?

Tried with Camel 2.12.0 and 2.13.0. 

Thanks for your opinions.

Josef


--- spring xml --- dependencies: camel-mail, camel-http4
--

<camel:camelContext xmlns="http://camel.apache.org/schema/spring">

    <camel:errorHandler id="source-route-eh" type="DeadLetterChannel"
                        deadLetterUri="direct:dead">
        <camel:redeliveryPolicy maximumRedeliveries="2"
                                logStackTrace="false"
                                logRetryAttempted="true"
                                logExhausted="true"
                                retryAttemptedLogLevel="WARN"
                                retriesExhaustedLogLevel="ERROR"/>
    </camel:errorHandler>

    <camel:errorHandler id="logging-dead-route" type="LoggingErrorHandler">
        <camel:redeliveryPolicy maximumRedeliveries="1"
                                logRetryAttempted="true"
                                logExhausted="true"
                                retryAttemptedLogLevel="WARN"
                                retriesExhaustedLogLevel="ERROR"
                                logStackTrace="false"/>
    </camel:errorHandler>

    <camel:route id="source-route" errorHandlerRef="source-route-eh">
        <camel:from uri="imaps:user_lo...@imap.gmail.com?password=USER_PASSWORD&amp;consumer.delay=1000"/>
        <!--<camel:from uri="quartz2:tests/test?trigger.repeatCount=0"/>-->
        <camel:to uri="http4://localhost:666/nonexistinguri"/>
    </camel:route>

    <camel:route id="dead-letter-route" errorHandlerRef="logging-dead-route">
        <camel:from uri="direct:dead"/>
        <camel:log message="message in dead letter queue"/>
        <camel:to uri="http4://localhost:667/deadnonexistinguri"/>
    </camel:route>

</camel:camelContext>
-
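One defensive pattern worth considering for the dead-letter route (a sketch only, not a claim about why the propagation happens) is wrapping its work in doTry/doCatch so that a failure there cannot bubble back out of the route:

```xml
<!-- Sketch: doTry/doCatch swallows the failure inside the
     dead-letter route itself, so nothing propagates upstream -->
<camel:route id="dead-letter-route">
    <camel:from uri="direct:dead"/>
    <camel:doTry>
        <camel:to uri="http4://localhost:667/deadnonexistinguri"/>
        <camel:doCatch>
            <camel:exception>java.lang.Exception</camel:exception>
            <camel:log message="dead letter handling failed: ${exception.message}"
                       loggingLevel="ERROR"/>
        </camel:doCatch>
    </camel:doTry>
</camel:route>
```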


Re: Camel ActiveMQ performance

2014-08-03 Thread Josef Ludvíček
You could connect to Camel using a JMX console (e.g. VisualVM with the MBeans 
plugin, or JConsole). In that console you can see how long it took to process 
one message, the number of messages, the mean time, and so on.

A screenshot of the JMX console (connecting with VisualVM to a JBoss Fuse 
container with a Camel route deployed as a bundle) is here: 
http://i.imgur.com/Vulwdet.png
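If you'd rather not attach a JMX console, the log component can also report throughput; a sketch (the queue name is assumed):

```xml
<!-- groupInterval makes the log endpoint print message counts and
     average rates every 1000 ms instead of logging each exchange -->
<route>
  <from uri="activemq:queue:perf.test"/>
  <to uri="log:perf?groupInterval=1000&amp;groupDelay=1000&amp;groupActiveOnly=false"/>
</route>
```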

Josef

On Saturday 02 of August 2014 23:08:49 balavino wrote:
 I'm using a (producer) Camel route to post messages to an ActiveMQ queue
 broker. I would like to check the performance: the number of messages that
 can be posted to the queue in a second.
 
 Let me know a reliable way of doing this.
 
 
 