Re: Camel use case

2024-01-31 Thread Jeremy Ross
I don't know if actual stack frames are used up, but each iteration of the
loop definitely shows up in the route history. For this reason, I don't use
loop for iterating through a large number of items.

On Wed, Jan 31, 2024 at 12:31 PM Anthony Wu  wrote:

> Hi folks - I had thought that the loop EIP was meant only for testing
> purposes? In the 3.14.x LTS docs, the page reads (my emphasis):
>
> The Loop EIP allows for processing a message a number of times, possibly in
> a different way for each iteration. _Useful mostly during testing._
>
> See
>
> https://stackoverflow.com/questions/51257248/camel-stackoverflow-error-when-route-is-called-recursively
> as well.
>
> In the past I've used a SEDA queue like the following in Java DSL:
>
>
> from("seda:foo").process(processorThatTerminatesWhenBodyIsExhausted).to("seda:foo")
>
> Any insight on whether the loop EIP is safe to use (no longer suffers from
> memory overrun) here is greatly appreciated.
>
> On Wed, Jan 31, 2024 at 8:45 AM Jeremy Ross 
> wrote:
>
> > If you keep copy=false (default), loop sends the same exchange for each
> > iteration. This allows you to manipulate headers inside the loop and
> > subsequent iterations would see those header changes.
> >
> > On Wed, Jan 31, 2024 at 2:18 AM Ghassen Kahri  >
> > wrote:
> >
> > > Hi Jeremy,
> > >
> > > The idea of using the loop EIP crossed my mind as well, but I'm
> uncertain
> > > about the feasibility of manipulating headers for each iteration.
> > >
> > > I appreciate your concern.
> > >
> > > Thank you.
> > >
> > > On Mon, Jan 29, 2024 at 18:35, Jeremy Ross wrote:
> > >
> > > > > To achieve this, I iterated through the route X times, each time
> > > > executing
> > > > a query with a different offset. I utilized Camel headers to store
> the
> > > > offset and other flags, as mentioned in my initial email.
> > > >
> > > > This is a perfectly reasonable approach IMO.
> > > >
> > > > > Does Camel have any built-in functionality that
> > > > accomplishes the same task? Additionally, since I was "improvising,"
> > I'm
> > > > curious if my code adheres to best practices. I sensed that it might
> > not,
> > > > given that I implemented business logic at the route level.
> > > >
> > > > The EIPs are the building blocks that allow you to accomplish this
> type
> > > of
> > > > use case. Apart from EIPs, Camel doesn't have specific functionality
> to
> > > > query and process paged resources. The Loop EIP (
> > > > https://camel.apache.org/components/4.0.x/eips/loop-eip.html) might
> > be a
> > > > little more idiomatic than a route calling itself recursively.
> > > >
> > > >
> > > > On Fri, Jan 26, 2024 at 3:07 AM Ghassen Kahri <
> > ghassen.ka...@codeonce.fr
> > > >
> > > > wrote:
> > > >
> > > > > Hey Raymond, I appreciate your response.
> > > > >
> > > > > We are both on board with the idea of dividing the query response
> > into
> > > > > chunks. Let's discuss the "how" in Camel.
> > > > >
> > > > > To achieve this, I iterated through the route X times, each time
> > > > executing
> > > > > a query with a different offset. I utilized Camel headers to store
> > the
> > > > > offset and other flags, as mentioned in my initial email.
> > > > >
> > > > > My primary question is: Does Camel have any built-in functionality
> > that
> > > > > accomplishes the same task? Additionally, since I was
> "improvising,"
> > > I'm
> > > > > curious if my code adheres to best practices. I sensed that it
> might
> > > not,
> > > > > given that I implemented business logic at the route level.
> > > > >
> > > > > On Thu, Jan 25, 2024 at 15:46, ski n wrote:
> > > > >
> > > > > > Yes, dividing it into chunks is a good practice. This applies to
> > > > > > message-based systems in general, not specific to Camel.
> > > > > > Let's discuss both ways of processing messages:
> > > > > >
> > > > > > 1. One big message
> > > > > >
> > > > > > Say the message is 100 GB+ and this is processed by some integration
> > > > > > software on a server, you need to scale the server for that amount.
> > > > > > This means both memory and CPU must be capable of processing that
> > > > > > amount of data. When you want to apply EIPs (like filters or
> > > > > > transformations) this will be difficult, because the resources needed
> > > > > > must match that.
> > > > > >
> > > > > > Say this big message comes once a week, then you have a very big
> > > > > > server basically running for nothing.
> > > > > >
> > > > > > 2. Many small messages
> > > > > >
> > > > > > Because of 1 it's generally the best practice to have fixed-size
> > > > > > smaller messages. When possible, directly on the source.
> > > > > > If this is somehow not possible, you can split them and move them back
> > > > > > to a Kafka topic, then stream the messages and apply the actual EIPs
> > > > > > to the small messages. Some advantages are:
> > > > > >
> > > > > > 1. 

Re: Camel use case

2024-01-31 Thread Anthony Wu
Hi folks - I had thought that the loop EIP was meant only for testing
purposes? In the 3.14.x LTS docs, the page reads (my emphasis):

The Loop EIP allows for processing a message a number of times, possibly in
a different way for each iteration. _Useful mostly during testing._

See
https://stackoverflow.com/questions/51257248/camel-stackoverflow-error-when-route-is-called-recursively
as well.

In the past I've used a SEDA queue like the following in Java DSL:

from("seda:foo").process(processorThatTerminatesWhenBodyIsExhausted).to("seda:foo")

Any insight on whether the loop EIP is safe to use (no longer suffers from
memory overrun) here is greatly appreciated.

On Wed, Jan 31, 2024 at 8:45 AM Jeremy Ross  wrote:

> If you keep copy=false (default), loop sends the same exchange for each
> iteration. This allows you to manipulate headers inside the loop and
> subsequent iterations would see those header changes.
>
> On Wed, Jan 31, 2024 at 2:18 AM Ghassen Kahri 
> wrote:
>
> > Hi Jeremy,
> >
> > The idea of using the loop EIP crossed my mind as well, but I'm uncertain
> > about the feasibility of manipulating headers for each iteration.
> >
> > I appreciate your concern.
> >
> > Thank you.
> >
> > On Mon, Jan 29, 2024 at 18:35, Jeremy Ross wrote:
> >
> > > > To achieve this, I iterated through the route X times, each time
> > > executing
> > > a query with a different offset. I utilized Camel headers to store the
> > > offset and other flags, as mentioned in my initial email.
> > >
> > > This is a perfectly reasonable approach IMO.
> > >
> > > > Does Camel have any built-in functionality that
> > > accomplishes the same task? Additionally, since I was "improvising,"
> I'm
> > > curious if my code adheres to best practices. I sensed that it might
> not,
> > > given that I implemented business logic at the route level.
> > >
> > > The EIPs are the building blocks that allow you to accomplish this type
> > of
> > > use case. Apart from EIPs, Camel doesn't have specific functionality to
> > > query and process paged resources. The Loop EIP (
> > > https://camel.apache.org/components/4.0.x/eips/loop-eip.html) might
> be a
> > > little more idiomatic than a route calling itself recursively.
> > >
> > >
> > > On Fri, Jan 26, 2024 at 3:07 AM Ghassen Kahri <
> ghassen.ka...@codeonce.fr
> > >
> > > wrote:
> > >
> > > > Hey Raymond, I appreciate your response.
> > > >
> > > > We are both on board with the idea of dividing the query response
> into
> > > > chunks. Let's discuss the "how" in Camel.
> > > >
> > > > To achieve this, I iterated through the route X times, each time
> > > executing
> > > > a query with a different offset. I utilized Camel headers to store
> the
> > > > offset and other flags, as mentioned in my initial email.
> > > >
> > > > My primary question is: Does Camel have any built-in functionality
> that
> > > > accomplishes the same task? Additionally, since I was "improvising,"
> > I'm
> > > > curious if my code adheres to best practices. I sensed that it might
> > not,
> > > > given that I implemented business logic at the route level.
> > > >
> > > > On Thu, Jan 25, 2024 at 15:46, ski n wrote:
> > > >
> > > > > Yes, dividing it into chunks is a good practice. This applies to
> > > > > message-based systems in general, not specific to Camel.
> > > > > Let's discuss both ways of processing messages:
> > > > >
> > > > > 1. One big message
> > > > >
> > > > > Say the message is 100 GB+ and this is processed by some integration
> > > > > software on a server, you need to scale the server for that amount.
> > > > > This means both memory and CPU must be capable of processing that
> > > > > amount of data. When you want to apply EIPs (like filters or
> > > > > transformations) this will be difficult, because the resources needed
> > > > > must match that.
> > > > >
> > > > > Say this big message comes once a week, then you have a very big
> > > > > server basically running for nothing.
> > > > >
> > > > > 2. Many small messages
> > > > >
> > > > > Because of 1 it's generally the best practice to have fixed-size
> > > > > smaller messages. When possible, directly on the source.
> > > > > If this is somehow not possible, you can split them and move them back
> > > > > to a Kafka topic, then stream the messages and apply the actual EIPs
> > > > > to the small messages. Some advantages are:
> > > > >
> > > > > 1. Predictable: Every message is of the same size, so you can load test
> > > > > this and match resources.
> > > > > 2. Resources: A small message needs fewer resources (CPU/Memory) to
> > > > > process.
> > > > > 3. Load: The load is spread over time (you can use a smaller server).
> > > > > 4. Realtime: You don't need to wait until all data is gathered and then
> > > > > sent in a batch; you can process it as it happens.
> > > > > 5. Scaling: When the load is high, you may add 

Re: Camel use case

2024-01-31 Thread Jeremy Ross
If you keep copy=false (default), loop sends the same exchange for each
iteration. This allows you to manipulate headers inside the loop and
subsequent iterations would see those header changes.
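
For example, a paged query along those lines could be sketched like this
inside a RouteBuilder's configure() (the endpoint, bean, page count and
header names are assumptions for illustration, not tested code):

    from("kafka:incoming")
        .setHeader("offset", constant(0))
        // copy=false (the default): every iteration works on the same
        // exchange, so the header updated below is visible in the next pass
        .loop(10)
            // hypothetical bean that queries one page using the "offset" header
            .bean(PagedQueryService.class, "fetchChunk")
            .to("direct:processChunk")
            // advance the offset for the next iteration
            .process(e -> e.getIn().setHeader("offset",
                    e.getIn().getHeader("offset", Integer.class) + 500))
        .end();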

On Wed, Jan 31, 2024 at 2:18 AM Ghassen Kahri 
wrote:

> Hi Jeremy,
>
> The idea of using the loop EIP crossed my mind as well, but I'm uncertain
> about the feasibility of manipulating headers for each iteration.
>
> I appreciate your concern.
>
> Thank you.
>
> On Mon, Jan 29, 2024 at 18:35, Jeremy Ross wrote:
>
> > > To achieve this, I iterated through the route X times, each time
> > executing
> > a query with a different offset. I utilized Camel headers to store the
> > offset and other flags, as mentioned in my initial email.
> >
> > This is a perfectly reasonable approach IMO.
> >
> > > Does Camel have any built-in functionality that
> > accomplishes the same task? Additionally, since I was "improvising," I'm
> > curious if my code adheres to best practices. I sensed that it might not,
> > given that I implemented business logic at the route level.
> >
> > The EIPs are the building blocks that allow you to accomplish this type
> of
> > use case. Apart from EIPs, Camel doesn't have specific functionality to
> > query and process paged resources. The Loop EIP (
> > https://camel.apache.org/components/4.0.x/eips/loop-eip.html) might be a
> > little more idiomatic than a route calling itself recursively.
> >
> >
> > On Fri, Jan 26, 2024 at 3:07 AM Ghassen Kahri  >
> > wrote:
> >
> > > Hey Raymond, I appreciate your response.
> > >
> > > We are both on board with the idea of dividing the query response into
> > > chunks. Let's discuss the "how" in Camel.
> > >
> > > To achieve this, I iterated through the route X times, each time
> > executing
> > > a query with a different offset. I utilized Camel headers to store the
> > > offset and other flags, as mentioned in my initial email.
> > >
> > > My primary question is: Does Camel have any built-in functionality that
> > > accomplishes the same task? Additionally, since I was "improvising,"
> I'm
> > > curious if my code adheres to best practices. I sensed that it might
> not,
> > > given that I implemented business logic at the route level.
> > >
> > > On Thu, Jan 25, 2024 at 15:46, ski n wrote:
> > >
> > > > Yes, dividing it into chunks is a good practice. This applies to
> > > > message-based systems in general, not specific to Camel.
> > > > Let's discuss both ways of processing messages:
> > > >
> > > > 1. One big message
> > > >
> > > > Say the message is 100 GB+ and this is processed by some integration
> > > > software on a server, you need to scale the server for that amount.
> > > > This means both memory and CPU must be capable of processing that
> > > > amount of data. When you want to apply EIPs (like filters or
> > > > transformations) this will be difficult, because the resources needed
> > > > must match that.
> > > >
> > > > Say this big message comes once a week, then you have a very big
> > > > server basically running for nothing.
> > > >
> > > > 2. Many small messages
> > > >
> > > > Because of 1 it's generally the best practice to have fixed-size
> > > > smaller messages. When possible, directly on the source.
> > > > If this is somehow not possible, you can split them and move them back
> > > > to a Kafka topic, then stream the messages and apply the actual EIPs
> > > > to the small messages. Some advantages are:
> > > >
> > > > 1. Predictable: Every message is of the same size, so you can load test
> > > > this and match resources.
> > > > 2. Resources: A small message needs fewer resources (CPU/Memory) to
> > > > process.
> > > > 3. Load: The load is spread over time (you can use a smaller server).
> > > > 4. Realtime: You don't need to wait until all data is gathered and then
> > > > sent in a batch; you can process it as it happens.
> > > > 5. Scaling: When the load is high, you may add multiple threads or even
> > > > multiple pods/containers to scale; when you don't need them anymore,
> > > > you can scale back.
> > > >
> > > > Raymond
> > > >
> > > > On Thu, Jan 25, 2024 at 2:32 PM Ghassen Kahri <
> > ghassen.ka...@codeonce.fr
> > > >
> > > > wrote:
> > > >
> > > > > Hello community,
> > > > >
> > > > > I am currently working on a feature within the Camel project that
> > > > involves
> > > > > processing Kafka messages (String) and performing a query based on
> > that
> > > > > message. Initially, I implemented a classic route that called a
> > service
> > > > > method responsible for executing the query. However, I encountered
> an
> > > > issue
> > > > > with the size of the query result, as the memory couldn't handle
> > such a
> > > > > massive amount of data.
> > > > >
> > > > > In response to this 

Re: How to stop aggregate on exception

2024-01-31 Thread Jeremy Ross
Aggregated exchanges are fundamentally disconnected from the point of
aggregation. I don't know of a way to circumvent this using standard
features. You might consider creating a processor that performs your
aggregation logic.
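
One way that could look is a small stateful processor that batches inline,
so failures surface to the caller (only a sketch; the endpoint name, batch
size and thread-safety needs are assumptions, and a timeout-based flush
would still have to be added to match completionTimeout):

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.camel.Exchange;
    import org.apache.camel.Processor;
    import org.apache.camel.ProducerTemplate;

    // Accumulates bodies and forwards a batch once it reaches the configured
    // size. Because it runs inline, an exception thrown here (or by the batch
    // handler) propagates back to the calling route.
    public class BatchingProcessor implements Processor {
        private final int batchSize;
        private final ProducerTemplate template;   // emits completed batches
        private final List<Object> buffer = new ArrayList<>();

        public BatchingProcessor(int batchSize, ProducerTemplate template) {
            this.batchSize = batchSize;
            this.template = template;
        }

        @Override
        public synchronized void process(Exchange exchange) throws Exception {
            buffer.add(exchange.getIn().getBody());
            if (buffer.size() >= batchSize) {
                template.sendBody("direct:handleBatch", new ArrayList<>(buffer));
                buffer.clear();
            }
        }
    }

Wired into the split as .process(new BatchingProcessor(500, template)), an
exception raised while handling a batch stops the route like any other
processor failure, instead of being swallowed by a detached aggregator.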

On Wed, Jan 31, 2024 at 6:07 AM Aditya Kavathekar <
kavathekar.adi...@gmail.com> wrote:

> Hi Jeremy,
> Thanks for your response.
>
> I tried the SynchronousExecutorService, but it is still not stopping in case
> of an exception.
> The aggregation continues.
> Please let me know if I can try something else.
>
> Thanks,
> Aditya
>
> On Wed, 31 Jan 2024, 12:58 am Jeremy Ross, 
> wrote:
>
> > Using a standalone aggregator as you've configured will result in the
> > aggregations completing in a separate thread with the side effect that
> the
> > aggregated exchange is completely disconnected from the split.
> >
> > And you're correct that if you use the splitter's aggregation slot that
> you
> > don't get to provide completion parameters. That's because the
> aggregation
> > is complete when there are no more items to split.
> >
> > Using your standalone aggregator, you can try populating the aggregator's
> > executorService option with a SynchronousExecutorService. This will
> result
> > in aggregations completing on the same thread as the splitter. But I'm
> not
> > sure if that means that exceptions in the aggregator bubble up to the
> > splitter. You'll have to test this.
> >
> >
> > On Tue, Jan 30, 2024 at 11:24 AM Aditya Kavathekar <
> > kavathekar.adi...@gmail.com> wrote:
> >
> > > Hello community,
> > > I am using camel 3.20.5 in XML DSL.
> > > I am trying to achieve batch processing and below is my source code.
> > >
> > > <split>
> > >   <aggregate completionSize="500" completionTimeout="1000">
> > >     //Some code
> > >     //Exception occurs here
> > >   </aggregate>
> > > </split>
> > >
> > > OnException block here
> > >
> > > Now here I want to stop the aggregation in case any exception occurs inside
> > > it, but the aggregator just executes the code from the onException block and
> > > continues its execution. Please suggest a way to stop the aggregation in
> > > case of an exception.
> > >
> > > Alternatively, if I add the aggregation strategy to the split then it stops
> > > on exception, but I am not able to provide the completionSize or
> > > completionTimeout options on the split. Please suggest if I am missing
> > > something here.
> > >
> > > Thanks,
> > > Aditya
> > >
> >
>


Why has timePeriodMillis for the Throttle EIP been removed and how can I account for this

2024-01-31 Thread Schmeier, Jannik
Hello,

I'm wondering why the timePeriodMillis option has been removed for the throttle 
EIP.

I have an endpoint that can only receive about 5 requests per minute, otherwise the
request will fail. I accounted for that by using a timePeriodMillis setting of
60000 ms and a throttle value of 4.
Now with the updated Throttle EIP I can't do that anymore.
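
For context, the pre-4.3 route being described would have looked something
like this (endpoint names are placeholders):

    from("direct:submit")
        // Camel 3.x style: at most 4 exchanges per 60-second window
        .throttle(4).timePeriodMillis(60000)
        .to("direct:callRateLimitedEndpoint");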

The upgrade guide suggests that the default time period is 1000 ms now, but 
obviously I can't work with that: 
https://camel.apache.org/manual/camel-4x-upgrade-guide-4_3.html#_throttle_eip

Any suggestions?

Best regards


Camel-Opentelemetry: inject traceid to create new span

2024-01-31 Thread Chio Chuan Ooi
Hi,

I am currently using camel-opentelemetry with Jaeger.
I understand that Jaeger uses the header key “uber-trace-id”. Is it
possible to inject and replace the trace id within the route?

As we have some asynchronous transactions from external systems where the
callback doesn't return the trace id, we wish to inject it from a cache.
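
The naive approach I have in mind is just setting that header from our cache
before continuing the route, roughly like this (a sketch only; whether
camel-opentelemetry/Jaeger actually picks the header up for the new span is
exactly what I am unsure about, and TraceIdCache is a hypothetical bean):

    from("direct:callback")
        // hypothetical cache lookup returning the stored trace id
        .setHeader("uber-trace-id", method(TraceIdCache.class, "lookup"))
        .to("direct:continueProcessing");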

Thanks and Regards,
Chio Chuan


Re: How to stop aggregate on exception

2024-01-31 Thread Aditya Kavathekar
Hi Jeremy,
Thanks for your response.

I tried the SynchronousExecutorService, but it is still not stopping in case
of an exception.
The aggregation continues.
Please let me know if I can try something else.

Thanks,
Aditya

On Wed, 31 Jan 2024, 12:58 am Jeremy Ross,  wrote:

> Using a standalone aggregator as you've configured will result in the
> aggregations completing in a separate thread with the side effect that the
> aggregated exchange is completely disconnected from the split.
>
> And you're correct that if you use the splitter's aggregation slot that you
> don't get to provide completion parameters. That's because the aggregation
> is complete when there are no more items to split.
>
> Using your standalone aggregator, you can try populating the aggregator's
> executorService option with a SynchronousExecutorService. This will result
> in aggregations completing on the same thread as the splitter. But I'm not
> sure if that means that exceptions in the aggregator bubble up to the
> splitter. You'll have to test this.
>
>
> On Tue, Jan 30, 2024 at 11:24 AM Aditya Kavathekar <
> kavathekar.adi...@gmail.com> wrote:
>
> > Hello community,
> > I am using camel 3.20.5 in XML DSL.
> > I am trying to achieve batch processing and below is my source code.
> >
> > <split>
> >   <aggregate completionSize="500" completionTimeout="1000">
> >     //Some code
> >     //Exception occurs here
> >   </aggregate>
> > </split>
> >
> > OnException block here
> >
> > Now here I want to stop the aggregation in case any exception occurs inside
> > it, but the aggregator just executes the code from the onException block and
> > continues its execution. Please suggest a way to stop the aggregation in
> > case of an exception.
> >
> > Alternatively, if I add the aggregation strategy to the split then it stops
> > on exception, but I am not able to provide the completionSize or
> > completionTimeout options on the split. Please suggest if I am missing
> > something here.
> >
> > Thanks,
> > Aditya
> >
>


Re: Camel use case

2024-01-31 Thread Ghassen Kahri
Hi Jeremy,

The idea of using the loop EIP crossed my mind as well, but I'm uncertain
about the feasibility of manipulating headers for each iteration.

I appreciate your concern.

Thank you.

On Mon, Jan 29, 2024 at 18:35, Jeremy Ross wrote:

> > To achieve this, I iterated through the route X times, each time
> executing
> a query with a different offset. I utilized Camel headers to store the
> offset and other flags, as mentioned in my initial email.
>
> This is a perfectly reasonable approach IMO.
>
> > Does Camel have any built-in functionality that
> accomplishes the same task? Additionally, since I was "improvising," I'm
> curious if my code adheres to best practices. I sensed that it might not,
> given that I implemented business logic at the route level.
>
> The EIPs are the building blocks that allow you to accomplish this type of
> use case. Apart from EIPs, Camel doesn't have specific functionality to
> query and process paged resources. The Loop EIP (
> https://camel.apache.org/components/4.0.x/eips/loop-eip.html) might be a
> little more idiomatic than a route calling itself recursively.
>
>
> On Fri, Jan 26, 2024 at 3:07 AM Ghassen Kahri 
> wrote:
>
> > Hey Raymond, I appreciate your response.
> >
> > We are both on board with the idea of dividing the query response into
> > chunks. Let's discuss the "how" in Camel.
> >
> > To achieve this, I iterated through the route X times, each time
> executing
> > a query with a different offset. I utilized Camel headers to store the
> > offset and other flags, as mentioned in my initial email.
> >
> > My primary question is: Does Camel have any built-in functionality that
> > accomplishes the same task? Additionally, since I was "improvising," I'm
> > curious if my code adheres to best practices. I sensed that it might not,
> > given that I implemented business logic at the route level.
> >
> > On Thu, Jan 25, 2024 at 15:46, ski n wrote:
> >
> > > Yes, dividing it into chunks is a good practice. This applies to
> > > message-based systems in general, not specific to Camel.
> > > Let's discuss both ways of processing messages:
> > >
> > > 1. One big message
> > >
> > > Say the message is 100 GB+ and this is processed by some integration
> > > software on a server, you need to scale the server for that amount.
> > > This means both memory and CPU must be capable of processing that
> > > amount of data. When you want to apply EIPs (like filters or
> > > transformations) this will be difficult, because the resources needed
> > > must match that.
> > >
> > > Say this big message comes once a week, then you have a very big
> > > server basically running for nothing.
> > >
> > > 2. Many small messages
> > >
> > > Because of 1 it's generally the best practice to have fixed-size
> > > smaller messages. When possible, directly on the source.
> > > If this is somehow not possible, you can split them and move them back
> > > to a Kafka topic, then stream the messages and apply the actual EIPs
> > > to the small messages. Some advantages are:
> > >
> > > 1. Predictable: Every message is of the same size, so you can load test
> > > this and match resources.
> > > 2. Resources: A small message needs fewer resources (CPU/Memory) to
> > > process.
> > > 3. Load: The load is spread over time (you can use a smaller server).
> > > 4. Realtime: You don't need to wait until all data is gathered and then
> > > sent in a batch; you can process it as it happens.
> > > 5. Scaling: When the load is high, you may add multiple threads or even
> > > multiple pods/containers to scale; when you don't need them anymore,
> > > you can scale back.
> > >
> > > Raymond
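
Raymond's split-and-stream suggestion could look roughly like this in Java
DSL (a sketch; the source endpoint, split expression and topic name are
assumptions, and the Kafka broker configuration is omitted):

    from("file:big-messages")
        // stream the split so the whole payload never sits in memory at once
        .split(body().tokenize("\n")).streaming()
            .to("kafka:small-messages");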
> > >
> > > On Thu, Jan 25, 2024 at 2:32 PM Ghassen Kahri <
> ghassen.ka...@codeonce.fr
> > >
> > > wrote:
> > >
> > > > Hello community,
> > > >
> > > > I am currently working on a feature within the Camel project that
> > > involves
> > > > processing Kafka messages (String) and performing a query based on
> that
> > > > message. Initially, I implemented a classic route that called a
> service
> > > > method responsible for executing the query. However, I encountered an
> > > issue
> > > > with the size of the query result, as the memory couldn't handle
> such a
> > > > massive amount of data.
> > > >
> > > > In response to this challenge, I devised an alternative solution that
> > > might
> > > > be considered unconventional. The approach involves querying the
> > database
> > > > multiple times and retrieving the results in manageable chunks.
> > > > Consequently, the route needs to be executed multiple times. The
> > current
> > > > structure of my route is as follows:
> > > >
> > > >
> > > > from(getInput())
> > > > .routeId(getRouteId())
> > > >
> > > > .bean(Service.class, "extractDataInChunks")
> > > >
> > > >  

[ANNOUNCE] Apache Camel 3.22.1 (LTS) Released

2024-01-31 Thread Gregor Zurowski
The Camel PMC is pleased to announce the release of Apache Camel 3.22.1 (LTS).

Apache Camel is an open source integration framework that empowers you
to quickly and easily integrate various systems consuming or producing
data.

This release contains 7 new features and improvements.

The release is available for immediate download at:

https://camel.apache.org/download/

For more details please take a look at the release notes at:

https://camel.apache.org/releases/release-3.22.1/