[ https://issues.apache.org/jira/browse/CAMEL-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437466#comment-17437466 ]

Jeremy Ross commented on CAMEL-17144:
-------------------------------------

Please have a look at [^parallel split test 2.txt]. If you filter down to a 
particular exchange sent by the scheduler, you can see that the split 
children do in fact all complete before the split returns. Here's an 
excerpt:

{code}
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.260: Before the processor the body must be empty : []
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.267: Exchange updated number from 0 to 9 : [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.280: The child exchange: 0
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.280: The child exchange: 1
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.280: The child exchange: 2
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.280: The child exchange: 3
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.284: The child exchange: 4
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.284: The child exchange: 5
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.285: The child exchange: 6
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.286: The child exchange: 7
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.287: The child exchange: 8
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.289: The child exchange: 9
1FCC72820C953BE-0000000000000000-2021-10-30T09:38:37.290: Split done: [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
{code}
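
For reference, one way to make that per-exchange filtering easier is to put 
the relevant exchange id into the log lines themselves. A minimal sketch, 
assuming the splitter's usual behavior of storing the parent exchange id in 
the CamelCorrelationId property on each child exchange:

{code:java}
from("direct:test")
        .log("parent ${exchangeId}: before split: [${body}]")
        .split(body())
                .parallelProcessing(true)
                // each child exchange carries the parent id in CamelCorrelationId
                .log("parent ${exchangeProperty.CamelCorrelationId}: child: ${body}")
        .end()
        .log("parent ${exchangeId}: split done: [${body}]");
{code}

Grepping the output for a single parent id should then show all ten children 
logged before that parent's "split done" line.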

The confusing part with greedy=true is that the scheduler seems to generate 
subsequent exchanges concurrently. So you're seeing "Split done" messages 
from *other* threads complete before a given thread's split children are 
complete. But again, within a particular thread and parent exchange, the 
split *is* waiting for its children to complete before returning. The 
documentation says the scheduler's default thread pool size is 1, so perhaps 
there is a related bug, because I don't think you should be seeing concurrent 
exchanges from the scheduler.
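
If that theory is right, taking greedy out of the picture should make the 
interleaving disappear. A sketch of the reproducer's scheduler endpoint with 
greedy disabled and the pool size pinned explicitly (poolSize is the 
scheduler component's thread pool option; 1 is already its documented 
default):

{code:java}
// With greedy=false the scheduler only fires on the fixed delay, so a new
// parent exchange should not start while the previous split is still running.
from("scheduler:testBug?initialDelay=1000&useFixedDelay=true&delay=5000&greedy=false&poolSize=1")
        .to("direct:test");
{code}

If "Split done" lines still interleave across exchanges with this endpoint, 
that would point at the scheduler rather than the splitter.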


> Split with aggregation strategy does not wait for all subtasks to be completed
> -------------------------------------------------------------------------------
>
>                 Key: CAMEL-17144
>                 URL: https://issues.apache.org/jira/browse/CAMEL-17144
>             Project: Camel
>          Issue Type: Bug
>          Components: camel-core
>    Affects Versions: 3.0.0, 3.12.0
>            Reporter: Thomas Sergent
>            Priority: Major
>         Attachments: parallel split test 2.txt, parallel split test.txt, test-camel-scheduler.zip
>
>
> Hi, after trying to upgrade from Camel 2.25.4 to 3.12 I ran into trouble with 
> the split EIP when parallelProcessing is enabled. I cannot share my project, 
> but here is a small sample that reproduces the issue. 
>  
> {code:java}
> from("scheduler:testBug?initialDelay=1000&useFixedDelay=true&delay=5000&greedy=true")
>         .to("direct:test");
>
> from("direct:test")
>         .log("Before the processor the body must be empty : [${body}]")
>         .process(exchange -> {
>             // fill the body with the strings "0".."9"
>             exchange.getIn().setBody(IntStream.range(0, 10)
>                     .mapToObj(i -> "" + i)
>                     .collect(Collectors.toList()));
>         })
>         .log("Exchange updated number from 0 to 9 : [${body}]")
>         .split(body())
>                 .parallelProcessing(true)
>                 .log("The child exchange: ${body}")
>         .end()
>         .log("Split done: [${body}]")
>         // kill switch: report no polled messages so greedy does not re-fire
>         .setProperty(Exchange.SCHEDULER_POLLED_MESSAGES, simple("false"));
> {code}
>  
> With Camel 2.25.x the expected behavior (each split child is processed before 
> returning to the caller) works, but not with 3.x, where the process loops 
> indefinitely. 
>  
> Note: I have added a kill switch to keep the process from hanging. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
