DO NOT upgrade Camel 2.2.0 to 2.3.0 - IMPORTANT!!!
--------------------------------------------------
Key: JAMES-1013
URL: https://issues.apache.org/jira/browse/JAMES-1013
Project: JAMES Server
Issue Type: Bug
Components: SpoolManager & Processors
Affects Versions: Trunk
Reporter: tadpale
God save me.
I have been stuck on this problem for 4 days. This is a serious regression in James 3 when upgrading from Camel 2.2.0 to Camel 2.3.0.
--------------------------------------------------------------------------------------------------
Channel[
  Splitter[
    on: BeanExpression[bean:org.apache.james.transport.camel.matchersplit...@70fa2904 method: null]
    to: Pipeline[
      Channel[
        choice{
          when org.apache.james.transport.camel.matcherma...@379aff8e:
            Channel[org.apache.james.transport.camel.mailetproces...@20ffe027]
        }
      ],
      Channel[
        choice{
          when org.apache.james.transport.camel.mailstateequ...@7ed5315d:
            Pipeline[
              Channel[org.apache.james.transport.camel.disposeproces...@4eb8a7ce],
              Channel[Stop]
            ],
          when org.apache.james.transport.camel.mailstatenotequ...@5e9c11b8:
            Pipeline[
              Channel[BeanProcessor[bean: mailClaimCheck]],
              Channel[RecipientList[BeanExpression[bean:org.apache.james.transport.camel.jmsrecipientl...@584f778e method: null]]],
              Channel[Stop],
              Channel[removeProperty(matcher)],
              Channel[removeProperty(onMatchException)],
              Channel[removeProperty(logger)]
            ]
        }
      ]
    ]
    aggregate: UseOriginalAggregationStrategy
  ]
]
---------------------------------------------------------------------------
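For readers less used to Camel's route dumps, the structure above corresponds
roughly to the following Java-DSL sketch. This is a hypothetical
reconstruction for illustration only: the bean names "matcherSplitter",
"mailClaimCheck" and "jmsRecipientList" and the header-based predicates are
placeholders, not James' real beans.
---------------------------------------------------------------------------
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.UseOriginalAggregationStrategy;

// Hypothetical Java-DSL equivalent of the dumped route; names are
// illustrative stand-ins for James' matcher/mailet beans.
public class TransportRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:transport")
            // Splitter[on: BeanExpression ...] aggregate: UseOriginalAggregationStrategy
            .split().method("matcherSplitter")
                .aggregationStrategy(new UseOriginalAggregationStrategy())
                // first Channel/choice: run the mailet when the matcher matched
                .choice()
                    .when(header("matched").isEqualTo(true))
                        .to("direct:mailetProcessor")
                .end()
                // second Channel/choice: dispose-and-Stop, or hand off to JMS
                .choice()
                    .when(header("mailState").isEqualTo("ghost"))
                        .to("direct:disposeProcessor")
                        .stop() // the Stop channel that misbehaves under 2.3.0
                    .when(header("mailState").isNotEqualTo("ghost"))
                        .beanRef("mailClaimCheck")
                        .recipientList().method("jmsRecipientList")
                        .stop()
                        .removeProperty("matcher")
                        .removeProperty("onMatchException")
                        .removeProperty("logger")
                .end();
    }
}
---------------------------------------------------------------------------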
See the Stop channel that Camel generates in the MailStateEquals branch
(tested with the RecipientIsLocal matcher in the transport processor in
spoolmanager.xml). That Stop does not terminate the current processor
completely: the HostIsLocal matcher that follows is still executed every
time. I suspect the cause is a thread lock or some other synchronization
change in Camel 2.3.0.
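To check that suspicion outside James, a minimal stand-alone sketch could
look like the following (hypothetical code assuming a plain Camel setup;
the route, bodies and log text are made up for illustration). With Camel
2.2.0, the final log line should never appear for the stopped split part:
---------------------------------------------------------------------------
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Hypothetical stand-alone check: stop() inside a split should end routing
// for that split part, so "still processing ghost" must never be logged.
public class StopRegressionCheck {
    public static void main(String[] args) throws Exception {
        CamelContext ctx = new DefaultCamelContext();
        ctx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:in")
                    // stop() inside a split, as in the James route above
                    .split(body().tokenize(","))
                        .choice()
                            .when(body().isEqualTo("ghost"))
                                .log("disposing ${body}")
                                .stop() // routing for this part must end here
                        .end()
                        // should only be reached for parts that were not stopped
                        .log("still processing ${body}");
            }
        });
        ctx.start();
        ctx.createProducerTemplate().sendBody("direct:in", "ghost,normal");
        ctx.stop();
    }
}
---------------------------------------------------------------------------
If "still processing ghost" is logged under 2.3.0 but not under 2.2.0, that
would point at Camel's stop handling rather than at James.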