Thanks Otavio.  Let me know if there is anything I can do to help.

Art


On Wed, Jun 5, 2024 at 7:46 AM Otavio Rodolfo Piske <[email protected]>
wrote:

> Thanks,
>
> I tried w/ 4.7.0-SNAPSHOT and it seems to happen with it too.
>
> I can't dig into that this week, but I'll try to keep this one in the back of
> my mind and take a look at it as soon as I can (in case no one picks it up
> before me).
>
> Kind regards
>
> On Wed, Jun 5, 2024 at 12:27 AM Arthur Naseef <[email protected]>
> wrote:
>
> > Jira issue created: https://issues.apache.org/jira/browse/CAMEL-20835
> >
> > Added tests, finished wiring the reifier for multicast, and updated
> > constructors for multicast, splitter, and recipient list.
> >
> > Thoughts?  Given that the impact is an OOM condition with no known
> > workaround outside of changing the route definition to no longer use
> > multicast with dynamic URLs, I hope this can get addressed promptly.
> >
> > Note that I did not test XML parsing.  Some of the code generation during
> > the Maven build - which modifies committed files - is confusing, so I could
> > definitely use some input on the correct way to manage/update some of those
> > files.  I can be reached on ASF Slack - in the Camel channel, or via DM.
> >
> > Art
> >
> >
> > On Tue, Jun 4, 2024 at 9:41 AM Arthur Naseef <[email protected]>
> wrote:
> >
> > > Here is a commit on my personal fork of the Camel GitHub repo that adds a
> > > setting to disable that cache.  I tested it with the reproducer and it
> > > appears to be working well:
> > >
> > > https://github.com/apache/camel/compare/main...artnaseef:camel:asn/disable-error-handler-cache-setting
> > >
> > >
> > > Art
> > >
> > >
> > > On Mon, Jun 3, 2024 at 12:46 PM Arthur Naseef <[email protected]>
> > > wrote:
> > >
> > >> An Out Of Memory (OOM) error occurs when using the Recipient List with a
> > >> large number of dynamic URLs.  For example:
> > >>
> > >>     .recipientList(simple("http://{{downstream-server}}/employee/${header.emplId}"))
> > >>
> > >> with a large number of distinct values for ${header.emplId} leads to the OOM.
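> > >>
> > >> For context, a route of roughly this shape reproduces the pattern (a minimal
> > >> sketch; the endpoint name "direct:employee-lookup" and the downstream-server
> > >> property are illustrative, not taken from the reproducer):
> > >>
> > >>     import org.apache.camel.builder.RouteBuilder;
> > >>
> > >>     public class EmployeeLookupRoute extends RouteBuilder {
> > >>         @Override
> > >>         public void configure() {
> > >>             // each distinct emplId value produces a distinct recipient URI,
> > >>             // and therefore a distinct processor inside the recipient list
> > >>             from("direct:employee-lookup")
> > >>                 .recipientList(simple(
> > >>                     "http://{{downstream-server}}/employee/${header.emplId}"));
> > >>         }
> > >>     }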
> > >>
> > >> REPRODUCER:
> > >> =============
> > >> https://github.com/artnaseef/camel-recipient-list-oom-reproducer
> > >>
> > >> - See the README.md for instructions to reproduce and detect the problem
> > >>
> > >> DETAILS
> > >> =======
> > >> The MulticastProcessor, which RecipientListProcessor extends, has the
> > >> following "unlimited" cache:
> > >>
> > >>     private final ConcurrentMap<Processor, Processor> errorHandlers =
> > >>         new ConcurrentHashMap<>();
> > >>
> > >> Entries are added to this map for every unique processor created - every
> > >> unique URL generates a unique processor.  The entries themselves are
> > >> wrapped processor instances for error handling IIUC (to support the custom
> > >> error handling used by multicast and recipient-list).  Entries are only
> > >> removed from this map on shutdown.  Ironically, there is an LRUCache for
> > >> the processors themselves, with a default maximum size of 1000, so the
> > >> wrapped processors may get recreated even though the error handler remains
> > >> in the map indefinitely.
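> > >>
> > >> To make the accumulation concrete, here is a simplified standalone sketch of
> > >> the pattern (names are made up for illustration; this is not the actual
> > >> MulticastProcessor code):
> > >>
> > >>     import java.util.concurrent.ConcurrentHashMap;
> > >>     import java.util.concurrent.ConcurrentMap;
> > >>
> > >>     public class ErrorHandlerCacheSketch {
> > >>         // grows by one entry per unique processor instance; nothing evicts it
> > >>         private final ConcurrentMap<Object, Object> errorHandlers =
> > >>             new ConcurrentHashMap<>();
> > >>
> > >>         Object getErrorHandler(Object processor) {
> > >>             // every distinct recipient URI yields a distinct processor, so with
> > >>             // dynamic URIs this map grows without bound until shutdown
> > >>             return errorHandlers.computeIfAbsent(processor, this::wrapInErrorHandler);
> > >>         }
> > >>
> > >>         private Object wrapInErrorHandler(Object processor) {
> > >>             return processor; // placeholder for the real error-handler wrapping
> > >>         }
> > >>     }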
> > >>
> > >> IMPACTED VERSIONS:
> > >> ==================
> > >> Appears to impact versions >= 3.10.0
> > >>
> > >> COMMIT: 0d9227ff16fb00e047fdd087740c87cce01bb545
> > >> =======
> > >> It appears this commit introduced the use of the errorHandlers
> > >> "unlimited" cache for recipient lists.
> > >>
> > >> FOLLOW-UP
> > >> ==========
> > >> I have ideas and questions for implementing a fix:
> > >>     - IDEA 1: We can use an LRUCache for this data structure as well (see
> > >> the sketch below).
> > >>     - Does it make more sense to remove the entries from errorHandlers
> > >> when the related Processor entry is removed from its LRUCache?
> > >>     - IDEA 2: setting on recipient list to disable the errorHandler cache
> > >> (for dynamic URLs with little chance of duplicates, this could be the best)
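> > >>
> > >> As a rough sketch of what IDEA 1 could look like with plain JDK types (an
> > >> actual fix would presumably reuse Camel's own LRU cache support; the maximum
> > >> size and all names below are illustrative):
> > >>
> > >>     import java.util.Collections;
> > >>     import java.util.LinkedHashMap;
> > >>     import java.util.Map;
> > >>
> > >>     public class BoundedErrorHandlerCache {
> > >>         private static final int MAX_ENTRIES = 1000; // illustrative default
> > >>
> > >>         // access-ordered map that drops the least recently used entry once
> > >>         // full, wrapped for thread safety since multicast runs concurrently
> > >>         private final Map<Object, Object> errorHandlers = Collections.synchronizedMap(
> > >>             new LinkedHashMap<Object, Object>(16, 0.75f, true) {
> > >>                 @Override
> > >>                 protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
> > >>                     return size() > MAX_ENTRIES;
> > >>                 }
> > >>             });
> > >>
> > >>         Object getOrCreate(Object processor) {
> > >>             return errorHandlers.computeIfAbsent(processor, this::wrap);
> > >>         }
> > >>
> > >>         private Object wrap(Object processor) {
> > >>             return processor; // placeholder for the real error-handler wrapping
> > >>         }
> > >>     }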
> > >>
> > >> Art
> > >>
> > >
> >
>
>
> --
> Otavio R. Piske
> http://orpiske.net
>
