I'm saying that in the scenario where you're preventing the producer from
sending messages because the consumer has fallen behind, the inability to
send messages can impact the producer process if it's not designed
explicitly to handle that situation. Maybe the sending thread blocks
synchronously, preventing the process from doing other work. Or maybe
exceptions are thrown, so the process isn't blocked but never executes
the code after the message send. Or maybe it goes into a tight
retry loop and spams the logs, eventually filling the disk because the logs
were never configured to rotate.

It's possible to architect the producer process so it continues to execute
correctly even if it's unable to send these messages, but you have to do
that consciously, and it's easy to mess up.
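
For illustration, here's a minimal sketch of the kind of thing I mean,
assuming a JMS-style API (the class name, buffer size, and backoff
delay are all hypothetical, not anything Artemis-specific):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class NonBlockingSender {
        // Bounded hand-off buffer so the main thread never blocks on
        // the broker.
        private final BlockingQueue<String> buffer =
                new ArrayBlockingQueue<>(1000);

        // Called from the main processing thread; never blocks. offer()
        // returns false instead of blocking when the buffer is full, so
        // the caller can drop, log, or otherwise degrade gracefully.
        public boolean trySend(String payload) {
            return buffer.offer(payload);
        }

        // Runs on a dedicated sender thread.
        public void senderLoop(javax.jms.Session session,
                               javax.jms.MessageProducer producer) {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String payload = buffer.take();
                    producer.send(session.createTextMessage(payload));
                } catch (javax.jms.JMSException e) {
                    // Back off instead of spinning in a tight retry loop
                    // (the log-spam failure mode above). A real
                    // implementation would also decide whether to
                    // re-buffer the payload, which this sketch drops.
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

The point isn't this particular design; it's that the failure behavior
(drop? buffer? block?) has to be an explicit decision rather than an
accident of whatever the client library does by default.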

The biggest architectural advantage of messaging middleware is that it
decouples the producer from the consumer; one major benefit of that
decoupling is that slowness in one doesn't impact the other, and
systems are typically architected to take advantage of it. So while
it's possible to architect a system to limit the producer when the
consumer is behind, I'd be very hesitant to do so, and would strongly
consider other approaches such as speeding up the consumer or
configuring the message broker to drop messages not consumed within
some time window (i.e. a TTL).
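
For example, with a JMS producer the TTL approach looks roughly like
this (a sketch only; the 60-second window is an arbitrary number for
illustration, and the session and queue come from the usual connection
setup):

    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class TtlSender {
        // Stamp a 60-second TTL on outgoing messages so the broker
        // discards (or dead-letters, depending on configuration)
        // anything the consumer hasn't taken within that window,
        // instead of letting the backlog grow without bound.
        public void sendWithTtl(Session session, Queue queue,
                                String payload) throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            producer.setTimeToLive(60_000); // ms; applies to later sends
            producer.send(session.createTextMessage(payload));
        }
    }

If the producer can't be changed, I believe Artemis can also apply an
expiry at the address level (the expiry-delay address setting in
broker.xml), which accomplishes the same thing broker-side.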

Tim

On Tue, Mar 23, 2021, 11:51 AM David Martin <dav...@qoritek.com> wrote:

> Hi Tim,
>
> That's an interesting point. This is what I had in mind (assuming that the
> STOMP interface doesn't support the consumerMaxRate or consumerWindowSize
> parameters as they are only documented for Core & JMS?):
>
> ** Current state **
>
> Fast Producer -> Outbound Queue -> Exclusive Slow Consumer using STOMP
> (which is sometimes offline and cannot handle a backlog)
>
> ** Proposal **
>
> Fast Producer -> Outbound Queue -> server-side JMS/Core Relay using
> producerWindowSize=(small number) -> Buffer Queue -> Slow STOMP Consumer
>
> ---
>
> So are you saying that, as the Relay component will be blocked from
> publishing if the buffer queue is backed up, this will cause problems
> upstream?
>
>
>
> Dave
>
>
>
> On Tue, 23 Mar 2021 at 11:47, Tim Bain <tb...@alumni.duke.edu> wrote:
>
> > As an aside, while we wait for the OP to tell us whether any of these
> > suggestions are relevant to his situation:
> >
> > In most cases, you want producers and consumers to be decoupled, so
> > that a slow consumer doesn't block its producers. Flow control is
> > typically used to protect the broker and to prevent misbehaving clients
> > on one destination from affecting clients on other destinations. I
> > would be very cautious about any architecture that proposed the
> > intentional linking of producer processes and consumer processes via a
> > flow control window, since it can broaden the impact of problems beyond
> > the process that is experiencing them.
> >
> > Tim
> >
> > On Sat, Mar 20, 2021, 10:31 AM David Martin <dav...@qoritek.com> wrote:
> >
> > > Hello,
> > >
> > > You could possibly try producer window-based flow control to stop
> > > messages backing up on the queue when consumers are offline (e.g.
> > > using an intermediate queue to store the backlog) -
> > >
> > > https://activemq.apache.org/components/artemis/documentation/1.0.0/flow-control.html
> > >
> > > Dave
> > >
> > >
> > > On Fri, Mar 19, 2021, 11:01 PM Christopher Pisz <christopherp...@gmail.com> wrote:
> > >
> > > > I am using Artemis with Websockets and STOMP
> > > >
> > > > A third party I am working with suspects that their software is
> > > > having trouble when there are many messages queued up and they
> > > > connect, then receive back-to-back messages until the queue drains.
> > > >
> > > > Is there a way to configure "Please pause x milliseconds between
> > > > sending messages out to subscribers" or "pause x milliseconds
> > > > between sending each message?"
> > > >
> > > > I know their network code is probably flawed, but this might
> > > > provide a stopgap, so I thought I'd ask.
> > > >
> > >
> >
>
