Roiocam commented on issue #71:
URL: 
https://github.com/apache/rocketmq-eventbridge/issues/71#issuecomment-1702207731

   > Do not post too much code in an issue; it is hard to read.
   > 
   > For example, in cases like ThreadPoolTaskExecutor, consider extending 
ThreadPoolExecutor instead of delegating to it. Some getter and setter methods 
have meaningless comments.
   > 
   > There are methods to achieve back pressure, depending on how the producer 
delivers records to the consumer:
   > 
   > When the producer communicates with the consumer through a queue, you can 
use a fixed queue size to achieve this. When the producer cannot push to the 
queue, it immediately goes into idle mode. There should be a common 
specification for this everywhere.
   > 
   > If the producer communicates with the consumer through some RPC mechanism, 
you can let the consumer poll from the producer's delivery buffer. This works 
well in a P2P mode, but once you have multiple consumers for one producer, you 
must fix the delivery buffer size and react to it, which means applying back 
pressure to the producer upstream.
   
   That covers only the internal implementation within a single program. For 
back pressure between multiple applications, I believe there are two ways:
   
   PUSH Mode: In this mode, the consumer explicitly returns an ACK message, and 
the producer by default delivers messages in an "ack-per-record" manner. To 
improve throughput, messages can be sent in batches, with the returned ACK 
carrying the UUID of the last record in the batch. When the producer receives 
this ACK, it considers all preceding messages consumed.
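   
   A minimal sketch of the cumulative batch-ACK bookkeeping on the producer 
side (class and method names are illustrative, not an actual 
rocketmq-eventbridge API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CumulativeAck {
    // In-flight record UUIDs, kept in send order.
    private final Deque<String> inFlight = new ArrayDeque<>();

    public void send(String recordId) {
        inFlight.addLast(recordId);
        // ... deliver the record (or batch) over the wire ...
    }

    // One ACK carrying the UUID of the batch's last record confirms every
    // earlier in-flight record as well (cumulative acknowledgment).
    public int onAck(String ackedId) {
        int confirmed = 0;
        while (!inFlight.isEmpty()) {
            String id = inFlight.pollFirst();
            confirmed++;
            if (id.equals(ackedId)) break;
        }
        return confirmed;
    }

    public static void main(String[] args) {
        CumulativeAck producer = new CumulativeAck();
        producer.send("uuid-1");
        producer.send("uuid-2");
        producer.send("uuid-3");
        // The ACK for the batch tail confirms all three records.
        System.out.println(producer.onAck("uuid-3")); // prints 3
    }
}
```

Back pressure then follows by capping the size of `inFlight`: when it reaches 
the limit, the producer stops sending until an ACK shrinks it.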
   
   POLL Mode: In this mode, the consumer actively pulls data from the producer. 
It's evident that the traffic will never exceed the processing limit of the 
consumer.
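   
   In POLL mode the consumer can bound its own intake per round, e.g. by 
draining at most a fixed batch size from the producer's delivery buffer. An 
illustrative sketch, assuming the buffer is exposed as a `BlockingQueue`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PollModeConsumer {
    public static void main(String[] args) {
        // Producer-side delivery buffer (stand-in for an RPC-backed source).
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(100);
        buffer.offer("e1");
        buffer.offer("e2");
        buffer.offer("e3");

        // The consumer pulls at most maxBatch records per round, so inflow
        // can never exceed what the consumer chooses to take.
        int maxBatch = 2;
        List<String> batch = new ArrayList<>(maxBatch);
        buffer.drainTo(batch, maxBatch);
        System.out.println(batch); // prints [e1, e2]
    }
}
```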


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
