equanz commented on code in PR #23352:
URL: https://github.com/apache/pulsar/pull/23352#discussion_r1796751741


##########
pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentStickyKeyDispatcherMultipleConsumers.java:
##########
@@ -131,60 +131,74 @@ public synchronized CompletableFuture<Void> addConsumer(Consumer consumer) {
             consumer.disconnect();
             return CompletableFuture.completedFuture(null);
         }
-        return super.addConsumer(consumer).thenCompose(__ ->
-                selector.addConsumer(consumer).handle((result, ex) -> {
-                    if (ex != null) {
-                        synchronized (PersistentStickyKeyDispatcherMultipleConsumers.this) {
-                            consumerSet.removeAll(consumer);
-                            consumerList.remove(consumer);
-                        }
-                        throw FutureUtil.wrapToCompletionException(ex);
+        return super.addConsumer(consumer).thenCompose(__ -> selector.addConsumer(consumer))
+                .thenAccept(impactedConsumers -> {
+            // TODO: Add some way to prevent changes in between the time the consumer is added and the
+            // time the draining hashes are applied. It might be fine for ConsistentHashingStickyKeyConsumerSelector
+            // since it's not really asynchronous, although it returns a CompletableFuture
+            if (drainingHashesRequired) {
+                consumer.setPendingAcksAddHandler(this::handleAddingPendingAck);
+                consumer.setPendingAcksRemoveHandler(new PendingAcksMap.PendingAcksRemoveHandler() {
+                    @Override
+                    public void handleRemoving(Consumer consumer, long ledgerId, long entryId, int stickyKeyHash,
+                                               boolean closing) {
+                        drainingHashesTracker.reduceRefCount(consumer, stickyKeyHash, closing);
                     }
-                    return result;
-                })
-        ).thenRun(() -> {
-            synchronized (PersistentStickyKeyDispatcherMultipleConsumers.this) {
-                if (recentlyJoinedConsumerTrackingRequired) {
-                    final Position lastSentPositionWhenJoining = updateIfNeededAndGetLastSentPosition();
-                    if (lastSentPositionWhenJoining != null) {
-                        consumer.setLastSentPositionWhenJoining(lastSentPositionWhenJoining);
-                        // If this was the 1st consumer, or if all the messages are already acked, then we
-                        // don't need to do anything special
-                        if (recentlyJoinedConsumers != null
-                                && consumerList.size() > 1
-                                && cursor.getNumberOfEntriesSinceFirstNotAckedMessage() > 1) {
-                            recentlyJoinedConsumers.put(consumer, lastSentPositionWhenJoining);
-                        }
+
+                    @Override
+                    public void startBatch() {
+                        drainingHashesTracker.startBatch();
+                    }
-                }
+
+                    @Override
+                    public void endBatch() {
+                        drainingHashesTracker.endBatch();
+                    }
+                });
+                registerDrainingHashes(consumer, impactedConsumers);
+            }
+        }).exceptionally(ex -> {
+            internalRemoveConsumer(consumer);
+            throw FutureUtil.wrapToCompletionException(ex);
+        });
+    }
+
+    private synchronized void registerDrainingHashes(Consumer skipConsumer,
+                                                     ImpactedConsumersResult impactedConsumers) {
+        impactedConsumers.processRemovedHashRanges((c, removedHashRanges) -> {
+            if (c != skipConsumer) {
Review Comment:
   What happens if a hash is moved between existing consumers? Is it handled?
   https://github.com/apache/pulsar/pull/23309#discussion_r1766382998
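   For illustration, the "hash moved between existing consumers" case can be modeled as a diff of two assignment snapshots. The class and method names below are hypothetical sketches only, not the actual ConsumerHashAssignmentsSnapshot / ImpactedConsumersResult API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch: diff two hash -> consumer assignment maps and report,
// per previous owner, which hashes it lost. This is roughly the information
// an "impacted consumers" result would carry after a selector change.
public class AssignmentDiffSketch {
    public static Map<String, Set<Integer>> removedHashes(
            Map<Integer, String> before, Map<Integer, String> after) {
        Map<String, Set<Integer>> removed = new HashMap<>();
        before.forEach((hash, owner) -> {
            // a hash is "removed" from its previous owner if it now maps elsewhere
            if (!owner.equals(after.get(hash))) {
                removed.computeIfAbsent(owner, o -> new TreeSet<>()).add(hash);
            }
        });
        return removed;
    }
}
```

   In this model, a hash moving from one existing consumer (c1) to another (c3) shows up the same way as a hash moving to a newly added consumer, which is why the question above matters.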
   
   memo (not yet tested):
   
   * If the before-consumer is processed first,
     * the before-consumer's pending acks are added first.
     * The after-consumer can't receive messages, per the blocking specification  // it's ok
   * If the after-consumer is processed first,
     * the after-consumer's pending acks are added first.
     * The after-consumer can receive messages, per the unblocking specification
       * even if the before-consumer still has some pending acks
     * Question: Can the after-consumer be added first?
       * => I noticed it was already fixed in a new commit.
         * before: https://github.com/apache/pulsar/blob/46209d8e016db22ea72ab67c931b2e85be4274cc/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/HashRanges.java#L46-L51
         * after: https://github.com/apache/pulsar/blob/3d0625ba64294fb0fe7dafc27c7a34883b4be51b/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ConsumerHashAssignmentsSnapshot.java#L120-L124
   
   According to https://github.com/apache/pulsar/issues/23421, it seems that multiple consumers (e.g. the before- and after-consumers) can't be added to the draining-hashes tracking for the same hash.
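
   To reason about the blocking behaviour in the memo above, here is a minimal, hypothetical model of the per-hash ref counting (the names are modeled loosely on drainingHashesTracker.reduceRefCount; this is not the actual DrainingHashesTracker implementation): a hash stays "draining", and dispatch to its new owner stays blocked, until every pending ack referencing that hash is removed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model (not Pulsar's DrainingHashesTracker): one ref count per
// sticky-key hash. The count is incremented for each pending ack the previous
// owner still holds for the hash, and decremented as those acks are removed
// (on ack or on consumer close). The new owner may receive messages for the
// hash only once it is fully drained (count reaches zero).
public class DrainingRefCountSketch {
    private final Map<Integer, Integer> refCounts = new HashMap<>();

    // called when a pending ack containing this hash is registered as draining
    public void addRefCount(int stickyKeyHash) {
        refCounts.merge(stickyKeyHash, 1, Integer::sum);
    }

    // called from the pending-acks remove handler; dropping to zero removes the entry
    public void reduceRefCount(int stickyKeyHash) {
        refCounts.computeIfPresent(stickyKeyHash, (h, c) -> c > 1 ? c - 1 : null);
    }

    // while true, dispatch of this hash to the new owner stays blocked
    public boolean isDraining(int stickyKeyHash) {
        return refCounts.containsKey(stickyKeyHash);
    }
}
```

   In this model the question above becomes: can addRefCount be called for the same hash on behalf of two different new owners? Per issue 23421, apparently not.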



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
