Hello,

I have a use case for the MergeContent processor where I have split the flow into two branches (one carrying the original FlowFile and one carrying a PDF; either branch may take longer than the other), and I want to rejoin those branches using the Defragment merge strategy. I use the FlowFile UUID from before the split as the correlation identifier to determine whether both branches have successfully completed. I noticed that as I increased the number of FlowFiles generated into the system, I got more merge failures: bins were forced to the failure relationship before they were able to fully defragment. I can increase the maximum number of bins, but this is just a workaround because it doesn't solve the underlying problem. Is there a design pattern for reliably merging diverted branches back together that holds up under load and doesn't require me to guess a magic number for the bin count?
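For context, the correlation I'm relying on is the standard `fragment.*` attribute contract that MergeContent's Defragment mode reads. A rough sketch of the UpdateAttribute configuration before and after the fork (attribute values here are illustrative, not my exact flow):

```
# Before the fork (UpdateAttribute): tag both future branches with a
# shared identifier and the expected fragment count.
fragment.identifier = ${uuid}
fragment.count      = 2

# On each branch (UpdateAttribute): give the fragment a unique index.
# Branch A (original FlowFile):
fragment.index = 0
# Branch B (PDF):
fragment.index = 1
```

MergeContent then bins FlowFiles by `fragment.identifier` and only merges a bin once it holds `fragment.count` fragments, which is why a bin evicted early under load ends up on the failure relationship.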
Thanks, Eric
