jackjlli commented on code in PR #8694:
URL: https://github.com/apache/pinot/pull/8694#discussion_r871976850


##########
pinot-core/src/main/java/org/apache/pinot/core/data/manager/BaseTableDataManager.java:
##########
@@ -72,6 +73,8 @@ public abstract class BaseTableDataManager implements TableDataManager {
   private static final Logger LOGGER = LoggerFactory.getLogger(BaseTableDataManager.class);
 
   protected final ConcurrentHashMap<String, SegmentDataManager> _segmentDataManagerMap = new ConcurrentHashMap<>();
+  // Semaphore to restrict the maximum number of parallel segment downloads for a table.
+  private Semaphore _segmentDownloadSemaphore;

Review Comment:
   Good question! Let me explain why I didn't reuse the existing semaphore in this PR:
   * A segment reload doesn't always have to download the segment. If the same semaphore were reused, the Pinot server could lose the ability to serve an actual download request while the semaphore is held for the reload of a local segment.
   * The default parallelism for segment refresh/reload is 1, meaning that even for a reload-all-segments request, at most 1 segment is reloaded at a time, which also differs from download requests.
   * For segment refresh/reload, the target segments are already shown in the external view, which means those segments are already serving queries. Download requests, on the other hand, can be triggered by simply adding a segment or by a table rebalance, during which the EV may not have converged yet. Thus, it's okay to allow higher parallelism for those cases (see the sketch after this list).


