westonpace commented on code in PR #14524:
URL: https://github.com/apache/arrow/pull/14524#discussion_r1018414137


##########
cpp/src/arrow/util/async_util.h:
##########
@@ -145,206 +188,216 @@ class ARROW_EXPORT AsyncTaskScheduler {
     /// acquired and the caller can proceed.  If a future is returned then the caller
     /// should wait for the future to complete first.  When the returned future completes
     /// the permits have NOT been acquired and the caller must call Acquire again
+    ///
+    /// \param amt the number of permits to acquire
     virtual std::optional<Future<>> TryAcquire(int amt) = 0;
     /// Release amt permits
     ///
     /// This will possibly complete waiting futures and should probably not be
     /// called while holding locks.
+    ///
+    /// \param amt the number of permits to release
     virtual void Release(int amt) = 0;
 
     /// The size of the largest task that can run
     ///
     /// Incoming tasks will have their cost latched to this value to ensure
-    /// they can still run (although they will generally be the only thing allowed to
+    /// they can still run (although they will be the only thing allowed to
     /// run at that time).
     virtual int Capacity() = 0;
+
+    /// Pause the throttle
+    ///
+    /// Any tasks that have been submitted already will continue.  However, no new tasks
+    /// will be run until the throttle is resumed.
+    virtual void Pause() = 0;
+    /// Resume the throttle
+    ///
+    /// Allows tasks to be submitted again.  If there is a max_concurrent_cost limit then
+    /// it will still apply.
+    virtual void Resume() = 0;
   };
-  /// Create a throttle
+
+  /// Pause the throttle
+  ///
+  /// Any tasks that have been submitted already will continue.  However, no new tasks
+  /// will be run until the throttle is resumed.
+  virtual void Pause() = 0;
+  /// Resume the throttle
   ///
-  /// This throttle is used to limit how many tasks can run at once.  The
-  /// user should keep the throttle alive for the lifetime of the scheduler.
-  /// The same throttle can be used in multiple schedulers.
-  static std::unique_ptr<Throttle> MakeThrottle(int max_concurrent_cost);
+  /// Allows tasks to be submitted again.  If there is a max_concurrent_cost limit then
+  /// it will still apply.
+  virtual void Resume() = 0;
 
-  /// Add a task to the scheduler
+  /// Create a throttled view of a scheduler
   ///
-  /// If the scheduler is in an aborted state this call will return false and the task
-  /// will never be run.  This is harmless and does not need to be guarded against.
+  /// Tasks added via this view will be subjected to the throttle and, if the tasks cannot
+  /// run immediately, will be placed into a queue.
   ///
-  /// If the scheduler is in an ended state then this call will cause an program abort.
-  /// This represents a logic error in the program and should be avoidable.
+  /// Using a throttled view after the underlying scheduler has finished is invalid.

Review Comment:
   I removed this sentence.  This is not a change in the API contracts.  For 
all schedulers it is safe to access the scheduler from within a task submitted 
to that scheduler (e.g. follow-up tasks).
   
   However, it is not safe to access a scheduler from a peer scheduler, though that concept doesn't have much meaning here and mostly adds confusion, so it probably doesn't need explanation. For example, it would not be safe to add a task to the task group for file A from a task in the task group for file B, because file A might have finished before file B.
   
   The various returned pointers should make this fairly obvious (e.g. to do 
the above you would have to call `.get` on the `unique_ptr` returned when 
creating the task group for file A).
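   
   To make the lifetime rule concrete, here is a minimal, self-contained sketch. `TaskGroup` below is a hypothetical stand-in, not the Arrow `AsyncTaskScheduler` API; it only models the "a finished group drops new tasks and returns false" behaviour quoted in the diff, so the real failure mode (a dangling pointer to an already-destroyed sub-scheduler) is strictly worse than what this toy shows.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

// Hypothetical stand-in for a per-file task group / sub-scheduler.
struct TaskGroup {
  std::vector<std::function<void()>> tasks;
  bool finished = false;

  // Accepts a task unless the group has already finished, in which case the
  // task is silently dropped and false is returned.
  bool AddTask(std::function<void()> task) {
    if (finished) return false;
    tasks.push_back(std::move(task));
    return true;
  }

  // Runs all tasks, including follow-up tasks appended while running.
  void RunAll() {
    for (std::size_t i = 0; i < tasks.size(); ++i) tasks[i]();
    finished = true;
  }
};

int main() {
  auto group_a = std::make_unique<TaskGroup>();
  auto group_b = std::make_unique<TaskGroup>();
  TaskGroup* raw_a = group_a.get();

  // Safe pattern: a task schedules a follow-up on its *own* group.  The group
  // cannot finish while one of its tasks is still running, so the follow-up is
  // accepted and executed.
  group_a->AddTask([raw_a] {
    raw_a->AddTask([] { std::cout << "follow-up task for file A\n"; });
  });
  group_a->RunAll();

  // Unsafe pattern: a task in group B reaching over to group A.  Here group A
  // has already finished, so the submission is dropped; with a real scheduler,
  // group A might not even be alive any more at this point.
  group_b->AddTask([raw_a] {
    bool accepted = raw_a->AddTask([] { std::cout << "cross-group task\n"; });
    std::cout << "cross-group submission accepted? " << accepted << "\n";  // prints 0
  });
  group_b->RunAll();
  return 0;
}
```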


