dlmarion opened a new issue, #4076:
URL: https://github.com/apache/accumulo/issues/4076

   In Elasticity we intend to support dynamic scaling of the TabletServer, ScanServer, and Compactor processes. When making scaling decisions based on metrics, you may determine that you need fewer processes, but you don't really know whether the current processes are idle. For example, you might decide that you don't need all of the Compactors that are running because the compaction queue length metric is zero. If you shut down some number of Compactors, some of them may be in the middle of a compaction; that work is lost and the compactions are re-queued.
   
   Additionally, some frameworks may not have a downscale mechanism that allows a process to terminate gracefully. For example, the Kubernetes pod termination [process](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) sends a SIGTERM to the pod, waits a configurable amount of time, then sends a SIGKILL.
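   
   As a rough illustration, the length of that SIGTERM-to-SIGKILL window is set per pod via `terminationGracePeriodSeconds` (default 30 seconds); the pod and image names below are placeholders, not part of this proposal:
   
   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: accumulo-compactor
   spec:
     # How long Kubernetes waits after SIGTERM before sending SIGKILL.
     terminationGracePeriodSeconds: 300
     containers:
       - name: compactor
         image: example/accumulo-compactor:latest
   ```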
   
   We could create two properties for the Compactor, ScanServer, and TabletServer processes to help with graceful downscaling: `idle.shutdown.enabled` and `idle.shutdown.period`. These processes could exit when:
   
     1. `idle.shutdown.enabled` is true (default: false)
     2. and at least `idle.shutdown.period` has elapsed since the Compactor completed its last compaction, or, in the case of the TabletServer and ScanServer, since the process last hosted a Tablet.
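   
   A minimal sketch of how such an idle check might look, e.g. in the Compactor; the class, fields, and method names here are hypothetical, not existing Accumulo API:
   
   ```java
   import java.time.Duration;
   import java.time.Instant;
   
   // Hypothetical monitor backing idle.shutdown.enabled / idle.shutdown.period.
   public class IdleShutdownMonitor {
     private final boolean idleShutdownEnabled;  // idle.shutdown.enabled (default: false)
     private final Duration idleShutdownPeriod;  // idle.shutdown.period
     private volatile Instant lastActivity = Instant.now();
   
     public IdleShutdownMonitor(boolean enabled, Duration period) {
       this.idleShutdownEnabled = enabled;
       this.idleShutdownPeriod = period;
     }
   
     // Called when work occurs: a compaction completes, or a Tablet is hosted.
     public void recordActivity() {
       lastActivity = Instant.now();
     }
   
     // True once the process has been idle longer than idle.shutdown.period.
     public boolean shouldExit() {
       return idleShutdownEnabled
           && Duration.between(lastActivity, Instant.now()).compareTo(idleShutdownPeriod) > 0;
     }
   }
   ```
   
   The process would poll `shouldExit()` periodically and, when it returns true, initiate its normal shutdown path.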
   
   
   In the case of the Kubernetes Horizontal Pod Autoscaler (HPA), scale-down can be [disabled](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-disable-scale-down). This would allow the HPA to scale up based on metrics (queue lengths, etc.), while the processes themselves scale down based on inactivity via the idle-shutdown properties.
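   
   As a sketch of that setup, scale-down is disabled with `behavior.scaleDown.selectPolicy: Disabled` (this assumes the `autoscaling/v2` API; the Deployment name and metric below are illustrative):
   
   ```yaml
   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: compactor-hpa
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: accumulo-compactor   # illustrative target
     minReplicas: 1
     maxReplicas: 20
     behavior:
       scaleDown:
         selectPolicy: Disabled   # the HPA never removes replicas
     metrics:
       - type: External
         external:
           metric:
             name: accumulo_compaction_queue_length   # hypothetical metric
           target:
             type: AverageValue
             averageValue: "10"
   ```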

