dill21yu opened a new issue, #17670:
URL: https://github.com/apache/dolphinscheduler/issues/17670

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and 
found no similar feature requirement.
   
   
   ### Description
   
   Current Behavior:
   DolphinScheduler's current disk monitoring mechanism (`max-disk-usage-percentage-thresholds`) only monitors the usage of the entire disk partition; it does not monitor the specific `data.basedir.path` directory.

   Problem Scenario:
   The `data.basedir.path` directory (configurable in `common.properties`, e.g., `/tmp/dolphinscheduler`) stores task scripts and temporary files. When it resides on a separate disk partition, the current monitoring cannot detect its usage. If that partition fills up, tasks fail to write their command files, causing execution failures.
   
   Optimization Points:
   - Add independent disk usage monitoring for the `data.basedir.path` directory
   - Provide configuration options to set a disk usage threshold for this directory
   - Trigger overload protection when the threshold is exceeded, refusing to accept new tasks
   - Expose the directory's disk usage through Prometheus metrics for external monitoring and alerting
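
   The check itself could be sketched as follows. This is a minimal, hypothetical helper (the class and method names are not DolphinScheduler's actual API): it measures the usage ratio of the partition backing `data.basedir.path` via `java.io.File` and compares it against the proposed threshold.

   ```java
   import java.io.File;

   // Sketch of the proposed check, assuming a configured data.basedir.path
   // and a threshold such as 0.8. Names here are illustrative only.
   public class DataBaseDirUsageCheck {

       /** Used-space ratio (0.0..1.0) of the partition containing the directory. */
       static double usageRatio(File dir) {
           long total = dir.getTotalSpace();
           if (total == 0) {
               return 0.0; // path does not exist or size is unknown
           }
           long usable = dir.getUsableSpace();
           return (double) (total - usable) / total;
       }

       /** True when usage exceeds the configured threshold; the worker would then refuse new tasks. */
       static boolean isOverloaded(File dir, double threshold) {
           return usageRatio(dir) > threshold;
       }

       public static void main(String[] args) {
           File baseDir = new File("/tmp/dolphinscheduler"); // data.basedir.path
           System.out.printf("usage=%.2f overloaded=%b%n",
                   usageRatio(baseDir), isOverloaded(baseDir, 0.8));
       }
   }
   ```

   Because `File.getTotalSpace()`/`getUsableSpace()` report on the partition that backs the path, this naturally covers the case where `data.basedir.path` is on a different mount than the one the existing overall-disk check watches.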
   
   Proposed configuration:

   ```yaml
   worker:
     server-load-protection:
       enabled: true
       max-disk-usage-percentage-thresholds: 0.8  # overall disk partition
       max-data-basedir-disk-usage-percentage-thresholds: 0.8  # data.basedir.path directory
   ```
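
   For the external-alerting side, a Prometheus rule against the new metric might look like the fragment below. The metric name `ds_worker_data_basedir_disk_usage_ratio` is a placeholder assumption; the actual name would be decided in the PR.

   ```yaml
   # Hypothetical alert rule, assuming the new metric is exported as
   # ds_worker_data_basedir_disk_usage_ratio on the worker's metrics endpoint.
   groups:
     - name: dolphinscheduler-worker
       rules:
         - alert: DataBaseDirDiskNearlyFull
           expr: ds_worker_data_basedir_disk_usage_ratio > 0.8
           for: 5m
           labels:
             severity: warning
           annotations:
             summary: "data.basedir.path partition above 80% usage on {{ $labels.instance }}"
   ```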
   
   ### Are you willing to submit a PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 
[email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
