dill21yu commented on PR #17677:
URL: https://github.com/apache/dolphinscheduler/pull/17677#issuecomment-3600060004
> Adding `max-data-basedir-disk-usage-percentage-thresholds` will conflict with the current `max-disk-usage-percentage-thresholds`, which will make it more difficult for users to understand.
>
> I think we should configure multiple directories in one of the following two ways:
>
> 1.
>
> ```
> max-disk-usage-percentage-thresholds:
>   /data1: 0.8
>   /data2: 0.9
> ```
>
> 2.
>
> ```
> max-disk-usage-percentage-thresholds:
>   path: /data1,/data2
>   percentage: 0.9
> ```
>
> This needs to be discussed. cc @ruanwenjun @zhongjiajie @Gallardot
Thank you for your suggestion! I understand your concerns about potential configuration conflicts. To maintain backward compatibility and spare users from having to manually specify the Worker’s deployment directory, would the following approach work?
```
server-load-protection:
  max-disk-usage-percentage-thresholds: 0.8  # Continue monitoring the Worker's deployment directory (backward compatible)
  # Optional: monitor additional directories
  additional-disk-paths:
    /data01: 0.9
    /var/log: 0.85
```
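To make the intended semantics concrete, here is a minimal sketch in plain Java NIO (class and method names such as `DiskUsageGuard` and `resolveThresholds` are assumptions for illustration, not the actual DolphinScheduler implementation) of how the legacy threshold could keep applying to the deployment directory while any extra paths are merged in:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: DiskUsageGuard, resolveThresholds, and additionalDiskPaths
// are illustrative names, not the actual DolphinScheduler classes or config keys.
public class DiskUsageGuard {

    // Build the effective path -> threshold map: the legacy single threshold is
    // applied to the Worker's deployment directory, then the optional
    // additional-disk-paths entries are merged on top.
    static Map<Path, Double> resolveThresholds(Path workerDeployDir,
                                               double defaultThreshold,
                                               Map<String, Double> additionalDiskPaths) {
        Map<Path, Double> thresholds = new LinkedHashMap<>();
        thresholds.put(workerDeployDir, defaultThreshold);
        additionalDiskPaths.forEach((path, threshold) ->
                thresholds.put(Paths.get(path), threshold));
        return thresholds;
    }

    // Return true if any monitored path exceeds its disk-usage threshold.
    static boolean isOverloaded(Map<Path, Double> thresholds) throws IOException {
        for (Map.Entry<Path, Double> entry : thresholds.entrySet()) {
            FileStore store = Files.getFileStore(entry.getKey());
            double usage = 1.0 - (double) store.getUsableSpace() / store.getTotalSpace();
            if (usage > entry.getValue()) {
                return true;
            }
        }
        return false;
    }
}
```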
**Benefits of This Approach**

- **Full backward compatibility**: Existing configurations such as `max-disk-usage-percentage-thresholds: 0.8` keep working as before, automatically applying to the Worker’s deployment directory.
- **User-friendly**: Users don’t need to know or configure the exact deployment path; the system resolves it automatically.
- **No frontend changes required**: The UI can keep displaying disk usage for the Worker’s deployment directory without modification, which avoids overcomplicating the UI.
- **Extensible**: When needed, users can optionally define additional paths to monitor via `additional-disk-paths` (see the usage example below).
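For example, with the configuration above, the guard sketched earlier could be wired up roughly as follows (the deployment path is an illustrative assumption, not a real default):

```java
// Illustrative values only; the deployment path is an assumption.
Path deployDir = Paths.get("/opt/dolphinscheduler/worker-server");
Map<String, Double> additional = Map.of("/data01", 0.9, "/var/log", 0.85);

Map<Path, Double> thresholds = DiskUsageGuard.resolveThresholds(deployDir, 0.8, additional);
boolean overloaded = DiskUsageGuard.isOverloaded(thresholds);  // may throw IOException
```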
What do you think of this proposal? @SbloodyS @ruanwenjun @zhongjiajie
@Gallardot