Consider a long-running task that takes days or weeks, which is the norm in simulation and scientific computing in general. Suppose the system emits a warning after three days that it will delete my files in another three days. My job won't be finished by then, and I'll lose three days of work unless I happen to catch that warning.

Now consider that these tasks run on (dark) servers, where users' daemons log in to run the tasks but the users themselves do not. How can the user even see the warning? What can they do about it? The same goes for long-running daemons such as mail servers, CI runners, and the like.

One may argue that we can change the configuration, which is true.

On the other hand, if the configuration needs to be changed 99% of the time, why make the default a worse one in the first place?
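
For completeness, the override in question is a tmpfiles.d drop-in. As a rough
sketch (file names, paths and ages here are just examples; check tmpfiles.d(5)
and the tmp.conf your distribution ships), something like this under
/etc/tmpfiles.d/ keeps a job's scratch data out of the age-based cleanup:

    # /etc/tmpfiles.d/keep-sim.conf (name and job path are examples)
    # Exclude a long job's scratch directory and its contents from cleaning
    x /var/tmp/my-long-job

    # Or copy the vendor tmp.conf to /etc/tmpfiles.d/tmp.conf and disable
    # the age on /var/tmp entirely ("-" means no automatic clean-up)
    q /var/tmp 1777 root root -

But that is exactly the kind of per-site tweak I'd rather not have to make on
every machine.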

On 7.05.2024 3:59 PM, Alexandru Mihail wrote:
Maybe putting the cleanup task for /var/tmp on a longer timer and warning users 
ahead of time of impending deletion (maybe 3 days before, then 2 days, etc.) 
would help keep unsuspecting users' files from being deleted. A log entry could 
also be emitted. I could see a gentle warning on ssh login (minimal, one or two 
lines) and a desktop notification (for desktop-only users who never see the 
terminal) being helpful. A smarter implementation could perhaps warn only if 
the dirs/files that are about to be deleted are not systemd-generated random 
items. This does not fix issues with applications depending on stuff being 
there long term; then again, nothing's perfect in software.
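
To make the quoted proposal a bit more concrete, here is a rough, hypothetical
sketch of the kind of pre-deletion scan it describes; this is not anything
systemd actually ships. It walks /var/tmp and flags entries that are within a
warning window of the cleanup age; the output could then go to the journal, a
motd snippet shown at ssh login, or a desktop notification. The 30-day age and
3-day window are assumptions taken from the defaults discussed in this thread,
and real cleanup also considers atime and ctime, not just mtime.

    #!/usr/bin/env python3
    # Hypothetical "warn before tmpfiles cleanup" scan (illustrative only).
    import os
    import time

    CLEANUP_AGE = 30 * 86400   # assumed /var/tmp age from the vendor tmp.conf
    WARN_WINDOW = 3 * 86400    # start warning this long before removal
    now = time.time()

    for root, dirs, files in os.walk("/var/tmp"):
        for name in files:
            path = os.path.join(root, name)
            try:
                # simplification: systemd-tmpfiles also looks at atime/ctime
                age = now - os.lstat(path).st_mtime
            except OSError:
                continue
            if CLEANUP_AGE - WARN_WINDOW <= age < CLEANUP_AGE:
                days_left = (CLEANUP_AGE - age) / 86400
                print(f"{path}: due for cleanup in about {days_left:.1f} day(s)")

Deciding which of those entries are "systemd-generated random items" versus
user data would still need some heuristic on top of a scan like this.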
