Package: fcheck
Version: 2.7.59-19
Severity: normal
Tags: d-i newcomer

Dear Maintainer,

*** Reporter, please consider answering these questions, where appropriate ***

   * What led up to the situation?
        A: Just leaving the system up and running. On my system it
generally starts to impact whole-system performance within 24 hours,
because the fcheck script is launched every 2 hours from cron. As
'fcheck' consumes quite some resources, every additional instance makes
the performance worse (and makes it less likely for any of the scheduled
processes to ever finish), so the problem amplifies itself. The effects
become severely noticeable on the graphical desktop as well: moving the
mouse pointer shows sluggish behaviour, to the point that the system
appears to have frozen (when in fact it has not; it is just unimaginably
slow). Additional cron jobs from other packages then make things worse
still. I was able to regain control and restore performance only by
ssh'ing into the system and killing all CPU hogs and duplicate processes
(the duplicate fcheck jobs also caused a few other duplicate cron jobs;
those could also benefit from a check that they aren't already running,
but 'fcheck' was really the root cause).

(NOTE: as described above, fcheck is not the only command or program
that is scheduled to run over-frequently from cron; I also caught a few
others with similar behaviour, i.e. they do not check whether their
previous instance has finished running. Unfortunately I can't remember
which jobs those were.)

   * What exactly did you do (or not do) that was effective (or
     ineffective)?
        A: For the moment, I have changed the job to run once a day
instead of once every 2 hours.
Since the script is written in Perl, I intend to change it and build in
a simple check using pid- or lock-files under /var/run, with a paranoia
check that these aren't stale, as well as a check of the process list in
the event that the process is running but has not written a lock-file or
pid-file. I am not sure whether I should take into account that
concurrent fcheck processes may or should be possible, when each is
checking a different set of directory trees; but at first sight I can
only find a single global configuration, so if concurrent sessions on
different trees may occur, that probably pertains to a user-customized
instance.
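
The guard described above could be sketched as follows (shown in shell
rather than Perl, purely for illustration; the pid-file path and the
function name are assumptions, not part of the actual fcheck script):

```shell
#!/bin/sh
# Sketch of the intended guard. The real script would keep its
# pid-file under /var/run; /tmp is used here only so the sketch
# runs unprivileged.
PIDFILE="${PIDFILE:-/tmp/fcheck-cron.pid}"

# Succeeds (returns 0) if a previous instance is still running.
previous_still_running() {
    [ -f "$PIDFILE" ] || return 1
    oldpid=$(cat "$PIDFILE") || return 1
    # Paranoia check for staleness: kill -0 probes whether the
    # PID exists without sending an actual signal.
    if kill -0 "$oldpid" 2>/dev/null; then
        return 0
    fi
    rm -f "$PIDFILE"    # stale pid-file: the process is gone
    return 1
}

if previous_still_running; then
    echo "previous run still active, skipping" >&2
    exit 0
fi

echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
# ... the actual fcheck invocation would go here ...
```

The additional process-list check mentioned above (for an instance that
never wrote its pid-file) would have to go on top of this; kill -0 alone
can also be fooled by a recycled PID belonging to some other process.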


   * What was the outcome of this action?
        A: For the moment, after killing off the multiple instances of
'fcheck', it is no longer hogging the CPUs and the system is responsive
again. But it hasn't been 24 hours yet, so I can't be sure that the next
run will complete within 24 hours (I know that a single run takes well
over 2 hours). I do not know whether this is perhaps an indication of a
different problem, and whether the 'fcheck' process really shouldn't
take that long.

   * What outcome did you expect instead?
        A: For now, as only little time has passed, the outcome does not
yet differ from what I expect. If I find that the next 'fcheck' run
takes well over 24 hours (or even 12 hours), that would be a serious
concern.

GENERAL NOTES & SUGGESTIONS, not pertaining to 'fcheck' alone but to the
Debian installation/distribution as a whole:
===================================================================================================================
I am also wondering whether the control (checks & balances) on whether
the previous job has finished cannot be built into cron itself - the
scheduler that launches the jobs. Perhaps rather than launching or
forking off the job in the background and never looking back at its
children, cron could be changed to only launch a new scheduled process
once it is certain that the previous one has terminated?
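
For what it's worth, something close to this behaviour can already be
obtained today without modifying cron, by wrapping the scheduled command
in util-linux's flock(1). A sketch (the cron line and lock path are
illustrative only, not the actual /etc/cron.d/fcheck contents):

```shell
#!/bin/sh
# A hypothetical cron.d entry using flock(1); the real fcheck
# invocation is elided:
#
#   30 */2 * * * root flock -n /run/lock/fcheck.lock <fcheck command>
#
# Demonstration of the behaviour: while one process holds the lock,
# a second non-blocking (-n) attempt fails immediately instead of
# piling up another instance.
LOCK=/tmp/fcheck-demo.lock
flock -n "$LOCK" sh -c "
    echo 'first run: got the lock'
    flock -n $LOCK -c true || echo 'second run: lock busy, skipped'
"
```

This also sidesteps the stale-pid-file problem entirely, since the
kernel releases the lock whenever the holding process exits, for any
reason.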

On a general note to the maintainers who consolidate all packages into
'a full distribution': I believe some thought should be given to
packages that require/install cron entries. It should be questioned
whether the commands or jobs really need to run as often as the package
maintainers suggest (or populate the at- and cron tables with), and the
package maintainers should justify with valid arguments how often they
find their (package) jobs should be scheduled, taking into account how
long they may run on less powerful systems. Also, for a standard
installation I find there are really *a lot* of cron entries (compared
to some years ago). Are they really all necessary, and do they need to
run so frequently? Especially for less knowledgeable users, this is a
mysterious portion of the system configuration, where it is hard to
ascertain from the crontab itself alone what the impact is of disabling
jobs or decreasing their frequency. Kindly take this into consideration.





-- System Information:
Debian Release: buster/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 4.13.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_US:en (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages fcheck depends on:
ii  bsd-mailx [mailx]  8.1.2-0.20160123cvs-4
ii  file               1:5.32-1
ii  mailutils [mailx]  1:3.2-1
ii  nocache            1.0-1

fcheck recommends no packages.

fcheck suggests no packages.

-- Configuration Files:
/etc/cron.d/fcheck changed [not included]
/etc/logcheck/ignore.d.server/fcheck [Errno 13] Permission denied: 
'/etc/logcheck/ignore.d.server/fcheck'

-- no debconf information
