On Wed, 30 Sep 2020, Dave Sill via clamav-users wrote:

"G.W. Haywood via clamav-users" <clamav-users@lists.clamav.net> wrote:

In the second scan, how did clamscan manage to do what it claims to
have done in the time that it did it?

OK, you could have just said that the cache is internal to each invocation
of clamscan, but that helps.

For further enlightenment, on one of your systems try doing something
similar to what I did above but using 'clamdscan'.

The problem with clamdscan is that it runs into permission problems, since
it's not running as root.

Consider using a
central clamd server for all your scanning needs.

How would that work? Clamd only scans files on the system on which it's
running.

No. clamd scans data passed to it by clamdscan, usually over a socket or
pipe. As a special case,
    clamdscan --fdpass filename
passes an open file handle (though the man page suggests that is not
technically accurate) to clamd, which means clamd can scan any file that
clamdscan can read, avoiding the running-as-root problem. --fdpass only
works over local (Unix) sockets, not network (TCP) sockets.
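
For example, assuming clamd listens on a local socket (the LocalSocket path
below is a common Debian/Ubuntu default, and /root/example.bin is just a
hypothetical root-only file):

    # assumes clamd.conf has e.g.:  LocalSocket /run/clamav/clamd.ctl
    # plain "clamdscan /root/example.bin" fails even as root, because clamd
    # itself (running as the clamav user) cannot open the path
    sudo clamdscan --fdpass /root/example.bin   # clamd reads via the passed descriptor
    sudo clamdscan --stream /root/example.bin   # clamdscan reads the file and streams it to clamd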


I doubt anyone is doing that.  I'm sure it isn't necessary, as it's
already taken care of by both clamscan and clamd.  Perhaps if you can
be a bit more forthcoming about your use case(s) we may be able to
help reduce scan times.  One of the best ways of doing that is not to
scan so much junk so often.

We've got about 3000 Linux systems that we'd like to periodically scan,
primarily to ensure that they're not being used to redistribute
Windows malware. We'd like to scan all of the local file systems for
completeness. Any attempt to skip "junk" will potentially skip malware,
and hand crafting scans for each system is not an option.

Skipping multiple copies of the same file won't really help because
the duplication is across systems, and because every file will be
rescanned every time clamscan is run.

We could do a full scan on the first run and then weekly scans of files
modified in the past week. That's kludgy but may be the best we can do.

That does mean that any malware which is missed in the first run
will not be detected in subsequent runs.
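
If you do go down that road, something along these lines (untested; run from
root's crontab so clamdscan can read everything) would feed the past week's
modified files to a local clamd:

    # files modified in the last 7 days, staying on one filesystem;
    # repeat per local mount point (or drop -xdev and exclude /proc, /sys, ...)
    find / -xdev -type f -mtime -7 -print0 \
        | xargs -0 -r clamdscan --fdpass --infected --no-summary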

3000 machines per week gives you about 3.36 minutes per machine
(7 x 24 x 60 = 10080 minutes, divided by 3000) to send all of its local
data to the scanning machine.
Instead I would run a local mirror of the signature database,
use freshclam on each machine to keep its database in sync with your
mirror, and then run clamd plus a clamdscan cron script on each machine
(sketched below).
I would also look at on-access scanning.
Scanning files as they are used might mean more, or less, work
than scanning every file every week.
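
Roughly, on each machine, something like this (the mirror hostname and the
schedule are made up; PrivateMirror needs a web server on your side carrying
the .cvd/.cld files):

    # /etc/clamav/freshclam.conf - point the updater at your own mirror
    PrivateMirror clamav-mirror.example.internal

    # root crontab - weekly full scan through clamd, Sundays at 03:00
    0 3 * * 0  find / -xdev -type f -print0 | xargs -0 -r clamdscan --fdpass --infected --no-summary

On-access scanning is configured separately in clamd.conf (the
OnAccessIncludePath family of options, plus the clamonacc client in recent
releases), if you decide to go that way.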

--
Andrew C. Aitchison                                     Kendal, UK
                        and...@aitchison.me.uk
