> From: clamav-users <clamav-users-boun...@lists.clamav.net> on behalf of "Joel 
> Esler (jesler) via clamav-users" <clamav-users@lists.clamav.net>
> Reply-To: ClamAV users ML <clamav-users@lists.clamav.net>
> Date: Monday, March 8, 2021 at 9:47 AM
> To: ClamAV users ML <clamav-users@lists.clamav.net>
> Cc: "Joel Esler (jesler)" <jes...@cisco.com>
> Subject: [EXTERNAL] Re: [clamav-users] Not able to use curl to download the 
> cvd files successfully

> No!  Don’t “bypass” it.
>
> And “protecting” does not need to be in quotes, it’s quite literally what we 
> are doing.  And people doing the above are the problem.
>
> As I said in countless other emails, either use Freshclam or 
> https://github.com/micahsnyder/cvdupdate.  The more people that do the above 
> will force us to take drastic measures.

Here's why I bypassed it.

I had a very old machine that I needed to scan.  I had to boot it from a 
recovery CD running a very basic version of Linux.  I compiled a statically 
linked version of ClamAV on another machine and transferred it to the problem 
machine, but I still had to copy over two additional libraries (libpcre2 and 
libltdl, I believe) before clamscan would run.  Getting freshclam working was a 
pain because it required all sorts of extra libraries, so rather than fetch and 
transfer them one at a time, I decided to download main.cvd, daily.cvd, and 
bytecode.cvd myself.  There was no Python on the machine, so I couldn't use the 
cvdupdate script.  I figured out that changing the User-Agent string would let 
me download the files with wget, and that's what I did.

If you want to protect your site, I completely understand, but do so by 
limiting or rate-limiting the number of transfers from individual IP addresses 
to the database sites.  There is nothing stopping people from abusively 
downloading full copies of these files with a real browser and some sort of 
automated download plugin, especially when you provide links to the files on 
your own download page.  Blocking valid transfer applications like wget from 
downloading legitimately, just because they don't send a browser's User-Agent 
string, is a dumb form of protection.
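For what it's worth, the kind of per-IP rate limiting I have in mind could be sketched roughly like this in nginx.  This is purely illustrative: I have no idea what software the CDN actually runs, and the zone name, rates, server name, and paths here are all made up.

```nginx
# Hypothetical sketch: throttle .cvd downloads per client IP.
# Track clients by address in a 10 MB shared zone, allowing
# roughly one request per minute with a small burst allowance.
limit_req_zone $binary_remote_addr zone=cvd_limit:10m rate=1r/m;

server {
    listen 80;
    server_name database.example.net;   # stand-in for the real mirror name

    location ~ \.cvd$ {
        # Excess requests beyond the burst get a 503, regardless of
        # what User-Agent string the client happens to send.
        limit_req zone=cvd_limit burst=2 nodelay;
        root /srv/clamav;
    }
}
```

Something along these lines would throttle abusive downloaders without caring whether the client is a browser, wget, curl, or freshclam itself.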

As well, if you don't want people using stuff like wget or curl to download 
these files, why do you specifically tell them to do so in your own 
Troubleshooting FAQ?  A quote from the page 
https://www.clamav.net/documents/troubleshooting-faq: "Try to download 
daily.cvd with curl, wget, or lynx from the same machine that is running 
freshclam."

I am not being stupid, as G.W. Haywood claimed; I was just trying to solve a 
problem I had, and one that other legitimate, responsible people might have in 
the future.



Todd A. Aiken
Systems Analyst & Administrator
ITS Department
BISHOP'S UNIVERSITY
2600 College Street
Sherbrooke, Quebec
CANADA   J1M 1Z7

--------

"What's going on around here?" - RS

Having a technology issue?
Visit http://octopus.ubishops.ca to place a ticket directly into our ITS work 
order system.  This is the best way to get your requests to ITS and provide 
more detailed information for our analysts and technicians.
