You should abandon the notion of first-time-perfect with these kinds of things. There is a false sense of urgency here that imposes a workload on a team providing a free product and service. The tools for dealing with moment-zero malware already exist and are in the operator's hands. The real problem is discovery and validation, and that is why moment-zero solutions will never be possible.

There is a finite time required to receive a malware sample, determine that it is malware, determine what it applies to, and create a signature that reasonably avoids false positives. I'm convinced that interval exceeds the delay due to sync problems by such a margin that the first interval deserves as much focus as can be committed, while the distribution issues are handled at a lower priority.

There are other probabilities to consider - for example, the probability that a new malware is sufficiently in the wild to pose a threat to a significant number of recipients, which can be very low. Those signatures can be queued for a later release, cutting down on the number of low-value updates. And somebody has to decide what counts as a significant number.

Evidence of self-replication shows up in the rate of increase of infestations, and that is data that can be used in setting priorities. How do we collect that? How do we collect any metrics? So far it is largely buzz generated by responders, and largely anecdotal.
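
For what it's worth, the arithmetic side of that is trivial once you
have counts; collecting the counts is the hard part. A toy sketch in
Python (the numbers and the 1.5 threshold below are invented, and the
telemetry to feed it is exactly what we don't have):

def growth_factor(counts):
    """Average day-over-day growth factor of an infection-count series."""
    ratios = [b / a for a, b in zip(counts, counts[1:]) if a > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

daily_counts = [3, 5, 11, 24, 50]   # invented numbers, not real telemetry
factor = growth_factor(daily_counts)

# A sustained factor well above 1.0 is the "rate of increase" signal
# that would argue for bumping a signature's release priority.
if factor > 1.5:
    print("prioritize release: growth factor %.2f" % factor)
else:
    print("queue for normal release: growth factor %.2f" % factor)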

To be honest, many problems would be solved if all outbound mail were scanned in real time.
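
That part isn't even hard to wire up if clamd is already running on
the mail host; a content filter or milter only has to do something
like this per message (a sketch only - the socket path is an
assumption, use whatever LocalSocket is set to in your clamd.conf):

# Scan one outbound message through a local clamd via INSTREAM.
import socket
import struct
import sys

CLAMD_SOCKET = "/var/run/clamav/clamd.ctl"   # assumption; see clamd.conf

def scan_bytes(data):
    """Return clamd's verdict, e.g. 'stream: OK' or 'stream: ... FOUND'."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(CLAMD_SOCKET)
        s.sendall(b"zINSTREAM\0")
        # Length-prefixed chunks, terminated by a zero-length chunk.
        for i in range(0, len(data), 8192):
            chunk = data[i:i + 8192]
            s.sendall(struct.pack("!L", len(chunk)) + chunk)
        s.sendall(struct.pack("!L", 0))
        return s.recv(4096).decode().strip("\0\n ")

if __name__ == "__main__":
    verdict = scan_bytes(sys.stdin.buffer.read())
    print(verdict)
    sys.exit(1 if verdict.endswith("FOUND") else 0)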

dp

On 10/20/18 8:10 AM, Paul Kosinski wrote:
Yes, file synchronization is difficult. But we *started out* using the
provided (i.e., standard) freshclam tool to update our daily.cvd (etc.).
I only built our current non-standard tool (reading the file header)
when the Cloudflare mirrors started serving out-of-date file versions,
which caused freshclam to fail and blacklist the mirror (and
eventually resulted in all mirrors being blacklisted).
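
For the record, the "non-standard tool" does nothing exotic; the gist
is just to read the version field out of the CVD header instead of
trusting DNS. Simplified to its core (error handling omitted, and the
URL is only illustrative), it amounts to:

# Fetch the start of daily.cvd and pull the version out of the CVD
# header, which is a colon-separated line with the version in the
# third field.
import urllib.request

def remote_daily_version(url="http://database.clamav.net/daily.cvd"):
    req = urllib.request.Request(url, headers={"Range": "bytes=0-511"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        header = resp.read(512).decode("ascii", errors="replace")
    fields = header.split(":")
    if not header.startswith("ClamAV-VDB") or len(fields) < 3:
        raise ValueError("not a CVD header: %r" % header[:40])
    return int(fields[2])

print(remote_daily_version())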

This says to me that the old, "standard", DNS TXT approach built in to
freshclam doesn't play well with Cloudflare (or similar mirrors?).



On Sat, 20 Oct 2018 06:57:55 -0700
Dennis Peterson <denni...@inetnw.com> wrote:

Caching file systems do validate the requested file against a master
file to see if there has been a change. De-dupe caches do the same.
It isn't instantaneous, but they also don't have to wait for the
cache to refresh, since they can deliver a pass-through request at
the same time they're updating the cache. This is more expensive than
scheduled sync methods, but those necessarily have a delay. These
systems should reject requests for files they don't have, but that is
difficult if the updated file has the same name as the one it
replaces. I know it was always a big deal for the dot-com I worked
for to update Akamai because of sync problems around the world.
Atomic synchronized file updates are pretty much impossible when you
have a million page requests per minute.
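
To illustrate what I mean by validate-and-pass-through, here's a toy
sketch (illustrative only - the origin URL is a stand-in, and this is
certainly not how Cloudflare implements it):

# Read-through cache: revalidate the cached copy against the origin
# ("master") on every request and pass the current bytes to the
# client while the cache entry is refreshed.
import urllib.error
import urllib.request

ORIGIN = "http://database.clamav.net"    # stand-in for the master copy
cache = {}                               # path -> (etag, body)

def fetch(path):
    etag, body = cache.get(path, (None, None))
    req = urllib.request.Request(ORIGIN + path)
    if etag:
        req.add_header("If-None-Match", etag)  # validate against master
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read()
            cache[path] = (resp.headers.get("ETag"), body)
    except urllib.error.HTTPError as err:
        if err.code != 304:                # 304: cached copy is current
            raise
    return body                            # caller always gets current bytes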

I agree with Joel about using non-standard tools to request
signatures: people who do so should have no expectation of
consistently high reliability, and their support requests should go
in the bit bucket. The risk associated with self-service falls on the
operator, not the vendor.

dp

On 10/19/18 2:19 PM, Paul Kosinski wrote:
I'm glad modern multi-core / multi-threaded CPUs don't operate this
way.

Imagine if, when your code on CPU1 tried to access memory location
M, your code got what CPU1 happened to have in its cache, instead
of what CPU2 stored into M a few microseconds ago. Fortunately,
with real CPUs, CPU2 invalidates the other CPUs' caches, and CPU1
takes the extra time to fetch the new and correct data from memory.

Thus, what Cloudflare *should* have (if you can't explicitly upload
a file) is a mechanism to tell it that a file is out of date. This
mechanism could operate very quickly. Then, what Cloudflare would
do is either to stall the HTTP response -- I doubt it would have to
stall for long -- or reply with the appropriate HTTP status code
warning the requester that something is amiss. (Codes 503, 504 or
409 might be applicable.)
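
In other words, something as simple as this on the edge would be
enough (a toy illustration of the idea, not a claim about what
Cloudflare's software actually offers; the two version helpers are
hypothetical):

# If the cached database is older than the version announced via DNS,
# answer 503 with Retry-After instead of serving the stale file.
from http.server import BaseHTTPRequestHandler, HTTPServer

def cached_version():      # hypothetical: version of the file we hold
    return 25047

def announced_version():   # hypothetical: version from the DNS TXT record
    return 25048

class MirrorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if cached_version() < announced_version():
            self.send_response(503)                 # temporarily unavailable
            self.send_header("Retry-After", "300")  # ask client to retry
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"...database bytes would go here...\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MirrorHandler).serve_forever()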


On Thu, 18 Oct 2018 22:34:03 +0000
"Joel Esler (jesler)" <jes...@cisco.com> wrote:

Cloudflare will grab the file from our infrastructure once it's
been requested. (Otherwise it wouldn't know it was there; we
can't push into Cloudflare.) But we have discussed a few ideas
internally that I think will fix this; let us try a couple of things
and see if that cuts down on the delays.

On Oct 18, 2018, at 1:55 PM, Eric Tykwinski
<eric-l...@truenet.com> wrote:

As far as I know you don't upload to Cloudflare; it's more a question
of how often Cloudflare checks to see whether the files have changed.
So you set up a TTL for the check frequency on the Cloudflare
website.

Since updates are new files, they should just be pulled from the main
ClamAV server when you ask for them.
So you ask for daily-25048.cdiff, and Cloudflare will ask ClamAV's
main server for that file and cache it.

So my guess would be the same as the TTL on the DNS check:
current.cvd.clamav.net. 1800
IN      TXT "0.100.2:58:25048:1539883740:1:63:48006:327"
i.e., 30 minutes for older files, while new ones are pulled as soon as
they come in.
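
You can check the fields yourself; going by the record above, the
third field is the daily version. Quick sketch (needs the third-party
dnspython package, and the field layout is inferred from the record
quoted above):

import dns.resolver   # third-party: dnspython

answer = dns.resolver.resolve("current.cvd.clamav.net", "TXT")
txt = answer[0].strings[0].decode()
fields = txt.split(":")
print("ClamAV release:   ", fields[0])   # 0.100.2
print("main.cvd version: ", fields[1])   # 58
print("daily.cvd version:", fields[2])   # 25048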

Sound about right Joel, Micah?

Sincerely,

Eric Tykwinski
TrueNet, Inc.
P: 610-429-8300

-----Original Message-----
From: clamav-users [mailto:clamav-users-boun...@lists.clamav.net]
On Behalf Of Paul Kosinski
Sent: Thursday, October 18, 2018 1:23 PM
To:
clamav-users@lists.clamav.net
Subject: Re: [clamav-users] Latest report on update "delays"

How can it take 10, 20, 30 or more minutes (and I've seen well over
an hour at times) to upload the ClamAV database to Cloudflare?
Does it have to be uploaded separately (and maybe sequentially)
from Cisco to each Cloudflare mirror? Or is Cloudflare's automatic
propagation slow?


On Thu, 18 Oct 2018 16:07:38 +0000
"Micah Snyder (micasnyd)"
<micas...@cisco.com> wrote:

Hi Paul,

I realize it may look misleading to state that you're up to date
when a newer database has been announced.  However, if the newer
database is still being uploaded to the CDN, it is more accurate
to say that the DNS announcement is premature.

The change to freshclam is an effort to ignore potentially
premature database version numbers listed via DNS.
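
Conceptually the check amounts to something like this (a simplified
sketch in Python; the real freshclam logic is C and more involved):

def decide(local_version, dns_version, cdn_version):
    """Pick an action from the three daily.cvd versions in play."""
    if cdn_version > local_version:
        return "download the update to %d" % cdn_version
    if dns_version > cdn_version:
        return "announcement premature; retry later, don't blacklist"
    return "up to date"

# The situation from this thread: DNS already advertises 25048,
# but the CDN still serves 25047.
print(decide(local_version=25047, dns_version=25048, cdn_version=25047))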

Micah Snyder
ClamAV Development
Talos
Cisco Systems, Inc.