On Friday 18 June 2004 02:08, Christopher Sharp wrote:
> On Thu, 17 Jun 2004 21:40:36 -0800, John Andersen <[EMAIL PROTECTED]> wrote:
> >On Thursday 17 June 2004 03:32, Christopher Sharp wrote:
> >> Figures to date indicate it's NOT a drop in detection rates, but
> >> instead has all the hallmarks of someone having produced a spam package
> >> that actually manages to avoid all SpamNet/Razor algorithms.
> >
> >Razor does not have detection "algorithms" as such but relies on matching
> >spam previously reported to razor by someone prior to your getting it.
> >So if those reporting spam are less active, or if razor has network issues
> >then spam gets thru.
>
> OK, I'll bite...
>
> To quote the Razor readme:
> "Detection is done with statistical and randomized signatures that
> efficiently spot mutating spam content."
>
> Razor as a database naturally does not contain any detection algorithms.
> The clients, namely razor-agents, do this. The database simply stores the
> hashed signatures for determined spam. Naturally it's more complicated
> than that once you add in TeS, revocations etc.
>
> Ultimately anything contributing signatures to the Razor database must
> apply sufficiently similar algorithms for the database to be of any value
> to the client.
Razor's mode of operation is to have hashes of known spam submitted to a
database. When your client gets an email, it hashes that email and queries
the database in a process not unlike a DNS query, except it uses the hash
rather than a domain name. If the hash is found in the DB, then your client
knows it's spam.

The clients can't tell it's spam on their own; they need the database. The
client's only "smarts" is in the hashing of the message body and, to a
lesser extent, the hashing of any URLs found in the body.

So, getting back to the original poster's question: spam need only be NEW
(not yet reported to the Razor DB) and it will get through.

It is not correct to think of spam that gets through the Razor check as
having been designed to evade Razor, or to think someone has "produced a
spam package that actually manages to avoid all SpamNet/Razor algorithms".
The postulated algorithms do not exist in that sense. The algorithms do not
"detect" spam. The only algorithms that exist (if you can call them that)
are those that HASH the spam body and submit it for a Razor check. Once
submitted to the database (enough times), anything, ANYTHING AT ALL, will
henceforth be considered spam.

Therefore it is not necessary to postulate a method of avoiding Razor's
algorithms; it is merely necessary to assume (in order of probability):

- A flood of new spam not previously seen
- Slowness or technical problems with the Razor database
- Spam reporting to Razor falling off (people on vacation, etc.)

Further, you stated (or were you still quoting?):

> Razor as a database naturally does not contain any detection algorithms.
> The clients, namely razor-agents, do this.

No, they don't. They simply contain hashing algorithms. Clients do not
detect spam. They hash all email and submit each hash to the Razor database
to ask if it has any record of that hash.

-- 
_____________________________________
John Andersen
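P.S. The report-then-lookup flow described above can be sketched in a few
lines of Python. Everything here is illustrative: the SHA-256 hash, the
whitespace/case normalization, and the in-memory set are stand-ins for
Razor's actual signature schemes and its networked catalogue servers.

```python
import hashlib

# Toy stand-in for the Razor database: a set of reported spam signatures.
spam_db = set()

def signature(body: str) -> str:
    """Hash a lightly normalized message body (illustrative scheme only)."""
    normalized = " ".join(body.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def report_spam(body: str) -> None:
    """What reporters do: submit the signature of known spam to the DB."""
    spam_db.add(signature(body))

def is_spam(body: str) -> bool:
    """What the client does: hash the mail and ask the DB if it has seen it."""
    return signature(body) in spam_db
```

Note that `is_spam` returns False for any message nobody has reported yet,
which is exactly why brand-new spam sails through until enough people
report it.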