On Wed, Apr 21, 2004 at 10:55:34PM -0800, John Andersen wrote:

# You will remember that Vipul several times requested that people
# review each submission by eyeball before submitting to Razor.  When
# it became obvious that this was not being done, in favor of
# automated submissions from SA and bayes and bogofilters, he suddenly
# became quiet on this subject.  Perhaps he realized that there was
# just too much spam out there for human review, and that the spammers
# were changing it just enough with each batch to evade Razor, and
# therefore he was in an arms race he could not win.

I think that's a seriously off-base conclusion to come to, and your
negative bias really shows.  What about a more likely explanation:
that he got busy scaling the system and building more functionality
and features?  Or is tending to this list more important?

# That is, he could not win (or even keep up) unless automated
# submission became the norm.

Transparent submission is the goal, not automated submission.

# What I maintain, is that it will STILL always be hopelessly behind,
# and therefore effective ONLY at detecting yesterday's spam.  What
# you get tomorrow will be totally different and will not appear in
# the razor database.

You have obviously not used SpamNet, the commercial equivalent of
Razor2, which accesses the exact same backend and network of
information as the razor2-agents.

I suggest you try it (free for the first 30 days) before perpetuating
any more of this factually incorrect, negatively-biased nonsense.

# Part of the problem is the trust system:

Upon what facts do you base your presumptions that "it takes a
significant number of submissions to get something to rank high enough
to be called spam", to yield the underlying implication that the
significant number is untenable for real-time scenarios?  Or that
using "a few submitters with customized spam-traps" are more effective
than a network of several hundred thousand human beings transparently
working in concert?  Or that "by the time [the trust level rises] the
spammer has morphed the spam so that the process has to begin all over
again"?  Or that polymorphisms "defeat the very core of Razor's hashing
mechanism"?
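For what it's worth, the reason polymorphism does not trivially defeat
the hashing is that Razor2's signatures are fuzzy and
locality-sensitive (Nilsimsa-style) rather than exact cryptographic
hashes, so a slightly "morphed" spam still produces a nearby digest.
Here is a minimal illustrative sketch -- NOT Razor's actual algorithm;
the trigram scheme, digest size, and thresholds are my own
simplification:

```python
import hashlib

def fuzzy_digest(text, bits=256):
    """Toy locality-sensitive digest (Nilsimsa-flavored, not Razor's code).

    Each overlapping character trigram votes for one bucket; buckets
    with above-average votes become set bits.  A small edit changes
    only a handful of trigrams, so similar messages share most digest
    bits -- unlike SHA/MD5, where one changed byte flips ~half the output.
    """
    counts = [0] * bits
    t = text.lower()
    for i in range(len(t) - 2):
        trigram = t[i:i + 3].encode()
        # Hash the trigram into one of `bits` buckets.
        h = int.from_bytes(hashlib.md5(trigram).digest()[:4], "big")
        counts[h % bits] += 1
    threshold = sum(counts) / bits
    return [1 if c > threshold else 0 for c in counts]

def similarity(a, b):
    """Fraction of digest bits on which two messages agree."""
    da, db = fuzzy_digest(a), fuzzy_digest(b)
    return sum(x == y for x, y in zip(da, db)) / len(da)
```

With a scheme like this, a spammer's per-batch mutations move the
digest only a few bits, so a similarity threshold still matches it
against the signature already reported to the network.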

You speak so authoritatively without *any* knowledge of the facts
relating to how the system actually works or scales.  With all due
respect, I have not seen something so ignorant in a very long time.

Perhaps you should ask more directed questions before presuming the
worst and posting such an ill-conceived, factually incorrect
indictment of what Razor2/SpamNet actually is -- currently the world's
most effective example (that I know of) of a real-time collaborative
filtering system supported by a reputation/trust metric.


--jordan



_______________________________________________
Razor-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/razor-users
