On Mon, Oct 28, 2002 at 01:59:17PM -0500, Matt Kettler wrote:
> Yes, it is definitely security through obscurity. However, I'll suggest
> that you consider whether it is possible to have a publicly disclosed
> algorithm for this system which is NOT possible to abuse. I'll admit I'm
> not a god of coding, but I can think of no such mechanism. If you can
> think of a rough general mechanism that suits the need and isn't subject
> to abuse, I'd love to hear it. I don't think the code even needs to be
> made public to discuss the possibility of a secure algorithm that does
> this.
>
> The system must:
> 1) Have some algorithm for automatically increasing the trust of a
> reporter based on their good submissions.
>
> 2) Have a strong mechanism for penalizing bad reporters based on revokes
> from trusted users. Reporters configured with bad spamtraps that wind up
> reporting every post to a given legitimate mailing list must be reduced
> to the point where their input is completely ignored.
>
> As a caveat to 1): to be secure, there must be no way for a malicious
> user to artificially increase his score by sending 'don't care' spams
> and then reporting them. It should also be impractical for a malicious
> user to raise their score to absurdly high levels by aggressively
> reporting emails from competing spammers.
>
> As a caveat to 2): it needs to be impossible for a malicious user who
> has somehow gained a reasonable level of trust to send spam and then use
> the report function to drive down the trust score of a legitimate
> reporter.
>
> There are arguments about security based on how much it takes to build
> and to lose trust, but in the case of Razor it is very easy for a
> spammer to open several accounts, build their trust, and then leverage
> that trust to decimate the scores of legitimate reporters.
>
> It's unfortunately very difficult to meet the needs of 2) without making
> it so that a trusted user can do more harm to others than they would do
> to themselves by backstabbing others.
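Just to make the discussion concrete: the two requirements above could be sketched as something like the following. This is a hedged strawman of my own, not Razor's actual TeS; the class, the constants, and the trust-weighted revoke rule are all illustrative assumptions.

```python
# Strawman trust model for the two requirements above. Everything here
# is an illustrative assumption, not Razor's actual TeS algorithm.

TRUST_CAP = 100.0  # assumed ceiling so trust cannot grow without bound


class Reporter:
    def __init__(self):
        self.trust = 1.0  # new reporters start with minimal trust


def on_good_submission(r: Reporter) -> None:
    """Requirement 1: trust grows with confirmed-good submissions.

    Growth shrinks as trust approaches the cap, so flooding the system
    with 'don't care' spam (or reports on competing spammers) cannot
    push trust to absurdly high levels -- addressing the caveat to 1).
    """
    r.trust += (TRUST_CAP - r.trust) * 0.01


def on_revoke(r: Reporter, revoker: Reporter) -> None:
    """Requirement 2: revokes from trusted users penalize the reporter.

    The penalty is multiplicative, so a reporter behind a bad spamtrap
    is driven toward zero quickly. The penalty is also weighted by the
    revoker's trust relative to the victim's, so a fresh low-trust
    account cannot decimate a high-trust reporter -- addressing the
    caveat to 2).
    """
    weight = revoker.trust / (revoker.trust + r.trust)
    r.trust *= 1.0 - 0.5 * weight
```

Even this toy version shows where the hard part is: the weighting in `on_revoke` only blunts, and does not eliminate, the multi-account attack the quoted text describes, since a spammer can still build several accounts up to high trust first.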
[continues to put his nose in where it doesn't belong]

Yes, this is very difficult to do, and it requires a lot of work, assuming it is even possible. I am not saying that people on this list should come up with a brand new, foolproof trust algorithm. I am saying that people should be aware that such things *already exist* in research/academic circles, and there is little to no need to reinvent the wheel. These solutions are not foolproof, but they might be better than a security-by-obscurity algorithm developed by coders who already have all sorts of other responsibilities. Even if the TeS algorithm is head and shoulders above the large existing body of research in this area, it is still good to know how it compares to other systems.

Try reading:

http://www.cs.umd.edu/projects/nice/papers/trust-closures.pdf

and looking through some of the papers/projects listed in the appendix, and you will find that this is a topic a lot of smart people (not necessarily including myself in that category :) have already put a lot of time into.

In the end, I agree with the guy who posted that the reason Cloudburst hasn't published the algorithm has less to do with security and more to do with economic and marketing factors... but what do I know.

- Rob