James Craig Burley wrote:
They've already done it to cause C/R to be disabled. Remember last
fall, when C/R was the One Big Solution That Would Solve Everything? I was having similar arguments with people claiming it was 100% effective, with 0% false positives, blah blah blah.
SPF doesn't claim to solve everything. It just aims to reduce forgery.
There are plenty of other reasons to reject SPF as a useful anti-forgery tool, which I haven't really touched on (other than SRS, they include the complexity of managing SPF information for some types of hosts, the low degree of actual utility of what SPF calls "forgery", which is not really detection of forgery so much as lack of authorization to send email under a domain name, and so on). See other lists (like the qmail list) for those discussions, going back a few months at least.

I am currently using it at the end of checking, in my test. If an email makes it
all the way past everything else, then it looks at the SPF result. Actually it
only cares about a hard fail and a hard pass. A hard fail cans the message,
a hard pass lets it through. That is my current experiment. I don't expect that
to be the norm.
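The end-of-pipeline policy described above could be sketched roughly like this (a minimal sketch, not a real implementation; `spf_policy` is a hypothetical helper, and the result names follow the SPF specification's terminology):

```python
# Hypothetical sketch of the policy above: only a hard "fail" or a hard
# "pass" is acted on; every other SPF result falls through to whatever
# the rest of the mail pipeline would do.

def spf_policy(spf_result: str) -> str:
    """Map an SPF check result (spec names: pass, fail, softfail,
    neutral, none, temperror, permerror) to an action."""
    if spf_result == "fail":   # hard fail: can the message
        return "reject"
    if spf_result == "pass":   # hard pass: let it through
        return "accept"
    # softfail, neutral, none, errors: make no decision here
    return "continue"
```

The point of running it last is that SPF only breaks ties for mail that has already survived every other check.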
SRS doesn't appear to me to be that exciting at the moment. It is an attempt to
encrypt the return path so that legit bounce-backs get passed through. The
problem is that it appears to me that the encrypted "return path" doesn't change.
So if you know the SRS address is
<[EMAIL PROTECTED]>
you can send me a whole bunch of "pretend" bounce-backs. And this address will inevitably end up on every newsgroup and message board.
From what I understand, they realize this issue and are working it out.
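For what it's worth, the fix they are working toward is roughly this: the SRS0 form carries a short hash plus a timestamp field, so the rewritten address rotates over time and stale "bounces" can be rejected. A minimal sketch (the secret key, names, and truncation length here are my assumptions; the real scheme is specified in the SRS draft):

```python
import hmac
import hashlib

# SRS0 address shape: SRS0=<hash>=<timestamp>=<orig-domain>=<orig-local>@<forwarder>
B32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
SECRET = b"hypothetical-per-site-secret"  # assumption: each forwarder keeps its own key

def srs_timestamp(day: int) -> str:
    # Two base32 characters covering a ~1024-day cycle, so the
    # rewritten address changes as the day counter advances.
    d = day % 1024
    return B32[(d >> 5) & 31] + B32[d & 31]

def srs0_rewrite(local: str, domain: str, forwarder: str, day: int) -> str:
    ts = srs_timestamp(day)
    # Short keyed hash over timestamp + original address, so outsiders
    # can't mint valid-looking SRS addresses even if they see one.
    mac = hmac.new(SECRET, f"{ts}{domain}{local}".lower().encode(),
                   hashlib.sha1).hexdigest()[:4]
    return f"SRS0={mac}={ts}={domain}={local}@{forwarder}"

a = srs0_rewrite("alice", "example.org", "forwarder.net", day=100)
b = srs0_rewrite("alice", "example.org", "forwarder.net", day=300)
assert a != b  # the timestamp makes the address rotate over time
```

The rotation doesn't stop someone from replaying an address they harvested today, but it bounds the window, which is the issue being worked out.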
And what I've come to believe is that any system that depends on assessing trust based on *incoming*, externally controlled information, or that predictably overreacts to a correspondingly small external stimulus, has some big strikes against it from the get-go.
Such systems include C/R, SPF, DomainKeys, and im2000, among others. They'll all (probably) work reasonably well in the small; I believe they won't scale up sufficiently to deal with today's Internet, and are provably unable to scale up to, say, 10 billion users.
Well, email shemail. The bigger picture is everyone having at least 30-50 devices in their house that all connect to the Internet and send and receive messages. Like refrigerators, toasters, and what-not. People will be hacking these things and mysteriously infinite cases of beer will arrive via grocery delivery. Talk about denial of service.
There have to be human-friendly, armored SSL- or PGP-signed messages, and messaging servers that can handle them.
I am trying to design (in my head) a sort of "ideal reverse DNS", from the ground up, that permits scalable lookups of externally-provided identifiers (i.e. domain names) in the context of something as large and as loosely managed as the Internet.
So far, I have not been able to do it. The real-world problems to which it corresponds seem similarly unsolvable.
I agree that DNS could use a re-write.
Waitman