Matthew Toseland wrote:
>> Well, is some sort of auditing mechanism plausible?
I think the concept is definitely plausible; all I'm saying is that we're not in a position to roll out a mechanism at short notice if someone releases a hacked node that gets better performance by dropping inserts - which they might, if we implement a tit-for-tat mechanism based on requests. But maybe I'm being too paranoid - by definition, leechers aren't prepared to go to much effort. We could probably deter them by ROT13ing the protocol docs. ;-)

>>>> This isn't necessarily insoluble either: Although
>>>> most successful requests will likely come from a new node (on a large
>>>> network), we won't necessarily succeed in our attempt to connect to
>>>> them.
>
>> Sorry, "come with" not "come from".

OK, so if almost every successful request gives us a new node to try, can't we just work our way round the network, getting one free request from each node, and start again when the first victim has forgotten about us (which it has to do eventually, since we might have used a static IP that's now been given to an innocent node)?
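To make the worry concrete, here's a toy simulation of that walk - not Freenet code, and every class, address and number in it is invented - where each node grants one free request to any requester it hasn't seen before, and eventually forgets who it has served:

import java.util.*;

/**
 * Toy model of the "one free request from each node" walk described
 * above. This is not Freenet code; every class, name and number here
 * is made up just to illustrate the argument.
 */
public class LeechWalk {

    /** A node that serves one free request per requester it remembers. */
    static class Node {
        final Set<String> seenRequesters = new HashSet<>();

        /** Returns true (request served) if this requester is new to us. */
        boolean serve(String requesterAddr) {
            return seenRequesters.add(requesterAddr);
        }

        /** The node eventually forgetting old requesters (timeout, restart, ...). */
        void forgetEveryone() {
            seenRequesters.clear();
        }
    }

    static int walk(List<Node> network, String leecherAddr) {
        // Assume, as conceded in the quoted paragraph, that each successful
        // request introduces us to one node we haven't met yet, so starting
        // from a single known node we can reach the whole network.
        int served = 0;
        for (Node victim : network) {
            if (victim.serve(leecherAddr)) {
                served++;
            }
        }
        return served;
    }

    public static void main(String[] args) {
        List<Node> network = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            network.add(new Node());
        }
        String leecherAddr = "203.0.113.7"; // the leecher's current IP

        System.out.println("First pass: " + walk(network, leecherAddr)
                + " requests served, none returned");

        // Once the first victims have forgotten us (or our old IP now belongs
        // to an innocent node), the same walk works all over again.
        network.forEach(Node::forgetEveryone);
        System.out.println("Second pass: " + walk(network, leecherAddr)
                + " more requests served");
    }
}

The only knob that limits this is how long each node remembers its requesters, and that's exactly the bit we can't push too far if old IPs end up belonging to innocent nodes.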
Cheers,
Michael