This entrapment approach is off-the-scale stupid. Not to mention probably unconstitutional, but IANAL.
It presupposes that all accesses of the URL are human-generated; moreover, that they're deliberately human-generated. Neither of those things is true.

For example, an HTTP proxy may pre-fetch content from links shown on a given page and scan them for malware before deciding whether to present those links to a user (or whether to present them in a modified fashion that signals its assessment of their contents).

As another example: someone who noticed one of these links and figured out what it was could embed it in their latest spam run, give it an innocuous title, and probably get people to hit it without any idea what it was. (Or they could just embed it in malware, and have the malware access the link as soon as it fires up on any system it penetrates.)

And as yet another example, anyone running a web crawler against their own email, web cache, and other materials would hit this URL (among potentially millions of others) without ever noticing. A sketch of just how little machinery that takes is below.

And it just gets worse from there.

Good thing there are actually no serious issues to investigate, like, oh, I dunno, massive corporate looting of the country or high-level government corruption and cronyism, or anything like that.

---Rsk
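P.S. To make the crawler point concrete, here's a minimal, purely illustrative Python sketch of an indiscriminate link-fetcher, i.e. the kind of thing any prefetching proxy, mail scanner, or personal crawler already embodies. PAGE_URL and fetch_and_scan are made-up names, not taken from any real product; the point is just that every fetch happens with zero human intent behind it.

# Hypothetical sketch of an indiscriminate link prefetcher/crawler.
# It fetches every link on a page with no human judgment about what
# any individual URL "means" -- exactly the automated access that the
# entrapment scheme pretends doesn't exist.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

PAGE_URL = "http://example.com/some-page"  # placeholder starting point

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag seen on the page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(PAGE_URL, value))

def fetch_and_scan(url):
    # A proxy would scan the response for malware here; a crawler would
    # index it. Either way, the URL gets hit without a human deciding to.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()
    except Exception:
        return None

page = fetch_and_scan(PAGE_URL)
if page:
    parser = LinkExtractor()
    parser.feed(page.decode("utf-8", errors="replace"))
    for link in parser.links:
        fetch_and_scan(link)  # every link followed; no human ever involved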
