On Fri, Aug 29, 2008 at 6:04 AM, T Biehn <[EMAIL PROTECTED]> wrote:
> Heartily Disagree,
> Standards are (usually) parse-friendly. I really don't feel like
> inventing some new indicator for your scripts of dubious worth.

They aren't meant to be indicators. Any scripts (I imagine - I don't
have any and don't plan on writing any) will work with unobfuscated
URLs just as well.

> Obfuscate the URLs with ROT13 so it's harder for humans and easier for
> machines?

I'm willing to be corrected on this, but I believe the main reasons
for obfuscation are:
1) Stopping search engines like Google from following such harmful
links (and potentially caching them for the unwary).
2) Preventing accidental[1] or unwitting[2] following of the links,
which can leave unwanted local copies behind.

> What's hard about matching against a URI then checking the link
> yourself (wget/curl/* + TOR).
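Nothing, really - and for reference, the checking side might look
something like this in Python (a rough sketch, assuming the requests
library with its SOCKS extra installed and a local Tor client
listening on its default port, 9050):

    import requests  # needs: pip install requests[socks]

    # Route the fetch through the local Tor SOCKS proxy; the socks5h
    # scheme makes DNS resolution happen over Tor as well.
    proxies = {
        "http":  "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }
    response = requests.get("http://www.example.com/",
                            proxies=proxies, timeout=60)
    with open("sample.bin", "wb") as f:
        f.write(response.content)  # examine offline, in a sandbox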

What's so hard about decoding ROT13?
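Python's standard library even ships the codec; a minimal sketch:

    import codecs

    munged = "uggc://jjj.rknzcyr.pbz"
    print(codecs.decode(munged, "rot_13"))  # -> http://www.example.com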

> NOW, if you WOULDN'T want your post automagically parsed, then use
> whatever non-standard notation you feel like.
> I may be missing something, but this seems like a highly irrational
> request. What is the advantage of knowing whether a machine-parsed
> link was posted as broken or live? If you could parse the context of
> the post itself, you could determine the state of any link it
> provided; and knowing that your script doesn't address the context
> of the post, when are you in a situation where the link(s), and the
> poster-provided stat(us/ii :]) thereof, are useful independently of
> the context of the post?

[1] I would hope, however, that most of the people on this list
wouldn't be doing this.
[2] See 
http://www.pcworld.com/article/148004/avg_fixes_antivirus_software_skewing_web_site_statistics.html
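
Incidentally, whichever munging wins out, un-munging stays trivial for
scripts. A rough sketch that recovers live URLs from both Tyler's
hxxp form (quoted below) and ROT13 - the unmunge() helper is
hypothetical, not taken from any existing script:

    import codecs
    import re

    def unmunge(text):
        """Recover live URLs from hxxp- or ROT13-munged ones."""
        urls = []
        # Tyler's suggestion: hxxp:// or hxxps:// -> http(s)://
        for munged in re.findall(r"hxxps?://\S+", text):
            urls.append(munged.replace("hxxp", "http", 1))
        # ROT13'd URLs begin with uggc:// (ROT13 of http://)
        for munged in re.findall(r"uggcf?://\S+", text):
            urls.append(codecs.decode(munged, "rot_13"))
        return urls

    post = "See hxxp://www.example.com or uggc://jjj.rknzcyr.pbz"
    print(unmunge(post))
    # ['http://www.example.com', 'http://www.example.com']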

> -Travis
> On Thu, Aug 28, 2008 at 7:58 PM, Paul Herring <[EMAIL PROTECTED]> wrote:
>> For some reason, Gmail decided to actually turn the www.example.com
>> part of that string into a link (the first one I've noticed it do),
>> which suggests that Google, at least, will recognise some of these
>> URLs and possibly cache/scan them (preventing which was, I thought,
>> at least one premise behind breaking them in the first place).
>>
>> If it's munging that's easily reversible (especially for automated
>> scripts) that's required, perhaps ROT13? Anyone using Firefox may find
>> the LeetKey addon useful.
>>
>> uggc://jjj.rknzcyr.pbz
>>
>> On Thu, Aug 28, 2008 at 8:12 PM, Tyler <[EMAIL PROTECTED]> wrote:
>>> I concur.  I'm trying to set up an automated URL retrieval script,
>>> and there are just too many formats.  May I suggest it be as simple
>>> as replacing http/https with hxxp/hxxps?
>>>
>>> e.g. hxxp://www.example.com
>>>
>>> Tyler
>>>
>>>> Hi,
>>>> I was wondering if it would be more helpful if we could propose a
>>>> "standard" for posting broken URLs with some form of start/end indicator to
>>>> allow easier automated processing from the listings?
>>>>
>>>> ChrisB.
>>
>>
>>
>> --
>> PJH
>>
>> http://shabbleland.myminicity.com/sec



-- 
PJH

http://shabbleland.myminicity.com/tra
_______________________________________________
botnets@, the public's dumping ground for maliciousness
All list and server information are public and available to law enforcement 
upon request.
http://www.whitestar.linuxbox.org/mailman/listinfo/botnets
