Deleting the link.  When it goes away, we don't need to be historians.

Proactive?  I don't see much value in adding that as an additional
developer process. 

> the page:
> http://man.openbsd.org/OpenBSD-current/man1/gcc-local.1
> has this link: 
> http://www.research.ibm.com/trl/projects/security/ssp/
> that is 404. 
> 
> Only the Internet Archive, e.g.:
> https://web.archive.org/web/
> has it archived, most recently in 2014.
> 
> Please fix the link, e.g. point it to web.archive.org?
> 
> Maybe a proactive solution that scans pages for URLs, "wget"s them, and
> reports whether they return 404?
> 
> [user@notebook Desktop]$ wget 
> http://www.research.ibm.com/trl/projects/security/ssp/ -O - /dev/null
> --2016-10-16 18:34:36--  
> http://www.research.ibm.com/trl/projects/security/ssp/
> Resolving www.research.ibm.com (www.research.ibm.com)... 129.34.20.3
> Connecting to www.research.ibm.com (www.research.ibm.com)|129.34.20.3|:80... 
> connected.
> HTTP request sent, awaiting response... 404 Not Found
> 2016-10-16 18:34:37 ERROR 404: Not Found.
> [user@notebook Desktop]$ 
> 
> Many thanks!
> 
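For what it's worth, the "proactive scan" idea above could be sketched roughly as below. This is only an illustration of the suggestion, not anything the project uses: the URL regex, the HEAD-request approach, and the function names (`extract_urls`, `check_url`) are all my own assumptions.

```python
import re
import sys
import urllib.error
import urllib.request

# Rough pattern for http/https URLs in plain text (an assumption,
# not a full RFC 3986 parser).
URL_RE = re.compile(r'https?://[^\s<>"\')]+')

def extract_urls(text):
    """Return all http/https URLs found in the given text."""
    return URL_RE.findall(text)

def check_url(url, timeout=10):
    """Return the HTTP status code for url, or None on a network error.

    Uses a HEAD request to avoid downloading the body; some servers
    answer HEAD differently than GET, so treat results as a hint only.
    """
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # e.g. 404 Not Found
    except urllib.error.URLError:
        return None            # DNS failure, refused connection, etc.

if __name__ == "__main__":
    # Read page source on stdin, print "status url" per link found.
    for url in extract_urls(sys.stdin.read()):
        print(check_url(url), url)
```

Run as e.g. `man gcc-local | python3 linkcheck.py`; anything printing 404 (or None) is a candidate for removal, as in the case above.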
