Try wget (http://www.gnu.org/software/wget/wget.html). It's a small and
very useful command-line utility that downloads almost anything from the
web. It supports HTTP and FTP. In your case, you can point it at the HTML
page and instruct it to follow the links, downloading any file that is
referenced. You can also filter the downloads to include only *.pdf files.
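A minimal invocation along those lines might look like this (the exact
URL is just the documentation page discussed below; adjust depth and
options to taste):

```shell
# Fetch the page, follow its links one level deep, and keep only PDFs.
# --no-directories drops the remote directory structure locally.
wget --recursive --level=1 --no-directories \
     --accept '*.pdf' \
     http://www.sapdb.org/7.4/sap_db_documentation.htm
```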

Regards, 

Martin

-----Original Message-----
From: PUB: Andreas Ernst [mailto:[EMAIL PROTECTED]] 
Sent: Friday, February 14, 2003 8:10 AM
To: Dittmar, Daniel; SAPDB
Subject: Re: Documentation


Hi Dittmar,

I am not very familiar with writing scripts. Maybe I will think about
doing it in Java.

Maybe it is something for the future.

regards
Andreas

Dittmar, Daniel schrieb:

>>I searched on ftp.sapdb.org and did not find the documentation PDFs from:
>>
>>http://www.sapdb.org/7.4/sap_db_documentation.htm
>>
>>It is not very comfortable to get all these files by hand.
>>
>>Where can I find them?
>>    
>>
>
>We do not provide an archive with all the PDFs as we'll probably forget 
>to update it with every changed PDF anyway. Because the privileges are 
>set up differently for www.sapdb.org and ftp.sap.com, it is also not 
>easy to upload the files to both servers at the same time. Depending on 
>your Perl or Python skills, it should be easy to write a short script 
>to extract all the href="[^"]*[.]pdf" links from the page and to 
>download them automatically.
>
>Daniel Dittmar
>
>  
>
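For reference, the script Daniel describes could be sketched in Python
roughly as follows. This is a minimal illustration, not tested against
the actual page; the function names and the latin-1 decoding are my
assumptions:

```python
import re
import urllib.request
from urllib.parse import urljoin

def extract_pdf_links(html, base_url):
    # Collect every href="...pdf" target and resolve it against the page URL.
    links = re.findall(r'href="([^"]+\.pdf)"', html, re.IGNORECASE)
    return [urljoin(base_url, link) for link in links]

def download_all(page_url, dest_dir="."):
    # Fetch the listing page, then download each referenced PDF.
    # latin-1 with "replace" is an assumption about the page encoding.
    html = urllib.request.urlopen(page_url).read().decode("latin-1", "replace")
    for url in extract_pdf_links(html, page_url):
        name = url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, f"{dest_dir}/{name}")
```

Calling `download_all("http://www.sapdb.org/7.4/sap_db_documentation.htm")`
would then save each linked PDF into the current directory.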



_______________________________________________
sapdb.general mailing list
[EMAIL PROTECTED]
http://listserv.sap.com/mailman/listinfo/sapdb.general
