Re: Wget download of eMule part File via HTTP or FTP

2005-11-22 Thread gentoo

On Mon, 21 Nov 2005, faina wrote:

 I'm very interested in an extension of Wget that is able to download, via the
 standard protocols, the .part file created by the eMule peer-to-peer protocol
 while it is still being downloaded from the remote peer.
 The idea is to download pieces of the incoming .part file from a remote site
 using the FTP or HTTP protocol.
 To do this, wget would have to re-read the .met file related to a specific
 .part file continuously (for example every 5-10 minutes). The .met file
 contains the information about which pieces (parts) of the .part file eMule
 is downloading. Knowing this information, Wget can fetch the appropriate
 pieces of the .part file over FTP/HTTP (which passes through firewalls) and
 then reassemble them into a local .part file. When the peer-to-peer program
 eMule completes the download from the remote peer, Wget completes the
 reassembly on the local computer at the same time.
 
 Does anybody think this feature makes sense in wget?

*** Hi all.

IMO this feature should NOT be part of wget's functionality. I would like wget 
to stay a pure FTP/HTTP downloader (with as many FTP and HTTP features as 
possible, such as authentication etc.), without functions specific to other 
protocols.

You can implement the requested functionality with a shell script and basic 
Linux utilities like diff, sed/grep, dd etc.
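
For example, a very rough (untested) sketch of fetching one piece by HTTP byte 
range and splicing it into the local .part file could look like this. The URL, 
the offset/length values and the parsing of the .met file are placeholders I 
made up, not real eMule details:

URL=http://remote.example/incoming/file.part   # hypothetical remote copy of the .part file
LOCAL=file.part                                # local .part file being reassembled
OFFSET=9728000                                 # start of the piece in bytes (would come from the .met file)
LENGTH=9728000                                 # size of the piece in bytes (would come from the .met file)
END=$(( OFFSET + LENGTH - 1 ))

# ask the server for just this byte range (the server must support HTTP Range requests)
wget -q --header="Range: bytes=${OFFSET}-${END}" -O piece.tmp "$URL"

# splice the piece into the local file at the same offset, without truncating it
dd if=piece.tmp of="$LOCAL" bs=1 seek="$OFFSET" conv=notrunc
rm -f piece.tmp

Run something like this from cron every 5-10 minutes for the pieces the .met 
file reports as complete, and you get the requested behaviour without touching 
wget itself.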


Bye.


Wolf.


Re: File name too long

2005-03-21 Thread gentoo


On Mon, 21 Mar 2005, Martin Trautmann wrote:

 is there a fix when file names are too long?
 
 bash-2.04$ wget -kxE $URL
 --15:16:37--  
 http://search.ebay.de/ws/search/SaleSearch?copagenum=3D1sosortproperty=3D2sojs=3D1version=3D2sosortorder=3D2dfts=3D-1catref=3DC6coaction=3Dcomparesoloctog=3D9dfs=3D20050024dfte=3D-1saendtime=3D396614from=3DR9dfe=3D20050024satitle=wgetcoentrypage=3DsearchssPageName=3DADME:B:SS:DE:21
=> 
 `search.ebay.de/ws/search/SaleSearch?copagenum=3D1sosortproperty=3D2sojs=3D1version=3D2sosortorder=3D2dfts=3D-1catref=3DC6coaction=3Dcomparesoloctog=3D9dfs=3D20050024dfte=3D-1saendtime=3D396614from=3DR9dfe=3D20050024satitle=wgetcoentrypage=3DsearchssPageName=3DADME:B:SS:DE:21'
 
 Proxy request sent, awaiting response... 301 Moved Permanently
 Location: 
 /wget_W0QQcatrefZ3DC6QQcoactionZ3DcompareQQcoentrypageZ3DsearchQQcopagenumZ3D1QQdfeZ3D20050024QQdfsZ3D20050024QQdfteZ3DQ2d1QQdftsZ3DQ2d1QQfltZ3D9QQfromZ3DR9QQfsooZ3D2QQfsopZ3D2QQsaetmZ3D396614QQsojsZ3D1QQsspagenameZ3DADMEQ3aBQ3aSSQ3aDEQ3a21QQversionZ3D2
  [following]
 --15:16:37--  
 http://search.ebay.de/wget_W0QQcatrefZ3DC6QQcoactionZ3DcompareQQcoentrypageZ3DsearchQQcopagenumZ3D1QQdfeZ3D20050024QQdfsZ3D20050024QQdfteZ3DQ2d1QQdftsZ3DQ2d1QQfltZ3D9QQfromZ3DR9QQfsooZ3D2QQfsopZ3D2QQsaetmZ3D396614QQsojsZ3D1QQsspagenameZ3DADMEQ3aBQ3aSSQ3aDEQ3a21QQversionZ3D2
=> 
 `search.ebay.de/wget_W0QQcatrefZ3DC6QQcoactionZ3DcompareQQcoentrypageZ3DsearchQQcopagenumZ3D1QQdfeZ3D20050024QQdfsZ3D20050024QQdfteZ3DQ2d1QQdftsZ3DQ2d1QQfltZ3D9QQfromZ3DR9QQfsooZ3D2QQfsopZ3D2QQsaetmZ3D396614QQsojsZ3D1QQsspagenameZ3DADMEQ3aBQ3aSSQ3aDEQ3a21QQversionZ3D2'
 
 Length: 46,310 [text/html]

 search.ebay.de/wget_W0QQcatrefZ3DC6QQcoactionZ3DcompareQQcoentrypageZ3DsearchQQcopagenumZ3D1QQdfeZ3D20050024QQdfsZ3D20050024QQdfteZ3DQ2d1QQdftsZ3DQ2d1QQfltZ3D9QQfromZ3DR9QQfsooZ3D2QQfsopZ3D2QQsaetmZ3D396614QQsojsZ3D1QQsspagenameZ3DADMEQ3aBQ3aSSQ3aDEQ3
 a21QQversionZ3D2.html: File name too long


*** This is not a problem of wget, but of your filesystem. Try to run:

touch 
search.ebay.de/wget_W0QQcatrefZ3DC6QQcoactionZ3DcompareQQcoentrypageZ3DsearchQQcopagenumZ3D1QQdfeZ3D20050024QQdfsZ3D20050024QQdfteZ3DQ2d1QQdftsZ3DQ2d1QQfltZ3D9QQfromZ3DR9QQfsooZ3D2QQfsopZ3D2QQsaetmZ3D396614QQsojsZ3D1QQsspagenameZ3DADMEQ3aBQ3aSSQ3aDEQ3a21QQversionZ3D2.html
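
touch will fail with the same "File name too long" error. On most Linux 
filesystems a single file name is limited to 255 bytes; you can check the 
limit of the directory wget writes into like this:

getconf NAME_MAX .        # maximum file name length on this filesystem, usually 255

The generated name above is far longer than that, so any program would fail 
to create it.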

 ... apart from that, the main thing I am looking for is how to obtain
 the search results. I still haven't managed to get the result from
 search.ebay.de and then download the links to cgi.ebay.de in one go:
 
   wget -kxrE -l1 -D cgi.ebay.de -H $URL

*** Maybe create a SHA1 sum of the request and store the result in a file named
after that sum (but you will not know what the original request was, if you 
don't keep some DB of requests). Or just do simple counting:

URL=...                                              # the search URL
sha1=$( echo -n "$URL" | sha1sum | cut -d' ' -f1 )   # hash only, without the trailing "-"
echo "$sha1 $URL" >> SHA1-URL.db                     # remember which hash belongs to which request
wget -O "$sha1.html" [other options] "$URL"
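
Later, if you want to know which request produced a saved file, grep the DB 
for the hash taken from the file name (just a sketch; $somefile stands for 
whatever file wget wrote above):

hash=$( basename "$somefile" .html )   # $somefile is the saved file, e.g. <sha1>.html
grep "^$hash " SHA1-URL.db             # prints the hash and the original URL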

or

URL=...                                # the search URL
i=0                                    # increment this for every new request
echo "$i $URL" >> URL.db               # remember which number belongs to which request
wget -O "search-$i.html" "$URL"
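
If you run several searches, the counting variant fits naturally into a loop; 
this sketch assumes a file urls.txt with one URL per line:

i=0
while read -r URL; do
    echo "$i $URL" >> URL.db           # number -> request mapping
    wget -O "search-$i.html" "$URL"
    i=$(( i + 1 ))
done < urls.txt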

Could this be your solution?

Wolf.