I presume wget needs to actually download the files; otherwise, how
would it know what other files are linked (if it's an HTML file)?
However, if you don't mind downloading the files and just want a
zero-byte copy of the structure afterwards, you could do something
like this.

find . -type f -exec dd count=0 if=/dev/zero of='{}' \;

You'll still have to download each file this way, though; the
truncation only happens afterwards.
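To make the two steps concrete, here's a sketch of the truncation pass
on a small sample tree (the `site` directory and its files are stand-ins
for whatever `wget --mirror <url>` would have downloaded):

```shell
# Stand-in for a tree that wget --mirror would have produced.
mkdir -p site/images
echo '<html></html>' > site/index.html
echo 'fake image data' > site/images/logo.png

# Truncate every file to zero bytes; directory structure and
# file names survive, only the contents are discarded.
find site -type f -exec dd count=0 if=/dev/zero of='{}' \; 2>/dev/null

# List the (now empty) files to confirm.
find site -type f -size 0
```

`dd count=0` copies zero blocks into each file, which opens it for
writing and truncates it, so no extra tools beyond find and dd are
needed.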

Tushar

On Thu, Jun 02, 2005 at 12:56:02PM -0700, wierzbowski wrote:
> Hi! Perhaps someone can help me craft a wget command that will
> accomplish the following. I want to mirror a website but I only want
> to download the directory structure and names of each file. I do not
> want to download the files themselves, but I need the names. A zero
> byte copy of each file would be ideal. I can't seem to figure out how
> to do this. Maybe it's not even possible. Any suggestions would be
> appreciated. Thanks!

-- 
|                 Turtle Networks Ltd.                 |
|  Unit 48, Concord Road, London W3 0TH                |
|  Tel: (020) 8896 2600     |  Fax: (020) 8992 7017    |
|  www.turtle.net           |  [EMAIL PROTECTED]       |
