On Fri, 17 Nov 2023, Michael Barnes wrote:
I have used this command string successfully in the past to download
complete websites.
$ wget --recursive --no-clobber --page-requisites \
      --html-extension --convert-links --restrict-file-names=windows \
      --domains website.com --no-parent website.com
Michael,
If I understand correctly, this wget command will produce a mirror of the
site in a directory I select, but it will still leave me to manually extract
the many data files (.xls, .xlsx, .md, .pdf) from each subdirectory/web
page.
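For instance, pulling them out after the fact means something along these
lines after every run (the directory names here are only examples):

$ find website.com -type f \( -name '*.xls' -o -name '*.xlsx' \
      -o -name '*.md' -o -name '*.pdf' \) -exec cp {} ~/collected/ \;

which flattens everything into one directory and loses track of which page
each file came from.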
Is there a way to download each web page and its contents into separate
subdirectories here?
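Something like the following layout is what I am after (the page and file
names are invented purely to illustrate):

website.com/
    page-one/
        page-one.html
        report.xls
    page-two/
        page-two.html
        data.pdf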
Thanks,
Rich