Hi, I'm using this amazing tool. I want to recursively download a website that contains a lot of big files located in subfolders; the home page consists only of folders.

I want to be able to resume the operation if something happens, but wget seems to ignore the folders and returns:

    --2021-01-09 15:46:11--  https://domain/subfolder/
    Reusing existing connection to domain:443.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]
    domain/subfolder: Is a directory
    Cannot write to ‘domain/subfolder’ (Success).

I killed the process during the download of the first big file, and when I start wget again it finishes very quickly.
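For a single file I would expect something like this to resume the partial download (just a sketch with a made-up URL, assuming the server supports range requests):

    # resume a single partial file (placeholder URL)
    wget -c https://xxx/B1/bigfile.bin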

These are some of the parameter combinations I tried:

    wget --recursive -l inf --no-clobber --page-requisites --no-parent --domains xxx https://xxx/


    wget -c -N --mirror -pc --convert-links -P ./mirror $SITE_URL
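In case it helps, this is the direction I was going to try next; only a sketch, with xxx as a placeholder domain, assuming -c can be combined with recursive retrieval (I dropped --no-clobber, since it seems to skip files that already exist locally, even partial ones):

    # resume (-c) plus recursive retrieval; xxx is a placeholder domain
    wget -c -r -l inf --no-parent --page-requisites \
         --convert-links --domains xxx -P ./mirror https://xxx/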

The structure of the target website looks like this, if it helps:

Home /
    A1 / A2
    B1 / Files
    C1

And when I kill the process, the second run just stops at the folders directly under the home.

I want to ask if I'm using the tool in the wrong way, or if it simply isn't possible to resume an operation like this.


Thanks a lot
