I know curl is a standard, but in my experience dealing with millions of random
URLs from Wikipedia, curl will fail on websites where wget succeeds. You'd
think something as basic as retrieving a web page would work equally well in
both, but they have different internal assumptions, and I find wget to be more
reliable on random sites. Wget is tailored for that purpose and curl is
not. <https://www.howtogeek.com/816518/curl-vs-wget/> .. as such, wget has many
sensible defaults set, but with curl you need to enable things yourself, and it
can be complicated to get it right. Curl is a general-purpose toolbox that can
be made webpage-centric with the right configuration. Wget is webpage-centric
out of the box.