I am aware of requests (or httpx, ...), but the idea is to do all that
with the standard library. If I have to install stuff, I could just
install wget/curl and be done.

Feature creep is an issue here, but just like http.server, one could be
really strict about covering only the 90% use case (download something and
print or save it) and not try to handle any corner cases.

The first code snippet was not supposed to be production-ready. Here's an
improved version which downloads only 1 MB at a time and prints it. The
only parameter could be the URL:

from urllib.request import urlopen
from sys import stdout

# Stream the body in 1 MB chunks instead of reading it all into memory.
with urlopen("https://coherentminds.de/") as response:
    while data := response.read(1024 * 1024):
        # Write the raw bytes to stdout; redirect to a file to save instead.
        stdout.buffer.write(data)

The user of this script could still decide to redirect stdout into a file,
so both use cases, printing and saving, would be covered.

IMHO, the benefit-cost ratio is quite good:
* can be a lifesaver (just like http.server) every once in a while, in
  particular in a container or testing context
* low implementation effort
* easy to test and to maintain (a rough test sketch follows below)
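
To back up that last point, here is a minimal sketch of what a test could
look like, assuming the hypothetical fetch() helper from the sketch above
lives in a module called fetch (both names are placeholders):

import io
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer

from fetch import fetch  # hypothetical module/function from the sketch above

class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a fixed payload for the test.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

class FetchTest(unittest.TestCase):
    def test_downloads_body(self):
        server = HTTPServer(("127.0.0.1", 0), _Handler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            buf = io.BytesIO()
            fetch(f"http://127.0.0.1:{server.server_port}/", out=buf)
            self.assertEqual(buf.getvalue(), b"hello")
        finally:
            server.shutdown()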

Tom