I notice that requestAux in httpclient.nim awaits client.parseBodyFut at the start of every request. This is not precisely correct behavior, though I'm being picayune here. HTTP/1.1 servers are expected to keep accepting requests before the previous response has been delivered, queuing them up in what is called an HTTP pipeline <https://en.wikipedia.org/wiki/HTTP_pipelining>. That can seriously speed things up in high-bandwidth, low-latency conditions, especially when loading a lot of small files.
What it should do instead, when the connection is HTTP/1.1, is save the future returned where it says await client.socket.send(body), or await client.socket.send(headersString) if there's no body. Substituting that future for client.parseBodyFut should enable HTTP pipelining in httpclient.nim without any other modifications, AFAICT.

Of course, that leaves it to the user to ensure that they actually send several requests in parallel, but it wouldn't be hard to write a requestURLs(urls: seq[string]) proc that grouped the URLs by hostname; then, for a given group, a single client could fire off several requests without awaiting them, and await them collectively. I think that would produce HTTP pipelining behavior, provided that clients are not waiting on parseBodyFut but instead on the completion of sending the previous request.

Honestly, it's not a huge deal. HTTP pipelining has its own problems <https://en.wikipedia.org/wiki/Head-of-line_blocking>, and unless you're writing a web browser loading up 300 thumbnails in parallel, most Nim scripts are only ever going to request files serially. I just wanted to mention it, since I noticed httpclient.nim doesn't seem to support pipelining. I think it's kind of cool that simply switching the future from "after the previous request's response has been received" to "after the previous request has been sent" would make pipelining just sort of... work. Nim's futures really are a powerful abstraction.
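For what it's worth, here is a rough sketch of what I mean by requestURLs. The names groupByHost and requestURLs are mine, not part of httpclient.nim, and with the current stdlib the per-host requests below still serialize on parseBodyFut; only with the proposed future swap would they actually pipeline on the wire:

```nim
import std/[asyncdispatch, httpclient, tables, uri, sequtils]

proc groupByHost(urls: seq[string]): Table[string, seq[string]] =
  ## Bucket URLs by hostname so that one client per host can issue
  ## its requests back to back over a single connection.
  for url in urls:
    result.mgetOrPut(parseUri(url).hostname, @[]).add url

proc requestURLs(urls: seq[string]): Future[seq[string]] {.async.} =
  ## One AsyncHttpClient per hostname; within a host, fire off all
  ## requests without awaiting, then await them collectively.
  for host, hostUrls in groupByHost(urls):
    let client = newAsyncHttpClient()
    # Start every request for this host before awaiting any of them.
    let futs = hostUrls.mapIt(client.getContent(it))
    for f in futs:
      result.add await f
    client.close()
```

Responses come back per host in request order, which matches the ordering guarantee HTTP/1.1 pipelining gives you anyway.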
