On Thursday, June 30, 2016 10:56:20 Tim Ruehsen wrote:
> On Thursday 30 June 2016 10:50:13 you wrote:
> > On Thursday, June 30, 2016 09:37:09 Tim Ruehsen wrote:
> > > On Wednesday 29 June 2016 16:07:04 Jeff Pohlmeyer wrote:
> > > > On Wed, Jun 29, 2016 at 12:59 PM, Tim Rühsen <[email protected]> wrote:
> > > > > I recently made a few comparisons between curl 7.50.0-DEV and wget
> > > > > 1.18 and was astonished that wget outperformed curl by a fair
> > > > > amount on single HTTPS request/response cycles.
> > > > >
> > > > > So my question is: what is 'wrong' with that version of curl? Or
> > > > > what did I overlook - maybe some special options?
> > > >
> > > > The requested page does not exist, so wget does not download anything.
> > > > curl, on the other hand, will download the error page by default.
> > >
> > > Wget *does* download the page, it just doesn't save it by default. But
> > > output is redirected to /dev/null anyway, and the page is only 1.5 kB.
> > >
> > > Tim
> >
> > Why does wget download pages to /dev/null? That does not sound exactly
> > useful.
>
> "The output is redirected to /dev/null". The response is downloaded, but
> not saved. Redirecting output (console and/or files) makes sense in this
> case, to keep any disk I/O out of the timings. But maybe I just don't get
> the intention of your question...
>
> Tim
Sorry. I did not realize you had -o /dev/null on the command line, and I
mistakenly thought that wget downloads all 404 responses to /dev/null by
default.

Kamil
-------------------------------------------------------------------
List admin: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:  https://curl.haxx.se/mail/etiquette.html
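[Editor's note: the benchmark setup discussed above can be sketched roughly as
follows. The URL is hypothetical (the thread does not name the test server),
and the exact flags Tim used are not shown in the thread; this only
illustrates redirecting output to /dev/null to keep disk I/O out of the
timings, and curl's -f/--fail flag for suppressing error bodies.]

```shell
# Hypothetical 404 URL; the real test target is not given in the thread.
URL="https://example.com/does-not-exist"

# wget fetches the 404 body but -O /dev/null discards it; -q silences logs.
time wget -q -O /dev/null "$URL"

# curl downloads the error page by default; -s hides the progress meter,
# -o /dev/null discards the body.
time curl -s -o /dev/null "$URL"

# With -f/--fail, curl instead exits nonzero (22) on HTTP errors >= 400
# and does not emit the error body at all.
curl -sf -o /dev/null "$URL" || echo "curl reported an HTTP error"
```

Note that both tools still perform the full HTTPS request/response cycle
here; only the saving of the response body is suppressed.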
