Follow-up Comment #1, bug #44966 (project wget):
For me, it makes perfect sense to make -q quiet *unless* an error
happens, rather than adding an extra command-line switch. When things screw up
(and especially if you're running wget in a script) you want wget to fail
noisily, instead of
On Thu, Apr 30, 2015 at 8:39 AM, Daniel Stenberg dan...@haxx.se wrote:
Is there another reason to use libevent than simply the fact that an
nghttp2 example uses it?
As current wget isn't libevent based, it seems like a pretty major
redesign to change it to use libevent only for the purpose
Follow-up Comment #2, bug #44966 (project wget):
Not really. Sometimes I expect Wget to fail and do not want its output
messing up my script. In other cases, I know how to handle Wget's failures
using exit codes and do not want the output to make things ugly.
I usually run wget with -q
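The exit-code style of scripting described above can be sketched as follows. The helper name and structure are mine, made up for illustration; the exit codes 0, 4, and 8 are the ones documented in wget's manual:

```python
import subprocess

# A subset of wget's documented exit codes (from the manual).
WGET_STATUS = {
    0: "success",
    4: "network failure",
    8: "server issued an error response (e.g. 404, 500)",
}

def fetch_quietly(url):
    """Hypothetical helper: run wget -q and interpret the exit code
    in the script instead of letting wget print anything."""
    result = subprocess.run(["wget", "-q", url])
    return WGET_STATUS.get(result.returncode,
                           f"other error ({result.returncode})")

# The script decides what to say; wget itself stays silent:
print(WGET_STATUS[8])
```

With -q plus exit codes, the script owns all of the output, which is exactly the use case described here.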
On 04/29/2015 03:03 PM, m_balewski wrote:
Hi,
It's my first mail here, so I would like to welcome you all! wget is great (I've been
using it for years) and I wish to try to look into its code and maybe make some changes some day :)
But for now, I have a problem... I've just downloaded the sources and
Hi Hubert,
Congrats on being selected for GSoC!
Here is a researcher's report about testing TFO.
https://reproducingnetworkresearch.wordpress.com/2014/06/03/cs244-14-tcp-fast-open-2/
Some additional thoughts:
- TFO won't work with HTTPS as long as the used SSL library does not support
TFO.
-
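To make the TFO point concrete, here is a minimal sketch (my own illustration, assuming a Linux 3.7+ kernel; this is not wget code) of enabling TCP Fast Open on a listening socket. For HTTPS, the TLS library itself would have to send its first handshake bytes in the SYN payload, which is why TFO support cannot be bolted on from outside the SSL library:

```python
import socket

# Enable TCP Fast Open on a server socket (Linux >= 3.7).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # any free port, loopback only
# The option value is the maximum queue of pending TFO requests.
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
srv.listen()
print("TFO enabled on port", srv.getsockname()[1])
srv.close()
```

A plain-TCP client would then use the fast-open path on connect; for TLS, the equivalent step has to happen inside the SSL library's handshake code.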
Follow-up Comment #3, bug #44966 (project wget):
1. For heaven's sake, give wget a way to print nothing except error
messages, like most other commands can do.
2. You need a new switch for this, so as not to break old scripts.
On Thursday 30 April 2015 13:35:06 Gisle Vanem wrote:
Tim Ruehsen wrote:
Some additional thoughts:
- TFO won't work with HTTPS as long as the used SSL library does not
support TFO.
Isn't SSL in Wget already rather slow? Due to the way SSL_Read()
is called in a SIGALRM-handler or
Tim Ruehsen wrote:
Some additional thoughts:
- TFO won't work with HTTPS as long as the used SSL library does not support
TFO.
Isn't SSL in Wget already rather slow? Due to the way SSL_Read()
is called in a SIGALRM-handler or separate Win32-thread for
all (?) HTTPS reads.
There it goes. Comments are more than welcome.
http://www.burgersoftware.es/gsoc2015.pdf
On 04/29/2015 11:06 AM, Ander Juaristi wrote:
Hi all,
I'm happy to announce that my proposal for GSoC '15, "Improve Wget's Security",
has been accepted.
In the following months, I'll work together with the
Also a pretty good view on TFO that I just stumbled upon:
https://bradleyf.id.au/nix/shaving-your-rtt-wth-tfo/
Regards, Tim
On Thu, 30 Apr 2015, Gisle Vanem wrote:
Hard to tell since I didn't find any large files I could D/L via SSL. You
have one? But some quick tests (only a 48 kByte file):
Here's an HTTPS URL that gives you a 40651008-byte Firefox installation:
On Thursday 30 April 2015 17:01:03 Daniel Stenberg wrote:
On Thu, 30 Apr 2015, Gisle Vanem wrote:
Hard to tell since I didn't find any large files I could D/L via SSL. You
have one? But some quick tests (only a 48 kByte file):
Here's an HTTPS URL that gives you a 40651008-byte Firefox
Tim Ruehsen wrote:
BTW, 1000 cycles on a GHz CPU is 1 microsecond. How much does it influence
the overall download duration for your use case? How often is SSL_Read called
in a real-life use case (e.g. downloading 1 GB on a 2/10/50/100 Mbps
connection)?
Hard to tell since I didn't find any
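Tim's back-of-the-envelope can be made concrete. With assumed numbers (1000 extra cycles ≈ 1 µs per SSL_read on a 1 GHz CPU, and 16 KiB handed over per call — both assumptions, not measurements), the per-call overhead all but disappears next to the wire time:

```python
size_bytes = 1_000_000_000           # a 1 GB download
record = 16 * 1024                   # assumed bytes per SSL_read call
calls = size_bytes // record         # number of SSL_read invocations
overhead_s = calls * 1e-6            # assumed 1 us of overhead per call

transfer_s = size_bytes * 8 / 10e6   # wire time on a 10 Mbps link
print(f"{calls} calls, {overhead_s:.3f} s overhead vs {transfer_s:.0f} s transfer")
```

Even on a 100 Mbps link (80 s of wire time), the assumed overhead stays below 0.1% of the transfer time, which supports Tim's point that cycles-per-invoke and elapsed time are very different questions.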
On Thu, 30 Apr 2015, Tim Ruehsen wrote:
Originally, Gisle talked about CPU cycles, not elapsed time.
That is quite a difference...
Thousands of cycles per invoke * many invokes = measurable elapsed time
--
/ daniel.haxx.se
Daniel Stenberg wrote:
On Thu, 30 Apr 2015, Tim Ruehsen wrote:
Originally, Gisle talked about CPU cycles, not elapsed time.
That is quite a difference...
Thousands of cycles per invoke * many invokes = measurable elapsed time
True it seems, but I've not tried SSL timings on a local net.
On Thursday, 30 April 2015, 18:45:05, Daniel Stenberg wrote:
On Thu, 30 Apr 2015, Tim Ruehsen wrote:
Originally, Gisle talked about CPU cycles, not elapsed time.
That is quite a difference...
Thousands of cycles per invoke * many invokes = measurable elapsed time
Again: That is quite
On Thursday, 30 April 2015, 12:04:18, User Goblin wrote:
The situation: I'm trying to resume a large recursive download of a site
with many files (-r -l 10 -c)
The problem: When resuming, wget issues a large number of HEAD requests
for each file that it already downloaded. This triggers
On Thu, Apr 30, 2015 at 5:41 PM, Daniel Stenberg dan...@haxx.se wrote:
HTTP/2 over plain text is annoying, with the Upgrade: header and the wasted RTT;
it is much easier and simpler over HTTPS and ALPN.
Do you suggest starting off with HTTPS first then? It seems like most
clients/browsers only plan to
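For reference, the ALPN path Daniel mentions avoids the extra round trip because the client's protocol list rides inside the TLS ClientHello. A minimal sketch with Python's ssl module (my illustration; no connection is made here — with a real socket, SSLSocket.selected_alpn_protocol() would return the server's pick after the handshake):

```python
import ssl

ctx = ssl.create_default_context()
# Offer HTTP/2 first and fall back to HTTP/1.1; the server chooses
# during the TLS handshake itself, with no Upgrade: round trip.
ctx.set_alpn_protocols(["h2", "http/1.1"])
print("client context ready, ALPN offered: h2, http/1.1")
```

This is why negotiating h2 over HTTPS is "easier and simpler" than the cleartext Upgrade: mechanism: the negotiation is folded into a handshake that has to happen anyway.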
Gisle, Ángel, Darshit, Tim,
Thank you all for the useful suggestions and links! I'll read through it
and think it over.
Hubert
Nikolay Merinov kim.roa...@gmail.com writes:
Hello,
Previously you created code for handling the simultaneous options -N and
-c with the HTTP protocol:
https://lists.gnu.org/archive/html/bug-wget/2010-07/msg00025.html
I have attached a patch with the same changes for FTP.
---
Nikolay Merinov
From
gob...@uukgoblin.net (User Goblin) writes:
The situation: I'm trying to resume a large recursive download of a site
with many files (-r -l 10 -c)
The problem: When resuming, wget issues a large number of HEAD requests
for each file that it already downloaded. This triggers the upstream
Mariusz Balewski m_balew...@tlen.pl writes:
On Wed, Apr 29, 2015 at 08:37:49PM +0200, Ángel González wrote:
On 29/04/15 15:03, m_balewski wrote:
Hi,
It's my first mail here, so I would like to welcome you all! wget
is great (I've been using it for years) and I wish to try to look into
its code and
On Thu, 30 Apr 2015, Miquel Llobet wrote:
I'm glad to see this project come to life, and I'm happy to see you choosing
to base it on Tatsuhiro's awesome nghttp2 library. We do this in the curl
project too.
I blame Giuseppe for suggesting it :-), and nghttp2 looks really good, can't
wait to
The situation: I'm trying to resume a large recursive download of a site
with many files (-r -l 10 -c)
The problem: When resuming, wget issues a large number of HEAD requests
for each file that it already downloaded. This triggers the upstream firewall,
making the download impossible.
My initial