[EMAIL PROTECTED] writes:
> > One thing I don't understand: why do you optimize for size? Doesn't
> > it almost always make sense to optimize for speed instead?
>
> Because I like small and sleek executables :-)
No comment.
> Are there any processor-intensive bits in wget ? Most of the time
> it'll wait for the "Internet" anyway.
That's what I thought too. But it's patently false for downloads of
large sites -- for each page, Wget has to extract all the links and
perform a number of operations on each one. When Wget used lists
instead of hash tables (as 1.6 still does), performance would degrade
noticeably after some time.
> BTW, compiling with DEBUG_MALLOC reveals three memory leaks:
> 0x13830432: mswindows.c:72 <- *exec_name = xstrdup (*exec_name); in
> windows_main_junk
> 0x13830496: mswindows.c:168 <- wspathsave = (char*) xmalloc (strlen
> (buffer) + 1); in ws_mypath
Offhand, I can't say whether these two are real leaks.
> 0x13830848: utils.c:1525 <- (struct wget_timer *)xmalloc (sizeof
> (struct wget_timer));
I think this is the timer allocated in show_progress. It's allocated
only once, so it's not really a leak.