Michael <[EMAIL PROTECTED]> writes:

> I often run wget on both a Fedora box and a Debian box. I let it
> collect data for days at a time. On Fedora it does this without
> seeming to use a large amount of memory while on Debian it quickly
> uses up as much swap memory as it can get ahold of. Is there any
> reason why this is so or any way I could control this?
There should be no difference between how Wget works on Fedora and
Debian, except that they may be shipping different versions. It would
also help if you provided more information about what you're doing with
Wget -- what kind of data you are collecting, how many files, which
version of Wget you are using, and so on.

Wget's recursive download must keep track of the URLs it has retrieved
and the URLs it has yet to retrieve, as well as a mapping between URLs
and file names. Therefore the amount of memory used should be roughly
proportional to the number of downloaded URLs.

> It'd be nice if this 'bug' was fixed so that wget could be limited
> to a certain amount of memory and told to keep its working data on
> the file system. Running wget bogs my Debian computers down a lot,
> mostly due to its memory usage.

I tested Wget with a site consisting of ~30,000 HTML files and the
memory usage was acceptable given the size of the dataset -- under 10MB
RSS. If Wget is bogging down your machine, there must be a memory leak
somewhere.
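To illustrate why memory grows with the number of URLs, here is a minimal
sketch of that kind of recursive-download bookkeeping -- this is not
Wget's actual implementation (Wget is written in C), and `extract_links`
and `download` are hypothetical callbacks standing in for the real fetch
and parse steps:

```python
from collections import deque

def crawl(start_url, extract_links, download):
    """Sketch of recursive-download bookkeeping.

    extract_links(url) returns the URLs linked from a page;
    download(url) fetches it and returns the local file name.
    """
    visited = set()              # every URL already retrieved
    queue = deque([start_url])   # URLs yet to be retrieved
    url_to_file = {}             # mapping from URL to local file name

    while queue:
        url = queue.popleft()
        if url in visited:
            continue             # never fetch the same URL twice
        visited.add(url)
        url_to_file[url] = download(url)
        for link in extract_links(url):
            if link not in visited:
                queue.append(link)
    return url_to_file
```

The `visited` set and the `url_to_file` map each hold one entry per
downloaded URL, so resident memory is roughly proportional to the number
of URLs retrieved -- which matches the behavior described above. Memory
that keeps growing well beyond that would point to a leak.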
