Hi,
I have yet another issue with the PARQ implementation.
The problem I observe is the following (the exact numbers are only rough
estimates):

A while ago, a remote peer was downloading some files from my server. He
was requesting about 100 different files at a time, so he was given about
100 different PARQ IDs. Of course, this is correct.

But then, after he had downloaded some of the files, the remote client
disappeared and never came back online, leaving more than 90 entries
behind in my PARQ queue. After a while, my host started trying QUEUE
callbacks to that peer, none of which succeeded. And now, WEEKS later, my
host is still attempting QUEUE callbacks to that specific host.

I assume this is because, when a callback finally fails for good, only
that particular PARQ entry is deleted. Wouldn't it make much more sense
to delete all PARQ entries of a given host once the last retry of any
callback to that host has failed? Or will I have to watch my servent's
futile callback attempts until the end of the universe?
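
Just to illustrate what I mean, here is a rough sketch in C. All the
names in it (parq_entry, parq_purge_host, on_final_callback_failure) are
made up by me and are not the real PARQ internals -- it is only meant to
show the "drop everything for that host" idea, not how parq.c actually
looks:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct parq_entry {
        char host[64];              /* remote host the entry belongs to */
        unsigned id;                /* PARQ ID handed out for the request */
        struct parq_entry *next;
    };

    /* Remove every entry queued for `host`, return how many were dropped. */
    static unsigned
    parq_purge_host(struct parq_entry **queue, const char *host)
    {
        unsigned dropped = 0;
        struct parq_entry **pp = queue;

        while (*pp != NULL) {
            struct parq_entry *e = *pp;
            if (strcmp(e->host, host) == 0) {
                *pp = e->next;      /* unlink and free the stale entry */
                free(e);
                dropped++;
            } else {
                pp = &e->next;
            }
        }
        return dropped;
    }

    /* Hypothetical hook: the last retry of a QUEUE callback has failed. */
    static void
    on_final_callback_failure(struct parq_entry **queue, const char *host)
    {
        unsigned n = parq_purge_host(queue, host);
        printf("dropped %u stale PARQ entries for %s\n", n, host);
    }

    int
    main(void)
    {
        struct parq_entry *queue = NULL;
        /* Fake queue: three entries for the vanished peer, one for another. */
        const char *hosts[] = { "10.0.0.1", "10.0.0.1", "192.168.1.5", "10.0.0.1" };
        unsigned i;

        for (i = 0; i < 4; i++) {
            struct parq_entry *e = malloc(sizeof *e);
            strcpy(e->host, hosts[i]);
            e->id = i;
            e->next = queue;
            queue = e;
        }

        on_final_callback_failure(&queue, "10.0.0.1");
        return 0;
    }

The point is simply that the purge walks the whole queue keyed on the
host, instead of deleting only the single entry whose callback failed.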

BTW, since I have recently brought up a lot of PARQ issues, I just want
to make clear that I'm not doing so to annoy you ;-). It's just that I
somewhat 'specialize' in uploading, so I am a happy and conscientious
tester of GTKG's upload functionality.

Greetz,
Hauke Hachmann

