devj wrote:
> Hi,
> I am making a web crawler using Python. To avoid duplication of URLs, I
> have to maintain lists of already-downloaded and to-be-downloaded URLs,
> of which the latter grows exponentially, resulting in a MemoryError
> exception. What are the possible ways to avoid this?
>
Get more RAM, store the list on your hard drive instead of keeping it 
all in memory, etc.
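
For example, here's a rough, untested sketch of keeping the visited set 
on disk using the stdlib sqlite3 module (the database filename, table, 
and function names are just made up for illustration):

    import sqlite3

    conn = sqlite3.connect("crawler.db")
    conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

    def mark_visited(url):
        # The PRIMARY KEY makes re-inserting a duplicate URL a no-op.
        with conn:
            conn.execute("INSERT OR IGNORE INTO visited (url) VALUES (?)",
                         (url,))

    def already_visited(url):
        # Membership test without holding every URL in RAM.
        row = conn.execute("SELECT 1 FROM visited WHERE url = ?",
                           (url,)).fetchone()
        return row is not None

Something along those lines also fixes the duplication problem for free, 
since the database enforces uniqueness for you.
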
Why are you trying to do this?  Are you sure you can't use an existing 
tool for this, such as wget?
-Luke
