So I've been using urlgrabber for a while in a number of projects, and I love it because it usually works great. In my most recent project I'm using it to access Google's AJAX translation API. Since I'm translating a lot of text, I keep urlgrabber a lot busier than in my previous projects.

I noticed, though, that after fetching translations for around 1000 words my Python process would run out of file descriptors, and urlgrabber (along with any other code that needed to open files) would start failing. It turns out urlgrabber was keeping a socket open for each word I translated, since keepalive is on by default. My current simple fix is to pass the close_connection=1 keyword argument so that keepalive closes its sockets, as in the sketch below.
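For reference, here's a minimal sketch of the workaround (the URL is just a placeholder standing in for the real translation request; I'm calling the module-level urlread, which goes through default_grabber):

    from urlgrabber import urlread

    # Placeholder standing in for the real AJAX translate request.
    url = 'http://ajax.googleapis.com/ajax/services/language/translate'

    # Without close_connection=1, each call leaves a keepalive socket
    # open, and the process eventually runs out of file descriptors.
    text = urlread(url, close_connection=1)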
But I'm curious why I have to do this. Isn't the point of keepalive that further requests (all of which go to the Google translation servers) should reuse the same socket rather than opening new ones? Or do I need to use urlgrabber in a different way? (Currently I'm just using urlread with the default_grabber.)

Thanks,
Andrew
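P.S. In case it's useful, here's roughly how I've been checking whether sockets actually get reused, driving urlgrabber's keepalive handler through urllib2 directly (this is Python 2 / urllib2, which is what urlgrabber itself uses; the URL is just an example, and I'm assuming I'm reading keepalive.py's open_connections() right):

    import urllib2
    from urlgrabber.keepalive import HTTPHandler

    handler = HTTPHandler()
    urllib2.install_opener(urllib2.build_opener(handler))

    for i in range(5):
        fo = urllib2.urlopen('http://www.example.com/')
        fo.read()    # the body must be read fully before the socket can be reused
        fo.close()

    # With working keepalive this should show one connection to the
    # host, not five separate sockets.
    print handler.open_connections()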