If this is supposed to be a scalable application, then I'd really suggest using some middleware layer. That way your front end(s) (web2py instances) can make a request that then gets distributed across multiple services.
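To make the middleware idea concrete, here's a minimal sketch using pyzmq (assumed installed). A REQ socket connected to several REP workers load-balances requests round robin among them; the `inproc://tasks` endpoint, the worker names, and the `"STOP"` shutdown message are all made up for illustration — real workers would run as separate processes or hosts and actually perform the site searches.

```python
import threading
import zmq

ctx = zmq.Context.instance()
NUM_WORKERS = 2

def worker(name):
    # Each worker is a REP socket; in a real deployment this would be a
    # separate process performing the external site search.
    sock = ctx.socket(zmq.REP)
    sock.connect("inproc://tasks")
    while True:
        url = sock.recv_string()
        if url == "STOP":          # illustrative shutdown convention
            sock.send_string("bye")
            break
        sock.send_string("%s handled %s" % (name, url))
    sock.close()

# The web2py side: one REQ socket; ZeroMQ round-robins its requests
# among all connected workers.
frontend = ctx.socket(zmq.REQ)
frontend.bind("inproc://tasks")    # inproc requires bind before connect

threads = [threading.Thread(target=worker, args=("worker-%d" % i,))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

replies = []
for url in ("http://site-a/search", "http://site-b/search"):
    frontend.send_string(url)
    replies.append(frontend.recv_string())

# Shut the workers down cleanly so the process can exit.
for _ in threads:
    frontend.send_string("STOP")
    frontend.recv_string()
for t in threads:
    t.join()
frontend.close()
print("\n".join(replies))
```

With more workers connected, requests simply fan out further with no change to the frontend code, which is what makes this pattern attractive for scaling.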
Example: the first solution that comes to mind is web2py using ZeroMQ requests to distribute tasks round robin to servers that then perform the queries and return the results. (A quick search shows the 'Cocaine Application Engine' could be a good fit for a quick trial setup.) Again, whether this extra work is warranted really depends on your own scenario: the number of requests, the expected response sizes, the size of the sites being scanned, and the location of those sites (in theory, your ZeroMQ task recipients could be located geographically near specific hosts to provide quicker site searches).

On Tuesday, February 28, 2012 10:33:58 AM UTC-6, Anthony wrote:
>
>> Case 1:
>> The user requests a search => you (web2py) dispatch all 5 (up to 20)
>> sub-external-searches, and *only* after all 5-20 sub-external-searches
>> have finished do you send the response to the user.
>>
>> Case 2:
>> The user requests a search => you (web2py) put all 5-20
>> sub-external-searches in a background task and send the user a response
>> like "Your search is being performed" (you can use JavaScript to poll
>> the server and show the final result after the background tasks have
>> finished).
>
> A couple of other options:
>
> - Similar to Case 1, but make the request to web2py via Ajax, and flash
>   the "Your search is being performed" message to the user while waiting
>   for the Ajax request to complete. Similar user experience to Case 2,
>   but without a background task.
> - Assuming you don't really need to process any of the results on the
>   server (i.e., to store in the db, etc.), you might consider doing the
>   whole thing from the browser in JavaScript (i.e., have the browser
>   directly fetch the URLs via Ajax and assemble the results using
>   JavaScript).
>
> Also, check out
> http://stackoverflow.com/questions/3490173/how-can-i-speed-up-fetching-pages-with-urllib2-in-python.
> The Requests library also does async fetching using gevent:
> http://docs.python-requests.org/en/v0.10.6/user/advanced/#asynchronous-requests
>
> Anthony
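The advice in that Stack Overflow thread boils down to fetching the sub-searches concurrently rather than one after another. A minimal sketch using the stdlib's ThreadPoolExecutor (the URLs and the `fetch()` stub are made up — real code would call urllib or Requests instead of sleeping):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a real HTTP request, e.g.:
    #   urllib.request.urlopen(url).read()
    time.sleep(0.1)                        # simulate network latency
    return "results from %s" % url

# Hypothetical sub-search endpoints.
urls = ["http://engine-%d.example/search" % i for i in range(5)]

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    # All five fetches run in parallel, so total wall time is roughly
    # one request's latency instead of five.
    results = list(pool.map(fetch, urls))
elapsed = time.time() - start
print("fetched %d pages in %.2fs" % (len(results), elapsed))
```

This is the same overlapping-I/O idea that gevent-based async Requests gives you, just expressed with threads from the standard library.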

