Hey, I have some experience building a "real-time collaborative" feature for my app. I looked at solutions like Comet, etc., but in the end I went with what was simplest to implement right away, i.e. basic short polling: the client makes a GET request to a page, gets the response, waits 1000 ms, and queries again. I am not long polling or anything like that - it is a normal GET request, made every few seconds. I do not have, nor do I expect to have, high server loads (no more than 10-15 simultaneous users, in my case), so this simple technique has held up remarkably well and I've faced no problems with it, much to my surprise :). Some replies inline..
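
(In case it's useful, that loop is roughly the following - just a sketch in TypeScript, with a hypothetical /updates URL and a placeholder render function standing in for the real ones; fetch is used here only for brevity, XMLHttpRequest or jQuery's $.ajax works the same way:)

    // Minimal short-polling loop: GET, render, wait, repeat.
    const POLL_INTERVAL_MS = 1000;

    function renderUpdates(data: unknown): void {
      // placeholder - update the page with whatever came back
      console.log(data);
    }

    function poll(): void {
      fetch("/updates")                                   // hypothetical endpoint returning JSON
        .then((response) => response.json())
        .then((data) => renderUpdates(data))
        .catch((err) => console.error("poll failed", err))
        .finally(() => {
          // schedule the next request only after this one finishes,
          // so slow responses don't pile up
          setTimeout(poll, POLL_INTERVAL_MS);
        });
    }

    poll();
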
On Sun, May 22, 2011 at 7:07 PM, Michael <[email protected]> wrote:
> I have not done any measurements.
>
> The advice on server load is entirely based on commentary on various
> websites. I suppose, as I set this thing up, a test is a good idea.
>
> The likely use case here is that there will be a few thousand users at any
> one time who will need updates every five minutes or so. According to what
> I have read, if I use a technique that holds a normal Apache server
> connection open for each of those connections it will bog the server down
> badly.

Don't long-poll. It doesn't seem like you need to. From the thread, my impression is that your data is real-time-ish, but probably doesn't require the same kind of immediacy as, say, Google Docs or other collaborative editors. In that case, having the client make a normal request every 10 seconds or so (or at whatever frequency you need updates), fetch the latest data for the boats, and render it may be the way to go.

And it looks to me like you will be serving the same data (or very similar) to all the users at any particular time. Hence, you should definitely not be hitting the database for every request - ideally, you would make one database query every few seconds, store the result in memcache or some other cache, and when user requests come in, just pull the data from the cache. That should speed up how quickly your server can respond to requests and reduce its load.

In the end, I think implementing something you understand and then testing it thoroughly is a better idea than implementing something like Comet and hoping it will take care of all problems :)

> In fact, for the basic data upload, I am using the techniques you describe.
> This is a matter of good design when more than one visitor is expected!
> The concern I have is multiple long-polling AJAX calls.

Any good reason why you seem to be fixated on "long-polling" rather than regular short-polling? From what I understand, the idea with long-polling is for the client to keep the connection to the server open until the server responds with something, then close and re-open the connection. This is going to be hugely expensive on the server, since it has to keep all of those connections open simultaneously. It *may* work out with another webserver, but with Apache I think it is definitely going to die.

Have you tested, for your case, just making simple, normal GET or POST AJAX requests to some page every few seconds (at whatever frequency you want updates)? I would really explore other techniques only if it were certain that the simple way of doing things does not work.

Anyway, just my two cents - hope it works out for you.

-Sanjay
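
P.S. A rough sketch of the caching idea above, in Node/TypeScript just for illustration (fetchBoatsFromDatabase() is a made-up stand-in for your real query; the same pattern applies with memcached in front of whatever server stack you actually run):

    import * as http from "http";

    const REFRESH_INTERVAL_MS = 5000;   // hit the database at most every 5 seconds
    let cachedBoats = "[]";             // JSON string served to every client
    let lastRefresh = 0;

    // Stand-in for the real database query.
    async function fetchBoatsFromDatabase(): Promise<object[]> {
      return [];
    }

    async function refreshCache(): Promise<void> {
      cachedBoats = JSON.stringify(await fetchBoatsFromDatabase());
      lastRefresh = Date.now();
    }

    const server = http.createServer(async (_req, res) => {
      // refresh at most once per interval, no matter how many clients are polling
      if (Date.now() - lastRefresh > REFRESH_INTERVAL_MS) {
        await refreshCache();
      }
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(cachedBoats);
    });

    server.listen(8080);

Every polling client gets the cached JSON, so the database sees one query per interval instead of one per request.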
