Brendan Lally wrote:
On Friday 10 June 2005 06:27, Mark Wedel wrote:

 A few more notes/thoughts:

 For the server, switching to tcp is perhaps a good thing.  What I actually
think would be best is for there to be a small helper program that the server
executes and then talks to through a named socket (or perhaps just a pipe).
The server could send the helper program things like number of players and any
other dynamic data (for the static data, the helper program could just read
the settings file).


This seems like a nice idea. It has the added benefit of making it possible to send data about servers that are down (assuming the helper keeps running, and new servers speak to it...) as well as those that are up. If the server stops sending data to the subprocess, the helper could notify the metaserver that the server is unresponsive, which is probably more useful than having a server suddenly disappear from the list.

Yes - that depends on implementation. Ideally, a down server is automatically restarted, but I suppose there are cases where that doesn't happen.

Since server updates may be sporadic, presumably the metaservers won't drop the listing for a server until some amount of time passes (30 minutes or something). Note also that the current metaserver tracks when it last got an update from a server, and does provide that information to the client (I haven't heard from this server in xyz seconds).
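
That bookkeeping is basically just a timestamp per entry - a rough sketch (the struct, names and record format here are made up for illustration) of what the metaserver side of it looks like:

/* Sketch of per-server bookkeeping on the metaserver side.
 * Names (server_entry, LISTING_TIMEOUT) are made up for illustration. */
#include <stdio.h>
#include <time.h>

#define LISTING_TIMEOUT (30 * 60)   /* drop a server after 30 minutes of silence */

typedef struct server_entry {
    char hostname[64];
    int  num_players;
    time_t last_update;             /* set whenever an update arrives */
} server_entry;

/* Called when building the listing sent to clients. */
static void report_entry(const server_entry *e, FILE *out) {
    time_t now = time(NULL);

    if (now - e->last_update > LISTING_TIMEOUT)
        return;                     /* stale - don't list it at all */

    /* last field is the "haven't heard from this server in N seconds" value */
    fprintf(out, "%s|%d|%ld\n", e->hostname, e->num_players,
            (long)(now - e->last_update));
}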


For a well-configured web server, something like mod_php or zend or similar will be running anyway, so the scripts will be acting like compiled code in many respects. It will still be slower than a well-written independent program, but then that is the price that is paid for having a web server handle all of the availability stuff.

Right - I'm not sure about the cost of doing it web-server based vs. as an independent program. For the number of crossfire servers we're talking about, it's probably not a big deal in any case - although with it being web-server based, you do have to be concerned with things like file locking, which may not scale well with a large number of updates happening - this is mostly because it has to do all updates through a file. A standalone metaserver has the advantage that it only has to do some locking on the data structures, and only periodically needs to write them out to a file (say every few minutes for backup purposes). The php one has to read/write the file every time it gets an update. As said, for the number of servers we have, it may not be a big deal.
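
To make the locking concern concrete, every single update in the file-backed approach ends up doing roughly the following (a sketch in C rather than php, with the filename and record format made up):

/* Sketch of the per-update cost of a file-backed metaserver:
 * every update has to lock, read, rewrite and unlock the whole file.
 * The filename and record format are made up for illustration. */
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

static int update_listing_file(const char *host, int players) {
    FILE *fp = fopen("/var/tmp/metaserver.db", "r+");
    if (!fp)
        return -1;

    flock(fileno(fp), LOCK_EX);     /* blocks every other update while we work */

    /* A real implementation would read every record here and rewrite the
     * whole file with this host's entry replaced; appending is just a
     * placeholder to keep the sketch short. */
    fseek(fp, 0, SEEK_END);
    fprintf(fp, "%s|%d\n", host, players);

    fflush(fp);
    flock(fileno(fp), LOCK_UN);
    fclose(fp);
    return 0;
}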


By comparison, however the final system is implemented, the client /will/ connect to a server and parse some information received from it. However that server is configured, libcurl can pretty much cope, so writing a fairly generic parser attached to libcurl is a nice base to begin from; should libcurl be disliked as a dependency, then it is simply a matter of adding the appropriate socket code later.

I never like adding new dependencies if it can be avoided. As a data point, my system did not have libcurl installed on it.
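
For reference, the libcurl side of the generic fetch-and-parse base Brendan describes would only be something like the following (just a sketch - the callback and error handling are simplified), and it is also the part that would get swapped for plain socket code if we skip the dependency:

/* Sketch of fetching the metaserver listing with libcurl and handing the
 * body to a parser.  The ?output=raw URL is the hypothetical one discussed
 * below; the parsing is left as a stub. */
#include <stdio.h>
#include <curl/curl.h>

/* libcurl hands us the body in chunks; headers are already stripped. */
static size_t body_cb(char *ptr, size_t size, size_t nmemb, void *userdata) {
    (void)userdata;
    fwrite(ptr, size, nmemb, stdout);   /* a real client would parse server entries here */
    return size * nmemb;
}

int fetch_listing(void) {
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return -1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://myhost/metaserver.php?output=raw");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, body_cb);
    res = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return (res == CURLE_OK) ? 0 : -1;
}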

In my ideal world, the metaservers should be able to provide information in both 'pretty' html and something raw. One could envision this by something like:

http://myhost/metaserver.php
  giving nice html output, and something like:

http://myhost/metaserver.php?output=raw

providing really raw output (something like we have now). I think, however, that the client would still have to toss the http headers, but that shouldn't be too bad.
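
Tossing the headers without libcurl isn't much code either - roughly this (a sketch only; hostname handling and chunk-boundary cases are simplified), since the client just skips everything up to the blank line that ends the headers:

/* Sketch of a libcurl-free fetch: send a bare HTTP GET, then skip
 * everything up to the blank line ("\r\n\r\n") that ends the headers. */
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int fetch_raw_listing(const char *host) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    char req[256];
    snprintf(req, sizeof(req),
             "GET /metaserver.php?output=raw HTTP/1.0\r\nHost: %s\r\n\r\n", host);
    write(fd, req, strlen(req));

    char buf[4096];
    ssize_t n;
    int past_headers = 0;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        char *body = buf;
        if (!past_headers) {
            char *p = strstr(buf, "\r\n\r\n");   /* blank line ends the headers */
            if (!p)
                continue;                        /* still inside the headers */
            body = p + 4;
            past_headers = 1;
        }
        fputs(body, stdout);    /* a real client would parse server entries here */
    }
    close(fd);
    return 0;
}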



The watchdog, when it is compiled in, has the server send UDP packets to a local socket. AFAICT it doesn't really matter too much /what/ it sends, so it might as well send the data that the metaserver will use; in that case the program you describe would end up looking similar to crossfire/utils/crossfire-loop.c (though maybe in perl?)

IMO, the metaserver notification helper has to be started automatically by the server, and not be an external entity started via script (like the watchdog one uses). This is simply ease of use - otherwise the problem is that someone has to start it by hand, or uses their own monitoring script and doesn't send metaserver updates when they should.

Also, it is probably nice for the metaserver updater to be connected via tcp or a pipe to the server, so each can know when the other dies. If the helper sees the server die, it can send a last update to the metaserver informing it that the server just died, and then exit. If the server sees the helper die for any reason, it can start up another copy. The problem with the udp notification and watchdog is that the watchdog could be dead and the server would never know.
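
A rough sketch of that arrangement (function names made up, error handling trimmed): the server forks the helper across a socketpair, so either side reading EOF knows the other one died:

/* Sketch of the server starting a metaserver helper over a socketpair.
 * Either end noticing EOF means the other side died.  Function names
 * (helper_main, send_final_update) are made up. */
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>

static void helper_main(int fd) {
    char buf[512];
    ssize_t n;

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        /* forward the update (player count etc.) to the metaserver here */
    }
    /* n <= 0: the server went away - tell the metaserver it is down, then exit */
    /* send_final_update("server down"); */
    _exit(0);
}

int start_helper(int *server_fd) {
    int sv[2];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {             /* child: the helper */
        close(sv[0]);
        helper_main(sv[1]);
    }
    close(sv[1]);
    *server_fd = sv[0];         /* server writes updates here; EOF/SIGPIPE on
                                   this fd means the helper died and should be
                                   restarted */
    return pid;
}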

The helper program also needs to have some basic security - making sure the connection is coming from the local host and not something remote (don't want some remote host connecting to the helper and forging information).
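
If the helper listens on tcp, that check is just a matter of refusing anything that isn't coming from the loopback address when accepting (a sketch; binding the listening socket to 127.0.0.1 in the first place is probably even cleaner):

/* Sketch of refusing non-local connections on the helper's listening socket. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <unistd.h>
#include <sys/socket.h>

int accept_local_only(int listen_fd) {
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);

    int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
    if (fd < 0)
        return -1;

    /* only the local host may feed us updates */
    if (peer.sin_addr.s_addr != htonl(INADDR_LOOPBACK)) {
        close(fd);
        return -1;
    }
    return fd;
}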

The other nice bit about the helper and server talking via tcp is the potential, perhaps in the future, for the helper to talk back to the server with bits of information. I'm not sure what info that would be, but it would still be nice to be able to do it.



_______________________________________________
crossfire mailing list
crossfire@metalforge.org
http://mailman.metalforge.org/mailman/listinfo/crossfire
