Can you run multiple servers with this application stack behind a load balancer or floating IP, so that while one is under maintenance, the other can provide the web service?
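One wrinkle with that: to the balancer, a backend isn't really "up" until all of its app pools are primed, so the health check should probe the actual client URIs rather than just the port. A minimal sketch of such a check in Python, usable wherever your balancer supports an external check command (the host argument and client list are placeholders; you'd pull the list from your database):

    #!/usr/bin/env python3
    # External health check: exit 0 only if every client site on the
    # given backend answers, i.e. the box is fully primed.
    import sys
    import urllib.request

    HOST = sys.argv[1]                # backend to probe
    CLIENTS = ["clienta", "clientb"]  # placeholder; pull from your DB

    def site_up(host, client, timeout=5):
        try:
            urllib.request.urlopen("http://%s/%s" % (host, client),
                                   timeout=timeout)
            return True
        except Exception:
            return False

    # A nonzero exit keeps the backend out of rotation until priming finishes.
    sys.exit(0 if all(site_up(HOST, c) for c in CLIENTS) else 1)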
Not sure about commercial options, but Nagios has event handlers that let you run a command when a service changes state. You could likely configure one to rerun your priming routine until the service is back online; a rough sketch follows below the quoted message.

--Dan

> On Mar 16, 2014, at 5:46 AM, "Chase Hoffman" <[email protected]> wrote:
>
> At $WORK we have $WEBAPP, which is somewhat poorly constructed.
>
> It's a single-tenant app, so we have $URL/$CLIENTNAME, where each
> $CLIENTNAME is a separate app pool within IIS.
>
> The dev team decided that they didn't need to optimize their SQL queries
> on our business analytics app if they put everything in memory. So rather
> than use any previously created RAMdisk solution, or something like
> memcached, they have created an abomination wherein each app pool eats
> about 10-15GB of memory and requires priming (wherein data is sucked into
> RAM from the SQL server) before the site is accessible. The problem is
> that the priming process is CPU intensive. So when we do server
> maintenance (or the server crashes, etc.), all the sites on the server
> (generally about 50 sites per server; the application uses 3.84TB of RAM
> in aggregate) have to be reprimed. And because priming is so CPU
> intensive, once about 4 sites are priming, no more can start or the whole
> process grinds to a halt.
>
> We also need to monitor each $CLIENTNAME URI for uptime. That's fine
> during the normal course of business, but the monitoring is useless until
> all the sites are primed (which we currently handle with a very basic
> script that does a curl, then waits 300 seconds, so that no server is
> overloaded).
>
> Ideally, I guess I'm looking for some sort of system wherein I can a)
> pull URIs from a database, b) monitor up/downtime, and c) introduce some
> sort of priming logic after maintenance.
>
> Is there a commercial product out there that would do this, or are we
> going to have to roll our own?
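Here's the sketch I mentioned: a minimal event handler in Python. It assumes a Nagios command definition that passes $SERVICESTATE$ $SERVICESTATETYPE$ $HOSTADDRESS$ $SERVICEDESC$ as arguments and that the service description is the client name; the lock directory and the long timeout are placeholders, and the 4-prime ceiling comes from your description:

    #!/usr/bin/env python3
    # Sketch of a Nagios event handler that reprimes a down site, capped
    # at a few concurrent primes per server (tracked with lock files).
    import glob
    import os
    import sys
    import urllib.request

    MAX_PRIMING = 4              # ~4-concurrent-primes ceiling from the thread
    LOCK_DIR = "/var/run/prime"  # placeholder path

    state, state_type, host, client = sys.argv[1:5]

    # Only act on a confirmed failure; SOFT states are still being retried.
    if state != "CRITICAL" or state_type != "HARD":
        sys.exit(0)

    # Count primes already in flight on this server via lock files.
    if len(glob.glob(os.path.join(LOCK_DIR, host + ".*"))) >= MAX_PRIMING:
        sys.exit(0)  # over the cap; retry on a later handler run

    lock = os.path.join(LOCK_DIR, "%s.%s" % (host, client))
    open(lock, "w").close()
    try:
        # The first request is what triggers the prime, hence the long timeout.
        urllib.request.urlopen("http://%s/%s" % (host, client), timeout=1800)
    finally:
        os.remove(lock)

One caveat: Nagios generally runs event handlers on soft state changes and on the transition into a hard state, not on every check, so sites deferred by the cap may need an external retry, such as the loop sketched next.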
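And for the roll-your-own option, the core loop covering your a), b), and c) is small. A sketch, assuming a sites table with a uri column (the sqlite backend, schema, and timeouts are all placeholders; the 4-at-a-time cap and 300-second cadence come from your message):

    #!/usr/bin/env python3
    # Roll-your-own skeleton: (a) pull URIs from a database, (b) check
    # up/down, (c) prime whatever is down, a few at a time.
    import sqlite3
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    MAX_CONCURRENT = 4  # priming ceiling; note this cap is global, so
                        # group URIs by host if you need it per server

    def load_uris(db="sites.db"):  # hypothetical schema: sites(uri)
        with sqlite3.connect(db) as conn:
            return [row[0] for row in conn.execute("SELECT uri FROM sites")]

    def is_up(uri, timeout=10):
        try:
            urllib.request.urlopen(uri, timeout=timeout)
            return True
        except Exception:
            return False

    def prime(uri):
        try:
            # The first hit triggers the prime; give it a long timeout.
            urllib.request.urlopen(uri, timeout=1800)
        except Exception:
            pass  # still down next pass; it will be retried

    while True:
        down = [u for u in load_uris() if not is_up(u)]
        # The executor keeps at most MAX_CONCURRENT primes in flight.
        with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
            list(pool.map(prime, down))
        time.sleep(300)  # same cadence as the existing curl/wait-300 script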
