On Nov 26, 2008, at 9:44 AM, Gili Tzabari wrote:

>
> Brian Pontarelli wrote:
>> On Nov 25, 2008, at 2:06 PM, Dhanji R. Prasanna wrote:
>>> On Tue, Nov 25, 2008 at 12:33 PM, Gili Tzabari
>>> <[EMAIL PROTECTED]> wrote:
>>>>      Amazon.com might disagree with you on this point ;)
>>> Somehow I doubt amazon.com is backed by one server process. At  
>>> least,
>>> here at Google, we haven't figured out how to do that yet.
>>
>> Nor would you ever want to. Ouch!
>
>       Are you saying you update individual servers behind a load-balancing
> front-end and "updating" involves shutting down individual web
> containers? Or are you saying something else?

Not sure how Google or Amazon do it, but at Orbitz we had banks of  
servers that we would update. Our tiers were something like:

Redlines (load balancers)
Apaches
WebLogics
Jini cluster
Backend services

When we did releases, we would push the service changes out into the  
Jini cluster early. Jini is discovery based, and I wrote some  
additional logic to provide versioning so that the new services would  
be discovered by the WebLogics and immediately start running and  
accepting requests if their versions were compatible (i.e., only minor  
or patch version changes). Obviously we didn't have double the  
hardware, so it was a rolling process during the release. You could  
start the new services on 10 boxes, ensure they were stable, and then  
start up 10 more machines while you took 10 old versions down. This  
continued until all of the new services were up and running. You had  
to start from the bottom tier and work your way up because each tier  
might depend on new logic and data from the lower tiers. Plus, each  
tier had 2-3 sub-tiers, and we would roll each of those in order. The  
key to this type of process is that the services must understand how  
to handle requests from older clients.
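To sketch the compatibility rule I mean (this is just an illustration, not the actual code we had -- all the names are made up): a client can use a discovered service when the major versions match and the service is at least as new as the client expects.

```java
// Hypothetical sketch of a minor/patch compatibility check for
// discovery-time version matching. Not the actual Orbitz/Jini logic.
final class ServiceVersion {
    private final int major, minor, patch;

    ServiceVersion(int major, int minor, int patch) {
        this.major = major;
        this.minor = minor;
        this.patch = patch;
    }

    static ServiceVersion parse(String s) {
        String[] parts = s.split("\\.");
        return new ServiceVersion(Integer.parseInt(parts[0]),
                Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
    }

    // Compatible when the major versions match and this service is at
    // least as new as the client's expected version (minor, then patch).
    boolean isCompatibleWith(ServiceVersion client) {
        return this.major == client.major
                && (this.minor > client.minor
                    || (this.minor == client.minor && this.patch >= client.patch));
    }
}
```

With a rule like that, a 2.3.1 service can serve a client built against 2.1.0, but a 3.0.0 service would be skipped by discovery until the client tier is rolled.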

Then you would roll the WebLogic servers in sequence. We had code to  
shut off bookings so that transactions weren't lost, but we never  
really figured out a good way to handle mid-stream transaction loss  
during a WebLogic roll. I'm sure it can be done, but it's pretty  
difficult. I would envision something like a distributed session using  
Coherence, so that the transaction could re-attach to a new front-end  
box after it was updated. The kicker is that if the transaction object  
was an older version and the new transaction needed some additional  
data, you might have to restart that transaction. Any time you version  
data, things get nasty. Required fields are tough to add to or remove  
from new versions of classes.
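To make the "required fields are tough" point concrete (again, just an illustration with made-up names, not our actual code): an in-flight transaction serialized by an old front end has to deserialize on a newly rolled one. Java serialization tolerates an added *optional* field -- it just reads back as null -- but a field the new release *requires* forces you to backfill a default during deserialization or abort and restart the transaction.

```java
import java.io.Serializable;

// Hypothetical sketch of evolving a serialized transaction object
// across a rolling release. All names are illustrative.
class BookingTransaction implements Serializable {
    private static final long serialVersionUID = 1L; // kept stable across releases

    private final String itinerary;  // present since v1
    private String loyaltyNumber;    // added in v2; null when read from v1 data

    BookingTransaction(String itinerary) {
        this.itinerary = itinerary;
        // loyaltyNumber deliberately left unset to mimic v1 data
    }

    String getItinerary() { return itinerary; }
    String getLoyaltyNumber() { return loyaltyNumber; }

    // Invoked by ObjectInputStream after deserialization. If the new
    // release treats loyaltyNumber as required, we either backfill a
    // default here or signal that the transaction must be restarted.
    private Object readResolve() {
        if (loyaltyNumber == null) {
            loyaltyNumber = "UNKNOWN"; // hypothetical default
        }
        return this;
    }
}
```

Backfilling a default only works when a sensible default exists; when it doesn't, restarting the transaction is about the only honest option.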

We almost never touched the Apaches or Redlines at all. The only time  
we might change something up there was when IPs or DNS entries  
changed, and that was almost never, because of how harsh those changes  
are on a system.


-bp



--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"google-guice" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-guice?hl=en
-~----------~----~----~----~------~----~------~--~---
