I am replying to both posters, trying to consolidate both ideas.

> 1/ please post a *new* message when writing to the list.

Sorry, I just got distracted after answering some people's problems on
the list.

> 2/ What I've seen a lot of people (myself included) do: develop your app
> on your test/dev machine; build it into a WAR file; push the WAR out to
> the production servers at some scheduled time and restart/reload Tomcat.
Well, that is doable and it is certainly not difficult. Let me restate it
the way I am thinking about it:

> 2.1_ develop your app on your test/dev machine

which could be a CVS-based one, but I think the synching should be kept
separate from the CVS/dev setup . . .

> 2.2_ push the WAR out to the production servers . . .

the 'pushing' part, or better said the 'synchronization' of all servers,
should be atomic and automatic, based on:

2.2.1_ a kind of synchronization protocol,

2.2.2_ that knows the location of the other machines and that they have all
been time-synched,

2.2.3_ their latest tree-like 'signature' structure for the data in:

2.2.3.1_ databases, down to the record level ('creation' and 'last updated'
time stamps must be kept for each record, which is always good anyway when
you need (and you always do) optimistic locking, concurrent updates, etc.;
'mirror/rsync' works for file systems only, right?). Separating DB updates
from webapp ones is also good because in DB-driven sites most updates are
made to the data . . .

2.2.3.2_ and the code, down to the classes' MD5 signatures (JARs are way too
coarse for this; usually you just change a class or a web.xml file, not the
whole webapp). A sketch of such a signature manifest follows below.

> at some scheduled time

I don't quite like the idea of a 'scheduled time'; I would rather go with
pushed 'landmark' updates, or maybe offer both as options. Also, automation
always plays into the hands of DoS attacks; I think updating a live site
needs some hot blood and bony skulls backing it and being aware of it.

2.2.4_ > restart/reload Tomcat

I don't like the idea of having to restart TC on a production server, at
least not as part of the replication strategy.

I would rather go with a backend "staging server" that would keep a copy of
the latest synced 'site images'. This is where all updates are made prior
to 'restarting TC', and this backend "staging server" is also the one
brokering all:

2.2.4.1_ HTTP 404-like errors

2.2.4.2_ and exceptions

with customized redirections, searches, etc.; a sketch of this brokering
follows below. There could also be 'master' staging servers (in case many
people work concurrently) and slave/replicated ones.

This backend server would also be connected to the same DB that the front
ends connect to.
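
As a rough illustration of how the front ends could hand 2.2.4.1/2.2.4.2
off to that broker (assuming a Servlet 3.0+ container; the filter and its
'brokerUrl' parameter are names I made up):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class ErrorBrokerFilter implements Filter {

        private String brokerUrl;  // the staging server's broker endpoint

        public void init(FilterConfig cfg) {
            brokerUrl = cfg.getInitParameter("brokerUrl");
        }

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            try {
                chain.doFilter(req, res);
            } catch (Exception e) {
                // 2.2.4.2: hand uncaught exceptions to the staging server
                response.sendRedirect(brokerUrl + "?type=exception&uri="
                                      + request.getRequestURI());
                return;
            }
            // 2.2.4.1: HTTP 404-like errors (the status code is readable
            // from the response in Servlet 3.0+)
            if (!response.isCommitted()
                    && response.getStatus() == HttpServletResponse.SC_NOT_FOUND) {
                response.sendRedirect(brokerUrl + "?type=404&uri="
                                      + request.getRequestURI());
            }
        }

        public void destroy() {}
    }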

2.3_ Once these tree-like 'signatures' of all backend servers are the same,
so we know that all copies of the data and code are OK, the front-end
servers would be updated by either:

2.3.1_ 'restarting' the front-end instances (that would get their data feeds
from the same backend directory structure), or

2.3.2_ CD-ROMs could be burned

2.3.3_ classes could be read/loaded from a DB (see the sketch below) . . .
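
For 2.3.3, a minimal sketch of what loading classes from the shared DB
could look like (the 'class_images' table and its columns are made up for
illustration):

    import java.sql.*;

    public class DbClassLoader extends ClassLoader {

        private final String jdbcUrl;

        public DbClassLoader(ClassLoader parent, String jdbcUrl) {
            super(parent);
            this.jdbcUrl = jdbcUrl;
        }

        // Pull the class bytes out of the DB and define the class from them.
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            String sql = "SELECT bytecode FROM class_images WHERE class_name = ?";
            try (Connection con = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        throw new ClassNotFoundException(name);
                    }
                    byte[] bytes = rs.getBytes(1);
                    return defineClass(name, bytes, 0, bytes.length);
                }
            } catch (SQLException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }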

I think this is good also because even if the updates are automatic, the
'committed' ones are not, and things can still be changed/fine-tuned prior
to committing an update. Basically, 'deltas' will be visible to all mirror
sites' admins, who can check them and decide what should be committed or
not . . .
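
Those 'deltas' could be as simple as a comparison of two of the signature
manifests from 2.2.3, something like:

    import java.util.*;

    public class ManifestDiff {

        // List what changed between the local and the remote manifest,
        // for an admin to review before committing the update.
        public static List<String> diff(SortedMap<String, String> local,
                                        SortedMap<String, String> remote) {
            List<String> deltas = new ArrayList<>();
            Set<String> all = new TreeSet<>(local.keySet());
            all.addAll(remote.keySet());
            for (String key : all) {
                String a = local.get(key);
                String b = remote.get(key);
                if (a == null)          deltas.add("added:   " + key);
                else if (b == null)     deltas.add("removed: " + key);
                else if (!a.equals(b))  deltas.add("changed: " + key);
            }
            return deltas;
        }
    }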

> The push is OS-specific; in Unix-style environments, I've used everything
> from a scripted scp or rsync to a manual FTP.

I was kind of thinking about making it happen as part of a synching
protocol that does not need an extra port or anything; it would be an
HTTP/SSL (partially or totally) communication, with data transfers and all,
between all the backend staging servers. A sketch follows below.
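
A minimal sketch of that, reusing the HTTPS port the servers already
listen on (the '/sync/manifest' path is an assumption of mine, not a
defined protocol):

    import java.io.*;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class ManifestFetcher {

        // Pull a peer staging server's signature manifest over plain
        // HTTPS, so no extra port or transport is needed.
        public static String fetchManifest(String peerHost) throws IOException {
            URL url = new URL("https://" + peerHost + "/sync/manifest");
            HttpsURLConnection con = (HttpsURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"))) {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
                return sb.toString();
            } finally {
                con.disconnect();
            }
        }
    }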

> Does this answer your question, or did I misunderstand it?

I think we understood each other well. We are just looking at the same
problem from different perspectives and with different scopes.



