Title: RE: [Hardhats-members] GTM database replication at remote site
Bhaskar;
You are correct, BCMA Backup is a good
start. It is a full implementation of VistA, kept up to date with a stream
of HL7 messages from the primary server. Each of the BCMA backup machines
only has to process these messages until the primary server connection
fails. Then these machines come up as isolated servers and should be able
to support a ward or a couple of wards, stacking up transactions as HL7
traffic sitting in queue until communications with the primary server can be
restored. The HL7 interface is bi-directional. In this system, there
may be occasion for drift of these shadow servers (the BCMA Backup
configuration), but this should be correctable by a fresh extraction and
file transfer to a local depot for further distribution to the other BCMA Backup
machines around the hospital. The journal is kept while the new
configuration is built and pushed out to the distributed machines (while they
are still in service). The swap-over can be relatively fast if there is at
least 3 times as much disk space as needed to hold the configuration (the
current configuration, the packed tar file, and space to unpack the files while
the current structure is still in service). Once the files are
positioned and unpacked, the GT.M environment is brought down, the files are
renamed, and the GT.M configuration is brought back up (in less than 5 minutes
of actual down time). The journals can then be sent and replayed against the
new configuration.
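The swap-over above can be sketched as a runbook fragment. This is a hypothetical illustration only: the directory names are invented, and the exact MUPIP commands and qualifiers should be verified against the GT.M Administration and Operations Guide for the version in use.

```shell
# Hypothetical paths; assumes ~3x the configuration's size is free on disk.
cd /srv/vista                                   # invented location
tar -xzf /depot/bcma-refresh.tar.gz -C staging  # unpack beside the live copy

mupip rundown -region "*"    # bring the GT.M environment down cleanly
mv current previous          # the "rename" step of the swap-over
mv staging current
# ...restart GT.M processes here; total downtime should stay under 5 minutes.

# Replay the journals kept while the new configuration was built and shipped.
mupip journal -recover -forward current/journals/vista.mjl
```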
The situation is that there are
far more reads than writes in the database. The reads do not have to be
copied to the shadow servers; only writes and deletes are sent via
HL7.
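As a toy illustration of that traffic pattern, here is a sketch with plain Python dictionaries standing in for the database and the HL7 queue. None of this is VistA code; the key names are invented.

```python
# Toy illustration: only writes and deletes are shipped to a shadow copy;
# reads are served locally and generate no replication traffic.
def apply_update(db, op):
    """Apply one shipped update ('set' or 'kill') to a database dict."""
    kind, key, *value = op
    if kind == "set":
        db[key] = value[0]
    elif kind == "kill":
        db.pop(key, None)
    return db

primary = {}
queue = []          # stands in for the HL7 message queue

# Writes and deletes on the primary are queued for the shadow...
for op in [("set", "ORDER^1", "aspirin"), ("set", "ORDER^2", "warfarin"),
           ("kill", "ORDER^1")]:
    apply_update(primary, op)
    queue.append(op)

# ...while reads touch only the local copy and queue nothing.
_ = primary.get("ORDER^2")

shadow = {}
for op in queue:    # replay the queued traffic on the shadow
    apply_update(shadow, op)

print(shadow == primary)  # True: the shadow converges on the primary's state
```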
----- Original Message -----
From: Bhaskar, Kasi
To: hardhats-members@lists.sourceforge.net
Sent: Tuesday, February 15, 2005 5:05 AM
Subject: RE: [Hardhats-members] GTM database replication at remote site
With GT.M logical dual site operation, there is always a
single primary on which updates occur. A couple of other ways to
accomplish what you want come to mind:
The BCMA backup system, developed for the VA to provide a
local backup for inpatient care on those occasions when the main server is
down, has some ability to share records between VistA systems. I don't
know whether this is strictly one way, or whether it can be used to share records
between servers.
Another option comes at the cost of some extra overnight processing (which can
probably be automated, except that someone needs to review that it executes
correctly each night, and the run book must have procedures specifying what
to do when it doesn't). Imagine a system where all three servers are
logically in sync at the beginning of the day. Each works on its own, in
isolation. At the end of the day, the satellite servers roll back the
entire day's work (accomplished with one command), and ship the extract (the
so called lost transaction report in logical dual site parlance) back to the
main server where application logic reads the extract and applies it to the
main database. An extract of the entire previous day's work is then
shipped to the satellite servers so that all databases are logically in sync
for the next business day.
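The end-of-day cycle described above might look roughly like this on a satellite server. The qualifiers, host names, and paths here are illustrative assumptions; the MUPIP syntax should be checked against the GT.M Administration and Operations Guide.

```shell
# Roll back the entire day's work in one command; GT.M writes the
# rolled-back updates to the "lost transaction" extract file.
mupip journal -rollback -backward -losttrans=/tmp/today.lost "*"

# Ship the extract to the main server, where application logic reads it
# and applies the day's updates to the main database (paths invented).
scp /tmp/today.lost main-server:/inbound/satellite1.lost

# Overnight, the main server ships back an extract of the entire previous
# day's work; replaying it brings all databases logically in sync again.
```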
I don't know what the CPRS GUI's bandwidth requirements are relative
to the bandwidth used by GT.M's GT.CM client/server operation. If GT.CM's
bandwidth needs are comparable, you can run the VistA logic on local servers (or even individual
PCs - client logic can certainly run on a CoLinux virtual machine, for
example) connecting over a secured LAN/VPN to the main server.
Note that from an administrative point of view, you may find
it easier to manage point-to-point links (even if the underlying technology is
DSL) from the satellite offices to the main office, with a link to the
Internet from the main office.
By the way, are you sure that VPN over commercial DSL will not
suffice to run CPRS GUIs at the satellite locations?
-- Bhaskar
-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Kevin Toppenberg
Sent: Mon 2/14/2005 10:20 PM
To: Hardhats Sourceforge
Cc:
Subject: [Hardhats-members] GTM database replication at remote site

I have finally printed out the GTM system administrator's manual and am picking through it. I
have some questions about database replication at a
remote site. I'm guessing this is a Bhaskar
question--but anyone can answer.
From my initial review, it seems that database
replication takes place by sending a journal/log of
events to a remote database. The remote database
applies these events to synchronize the databases.
Here is my situation and question.
We have two sites (and will eventually have 3) that
will be using VistA. Our use of the database will
largely be reading notes that have been input by
transcriptionists. The problem