With GT.M logical dual site operation, there is always a single primary on which updates occur. A couple of other ways to accomplish what you want come to mind:
The BCMA backup system, developed for the VA so that inpatient units have a local fallback for those occasions when the main server is down, has some ability to share records between VistA systems. I don't know whether this is strictly one-way, or whether it can be used to move records between servers in both directions.
At the cost of some extra overnight processing, you could run each site independently during the day and resynchronize at night. (The processing can probably be automated, except that someone needs to verify that it executes correctly each night, and the run book must have procedures specifying what to do when it doesn't.) Imagine a system where all three servers are logically in sync at the beginning of the day. Each works on its own, in isolation. At the end of the day, the satellite servers roll back the entire day's work (accomplished with one command) and ship the extract (the so-called lost transaction report, in logical dual site parlance) back to the main server, where application logic reads the extract and applies it to the main database. An extract of the entire day's work on the main server is then shipped to the satellite servers so that all databases are logically in sync for the next business day.
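To make that concrete, here is a rough sketch of what the nightly cycle might look like. It is only an illustration, not a recipe: it assumes the replicated regions are journaled with replication turned on (so that journal sequence numbers are maintained); the start-of-day sequence number, file names, host names, and transfer method are all placeholders; and the application-level step that re-applies the lost transactions is VistA-specific and not shown.

    # --- On each satellite, at the end of the day ---
    # Roll the day's work back to the journal sequence number recorded
    # when the databases were last in sync, writing the rolled-back
    # updates to a lost transaction file:
    mupip journal -rollback -resync=$START_OF_DAY_SEQNO \
          -losttrans=/backups/site1_$(date +%Y%m%d).lost "*"

    # Ship the lost transaction file to the main server:
    scp /backups/site1_$(date +%Y%m%d).lost mainserver:/incoming/

    # --- On the main server, overnight ---
    # Application logic (VistA-specific, not shown) reads each site's
    # lost transaction file and re-applies those updates to the main
    # database.  Then extract the full day's journal activity to send
    # back to the satellites:
    mupip journal -extract=/outgoing/day_$(date +%Y%m%d).mjf \
          -forward /home/vista/g/mumps.mjl
    scp /outgoing/day_$(date +%Y%m%d).mjf satellite1:/incoming/
    # (Each satellite applies that extract -- again via application
    # logic -- before the start of the next business day.)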
I don't know how CPRS GUI bandwidth requirements compare to the bandwidth used by GT.M's GT.CM client/server operation. If GT.CM's traffic turns out to be comparable or lower, you can run the VistA logic on local servers (or even on individual PCs - the client logic can certainly run on a CoLinux virtual machine, for example) connecting over a secured LAN/VPN to the database on the main server.
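As a rough illustration of that shape - and only as an approximation, since the server name, port, and paths below are placeholders and the exact GT.CM GNP syntax should be checked against the Administration and Operations Guide for your GT.M version:

    # On the main server: start a GT.CM GNP server that accepts
    # remote global accesses on an agreed port:
    $gtm_dist/gtcm_gnp_server -service=6789 -log=/var/log/gtm/gtcm.log

    # On each machine that runs the VistA routines locally:
    # point GT.M at the remote node ...
    export GTCM_mainvista="mainserver:6789"
    # ... and in the global directory, map the data segment to the
    # database file on that node instead of a local file:
    $gtm_dist/mumps -run GDE
    GDE> change -segment DEFAULT -file="mainvista:/home/vista/g/mumps.dat"
    GDE> exit

With that arrangement only database traffic crosses the link; all of the M application logic executes on the local machine.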
Note that from an administrative point of view, you may find it easier to manage point-to-point links (even if the underlying technology is DSL) from the satellite offices to the main office, with a link to the Internet from the main office.
By the way, are you sure that VPN over commercial DSL will not suffice to run CPRS GUIs at the satellite locations?
-- Bhaskar
-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Kevin Toppenberg
Sent: Mon 2/14/2005 10:20 PM
To: Hardhats Sourceforge
Cc:
Subject: [Hardhats-members] GTM database replication at remote site
I have finally printed out the GTM system
administrator's manual, and am picking through it. I
have some questions about database replication at a
remote site. I'm guessing this is a Bhaskar
question--but anyone can answer.
From my initial review, it seems that database
replication takes place by sending a journal/log of
events to a remote database. The remote database
applies these events to synchronize the databases.
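[KSB] That is essentially correct: a Source Server on the primary streams journal records to a Receiver Server on the secondary, which applies them. For concreteness, a standard dual site pair is set up roughly as follows. This is a sketch only; host names, ports, and paths are placeholders, and exact qualifiers vary with GT.M version, so check the Administration and Operations Guide.

    # On the primary: turn on before-image journaling and replication,
    # then start a Source Server pointing at the secondary:
    mupip set -journal="enable,on,before" -region "*"
    mupip set -replication=on -region "*"
    mupip replicate -source -start -secondary=satellite1:1234 \
          -log=/var/log/gtm/source.log

    # On the secondary: same journaling setup, then start a Receiver
    # Server listening on the agreed port:
    mupip set -journal="enable,on,before" -region "*"
    mupip set -replication=on -region "*"
    mupip replicate -receiver -start -listenport=1234 \
          -log=/var/log/gtm/receiver.log

But, as noted above, only the primary in such a pair accepts updates, which is why plain replication does not by itself give you three sites that can all write.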
Here is my situation and question.
We have two sites (and will eventually have 3) that
will be using VistA. Our use of the database will
largely be reading notes that have been entered by
transcriptionists. The problem is that the network
bandwidth/speed between sites is less than we would
like. We currently have a commercial-grade DSL line.
My understanding is that our only other option would
be to purchase a T1 for $$/month, which we don't
really want to do.
[KSB] <...snip...>
