On 7/27/06, Rick Troth <[EMAIL PROTECTED]> wrote:
> On Thu, 27 Jul 2006, Yu Safin wrote:
>> If you are not trying to save disk (we use about 1 GB for all system
>> files), why not use something simpler, such as unison/rsync, to keep
>> all your files synchronized to a master? That way, if the disk takes
>> a hit, you won't see all your systems go down.
> Good suggestion, but it induces network traffic and processor load.
> -- R;
----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
I am not saying it is a good idea, but over HiperSockets, and for
file systems that don't change too often, it might make sense.
We take a different approach, using minidisks.
We build a TECH guest from the SLES distribution. Patches and new
versions of software go to this TECH guest first. Then, under change
control, we copy/clone to our DEV environment for two weeks, then to
the QA environment for real testing. Finally, we copy/clone into PROD.
This is for system-level data. That way we maintain only one version:
the TECH version.
In-house applications go through the same process, but we keep their
directories separate from the SLES distribution's. Minidisks are used
with LVM mount points.
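For illustration, an /etc/fstab along these lines (volume group,
logical volume, and mount-point names are all hypothetical) keeps the
in-house trees on their own logical volumes, apart from the SLES
system file systems, so each can be cloned independently:

```
/dev/sysvg/rootlv    /             ext3  defaults  1 1
/dev/appvg/inhouse   /opt/inhouse  ext3  defaults  1 2
```

With the application volume group on its own minidisks, cloning the
SLES system disks from TECH to DEV/QA/PROD leaves the in-house data
untouched.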